feat: Add ModelsLab chat model and image generation nodes #5777

adhikjoshi wants to merge 3 commits into FlowiseAI:main
Conversation
Summary of Changes

Hello @adhikjoshi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request expands Flowise's AI model offerings by integrating ModelsLab's services. It gives users new options for both conversational AI, featuring uncensored Llama 3.1 models, and image generation with several popular models. The implementation reuses existing libraries and handles the asynchronous polling that image generation requires.

Highlights
Code Review
This pull request introduces two new Flowise nodes for ModelsLab, enabling chat and image generation functionalities. The implementation is well-structured, leveraging existing patterns within the codebase. The chat node correctly utilizes an OpenAI-compatible endpoint, and the image generation node includes robust asynchronous polling. I've identified a few minor issues related to handling numeric inputs, particularly when the value is 0, and have provided specific suggestions for improvement in the comments below.
```typescript
const width = parseInt(nodeData.inputs?.width as string) || 1024
const height = parseInt(nodeData.inputs?.height as string) || 1024
const numInferenceSteps = parseInt(nodeData.inputs?.numInferenceSteps as string) || 20
const guidanceScale = parseFloat(nodeData.inputs?.guidanceScale as string) || 7.5
const seed = parseInt(nodeData.inputs?.seed as string) || -1
const samples = parseInt(nodeData.inputs?.samples as string) || 1
const pollTimeout = (parseInt(nodeData.inputs?.pollTimeout as string) || 180) * 1000
```
Using the logical OR || operator for default values of numeric inputs is problematic when 0 is a valid input. For example, if a user enters 0 for seed, parseInt('0') || -1 results in seed being incorrectly set to -1. This issue affects all numeric inputs in this block. A more robust approach is to parse the number and then check if it's NaN before applying a default. The suggested implementation, while slightly more verbose, correctly handles all cases including 0, empty strings, and invalid inputs.
Suggested change:

```diff
- const width = parseInt(nodeData.inputs?.width as string) || 1024
- const height = parseInt(nodeData.inputs?.height as string) || 1024
- const numInferenceSteps = parseInt(nodeData.inputs?.numInferenceSteps as string) || 20
- const guidanceScale = parseFloat(nodeData.inputs?.guidanceScale as string) || 7.5
- const seed = parseInt(nodeData.inputs?.seed as string) || -1
- const samples = parseInt(nodeData.inputs?.samples as string) || 1
- const pollTimeout = (parseInt(nodeData.inputs?.pollTimeout as string) || 180) * 1000
+ const width = ((v) => (!isNaN(v) ? v : 1024))(parseInt(nodeData.inputs?.width as string));
+ const height = ((v) => (!isNaN(v) ? v : 1024))(parseInt(nodeData.inputs?.height as string));
+ const numInferenceSteps = ((v) => (!isNaN(v) ? v : 20))(parseInt(nodeData.inputs?.numInferenceSteps as string));
+ const guidanceScale = ((v) => (!isNaN(v) ? v : 7.5))(parseFloat(nodeData.inputs?.guidanceScale as string));
+ const seed = ((v) => (!isNaN(v) ? v : -1))(parseInt(nodeData.inputs?.seed as string));
+ const samples = ((v) => (!isNaN(v) ? v : 1))(parseInt(nodeData.inputs?.samples as string));
+ const pollTimeout = ((v) => (!isNaN(v) ? v : 180))(parseInt(nodeData.inputs?.pollTimeout as string)) * 1000;
```
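If the per-field IIFEs feel noisy, the same NaN-aware fallback can be factored into one small helper. This is an illustrative sketch, not code from the PR; the helper name `numberOrDefault` is made up here, and it uses `parseFloat` for all fields (which also accepts integers) rather than mirroring the `parseInt`/`parseFloat` split above.

```typescript
// Hypothetical helper: parse a numeric input and fall back to a default
// only when parsing yields NaN, so an explicit 0 survives.
function numberOrDefault(raw: string | undefined, fallback: number): number {
    const parsed = parseFloat(raw ?? '')
    return Number.isNaN(parsed) ? fallback : parsed
}

// Example usage mirroring the inputs above:
const seed = numberOrDefault(undefined, -1) // -1 (missing input gets the default)
const width = numberOrDefault('0', 1024)    // 0 (explicit zero is preserved)
```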
```typescript
configuration: {
    baseURL: MODELSLAB_CHAT_BASE_URL
},
temperature: parseFloat(temperature) || 0.7,
```
The use of the logical OR || operator for setting a default temperature can lead to a bug. If a user sets the temperature to 0, parseFloat(temperature) evaluates to 0, and 0 || 0.7 incorrectly resolves to 0.7, overriding the user's intended input. Note that the nullish coalescing operator ?? alone is not a complete fix here: parseFloat returns NaN (never null or undefined) for missing or invalid input, so NaN ?? 0.7 would pass NaN through to the API. An explicit NaN check preserves 0 while still applying the default for invalid input.
Suggested change:

```diff
- temperature: parseFloat(temperature) || 0.7,
+ temperature: !isNaN(parseFloat(temperature)) ? parseFloat(temperature) : 0.7,
```
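A quick sketch of the behavior in question (parseFloat never returns null or undefined, so ?? by itself supplies no default for invalid input, while || incorrectly replaces an explicit 0):

```typescript
// parseFloat yields NaN for invalid input, and NaN is not nullish,
// so `?? 0.7` passes NaN through instead of applying the default.
const invalid = parseFloat('abc')                      // NaN
const viaNullish = invalid ?? 0.7                      // still NaN
const viaCheck = Number.isNaN(invalid) ? 0.7 : invalid // 0.7

// An explicit 0 is dropped by || but preserved by the NaN check:
const zero = parseFloat('0')
const viaOr = zero || 0.7                              // 0.7 (bug)
const fixed = Number.isNaN(zero) ? 0.7 : zero          // 0
```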
```typescript
    safety_checker: 'no',
    enhance_prompt: 'yes'
}
if (seed && seed !== -1) payload.seed = seed
```
The condition if (seed && seed !== -1) will evaluate to false if seed is 0. Since 0 is a valid seed value, this is a bug that prevents it from being sent to the API. Using seed != null ensures that 0 is correctly handled as a valid value.
Suggested change:

```diff
- if (seed && seed !== -1) payload.seed = seed
+ if (seed != null && seed !== -1) payload.seed = seed
```
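A minimal sketch of the difference, assuming seed has already been parsed to a number:

```typescript
const payload: Record<string, unknown> = {}
const seed: number = 0 // 0 is a valid seed

// The truthiness check drops 0, because 0 is falsy:
if (seed && seed !== -1) payload.viaTruthy = seed // not executed

// The nullish check keeps 0:
if (seed != null && seed !== -1) payload.seed = seed // executed
```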
feat: Add ModelsLab chat model and image generation nodes
Summary
Adds two new Flowise nodes for ModelsLab:

- ChatModelsLab (Chat Models)
- ModelsLab Image Generator (Utilities)
What is ModelsLab?
ModelsLab is an AI API platform offering uncensored language models, Flux/SDXL image generation, video, voice, and more — at competitive pricing via clean REST APIs.
New Files
- credentials/ModelsLabApi.credential.ts
- nodes/chatmodels/ChatModelsLab/ChatModelsLab.ts
- nodes/utilities/ModelsLabImageGenerator/ModelsLabImageGenerator.ts

ChatModelsLab Node
Category: Chat Models
Credential: modelsLabApi

Default model: llama-3.1-8b-uncensored

Implementation: Uses @langchain/openai ChatOpenAI with configuration.baseURL set to ModelsLab's OpenAI-compatible endpoint — no new npm dependencies.

ModelsLab Image Generator Node
Category: Utilities
Credential: modelsLabApi (shared with chat node)

Output: image URL string(s) compatible with downstream Flowise nodes

Default model: flux

Implementation: Calls POST /api/v6/images/text2img directly, with automatic async polling via /api/v6/fetch/{id} for heavy model jobs.

Models Supported
Chat
- llama-3.1-8b-uncensored
- llama-3.1-70b-uncensored

Image
Flux, FLUX Schnell, SDXL, Playground v2.5, DreamShaper 8, Realistic Vision v5, JuggernautXL, Deliberate v3, RevAnimated v2, Dreamlike Photoreal 2.0
Checklist

- Follows the existing INode/INodeCredential pattern (same as TogetherAI, Groq, etc.)
- Uses @langchain/openai (already a dependency)
- Uses node-fetch (already a dependency)
- New credential modelsLabApi with password field
- Exports { nodeClass } / { credClass } correctly
- Uses additionalParams: true for advanced options (keeps UI clean)
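The async flow described above (submit a job to POST /api/v6/images/text2img, then poll /api/v6/fetch/{id} until it completes) can be sketched roughly as follows. This is an illustrative sketch, not the PR's code: the pollForImages name and the JobResponse shape are assumptions, and the fetch function is injected so the polling logic stands on its own.

```typescript
// Assumed minimal shape of a ModelsLab job-status response.
type JobResponse = { status: string; output?: string[] }
type FetchJson = (url: string) => Promise<JobResponse>

// Hypothetical polling loop: re-fetch the job status until it
// succeeds, fails, or the overall timeout elapses.
async function pollForImages(
    fetchJson: FetchJson,
    fetchUrl: string,
    intervalMs: number,
    timeoutMs: number
): Promise<string[]> {
    const deadline = Date.now() + timeoutMs
    while (Date.now() < deadline) {
        const res = await fetchJson(fetchUrl)
        if (res.status === 'success' && res.output) return res.output
        if (res.status === 'error' || res.status === 'failed') {
            throw new Error('ModelsLab job failed')
        }
        await new Promise((resolve) => setTimeout(resolve, intervalMs))
    }
    throw new Error(`Polling timed out after ${timeoutMs} ms`)
}
```

Injecting fetchJson keeps the loop testable without network access; in the node itself it would wrap node-fetch against the real /api/v6/fetch/{id} endpoint.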
INode/INodeCredentialpattern (same as TogetherAI, Groq, etc.)@langchain/openai(already a dependency)node-fetch(already a dependency)modelsLabApiwith password field{ nodeClass }/{ credClass }correctlyadditionalParams: truefor advanced options (keeps UI clean)