
feat: Add ModelsLab chat model and image generation nodes #5777

Open
adhikjoshi wants to merge 3 commits into FlowiseAI:main from adhikjoshi:ml

Conversation

@adhikjoshi

feat: Add ModelsLab chat model and image generation nodes

Summary

Adds two new Flowise nodes for ModelsLab:

  1. ChatModelsLab (Chat Models) — uncensored Llama 3.1 8B & 70B with 128K context
  2. ModelsLab Image Generator (Utilities) — Flux, SDXL, Playground v2.5 + 7 other models

What is ModelsLab?

ModelsLab is an AI API platform offering uncensored language models, Flux/SDXL image generation, video, voice, and more — at competitive pricing via clean REST APIs.

New Files

| File | Description |
| --- | --- |
| `credentials/ModelsLabApi.credential.ts` | API key credential |
| `nodes/chatmodels/ChatModelsLab/ChatModelsLab.ts` | Chat LLM node |
| `nodes/utilities/ModelsLabImageGenerator/ModelsLabImageGenerator.ts` | Image generation node |
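The credential file follows Flowise's INodeCredential pattern with a password field (per the checklist below). A minimal sketch of what it plausibly looks like, with a simplified interface standing in for Flowise's real types; only the credential name `modelsLabApi` is confirmed by this PR, and the input field name is an assumption:

```typescript
// Hypothetical sketch of credentials/ModelsLabApi.credential.ts.
// The interface below is a simplified stand-in for Flowise's real
// INodeCredential / INodeParams types.
interface INodeParams {
    label: string
    name: string
    type: string
}

class ModelsLabApi {
    label = 'ModelsLab API'
    name = 'modelsLabApi' // credential name shared by both nodes
    version = 1.0
    inputs: INodeParams[] = [
        {
            label: 'ModelsLab API Key',
            name: 'modelsLabApiKey', // assumed field name
            type: 'password' // rendered as a masked field in the Flowise UI
        }
    ]
}
// The real file would export it as: module.exports = { credClass: ModelsLabApi }
```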

ChatModelsLab Node

Category: Chat Models
Credential: modelsLabApi

| Input | Type | Default |
| --- | --- | --- |
| Model Name | options | `llama-3.1-8b-uncensored` |
| Temperature | number | 0.7 |
| Streaming | boolean | true |
| Max Tokens | number | |
| Top P | number | |
| Frequency/Presence Penalty | number | |

Implementation: Uses @langchain/openai ChatOpenAI with configuration.baseURL set to ModelsLab's OpenAI-compatible endpoint — no new npm dependencies.
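The wiring can be sketched as a plain options builder; the base URL below is a placeholder, not ModelsLab's documented endpoint, and the real node would pass the resulting object to `new ChatOpenAI(...)` from `@langchain/openai`:

```typescript
// Hedged sketch of how the chat node likely configures ChatOpenAI against
// ModelsLab's OpenAI-compatible endpoint. The URL is an assumed placeholder.
const MODELSLAB_CHAT_BASE_URL = 'https://modelslab.example.com/v1' // assumption

interface ChatOptions {
    modelName: string
    temperature: number
    streaming: boolean
    openAIApiKey: string
    configuration: { baseURL: string }
}

function buildChatOptions(apiKey: string, modelName: string, temperature?: string): ChatOptions {
    const t = parseFloat(temperature ?? '')
    return {
        modelName,
        temperature: isNaN(t) ? 0.7 : t, // NaN-safe default, so a user-supplied 0 is honoured
        streaming: true,
        openAIApiKey: apiKey,
        configuration: { baseURL: MODELSLAB_CHAT_BASE_URL } // redirects ChatOpenAI to ModelsLab
    }
}
```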

ModelsLab Image Generator Node

Category: Utilities
Credential: modelsLabApi (shared with chat node)
Output: image URL string(s) compatible with downstream Flowise nodes

| Input | Default | Notes |
| --- | --- | --- |
| Prompt | | required |
| Model | `flux` | 10 options including Flux, SDXL, Playground |
| Negative Prompt | standard | optional |
| Width / Height | 1024 | optional |
| Steps / CFG / Seed | standard | optional |
| Number of Images | 1 | optional |
| Poll Timeout | 180s | handles ModelsLab async jobs |

Implementation: Calls POST /api/v6/images/text2img directly, with automatic async polling via /api/v6/fetch/{id} for heavy model jobs.
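That polling loop can be sketched as follows, with the HTTP call abstracted behind an injected status function for illustration. The response shape (`status` plus an `output` array of URLs) is an assumption based on the description above; the real node calls `/api/v6/fetch/{id}` with node-fetch:

```typescript
// Hedged sketch of the async polling the node performs: heavy jobs return
// 'processing' with an id, and the result is re-fetched until it is ready
// or the timeout elapses. The FetchResult shape is assumed.
interface FetchResult {
    status: 'success' | 'processing' | 'error'
    output?: string[] // image URLs when the job is done
}

async function pollForImages(
    fetchStatus: (id: number) => Promise<FetchResult>,
    id: number,
    timeoutMs = 180_000,
    intervalMs = 2_000
): Promise<string[]> {
    const deadline = Date.now() + timeoutMs
    while (Date.now() < deadline) {
        const res = await fetchStatus(id)
        if (res.status === 'success') return res.output ?? []
        if (res.status === 'error') throw new Error('ModelsLab job failed')
        // still processing: wait before the next poll
        await new Promise((resolve) => setTimeout(resolve, intervalMs))
    }
    throw new Error(`ModelsLab job ${id} timed out`)
}
```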

Models Supported

Chat

| Model | Context |
| --- | --- |
| `llama-3.1-8b-uncensored` | 128K |
| `llama-3.1-70b-uncensored` | 128K |

Image

Flux, FLUX Schnell, SDXL, Playground v2.5, DreamShaper 8, Realistic Vision v5, JuggernautXL, Deliberate v3, RevAnimated v2, Dreamlike Photoreal 2.0

Checklist

  • Follows INode / INodeCredential pattern (same as TogetherAI, Groq, etc.)
  • Chat node uses @langchain/openai (already a dependency)
  • Image node uses node-fetch (already a dependency)
  • Credential uses modelsLabApi with password field
  • Both nodes export { nodeClass } / { credClass } correctly
  • Async polling for image generation handled transparently
  • additionalParams: true for advanced options (keeps UI clean)
  • No new npm dependencies

@gemini-code-assist
Contributor

Summary of Changes

Hello @adhikjoshi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands Flowise's AI model offerings by integrating ModelsLab's services. It provides users with new options for both conversational AI, featuring uncensored Llama 3.1 models, and advanced image generation capabilities, including various popular models. The implementation is designed to be efficient, reusing existing libraries and handling asynchronous operations for image generation seamlessly.

Highlights

  • New ModelsLab Integration: Introduced two new Flowise nodes to integrate ModelsLab's AI capabilities: a chat model node and an image generation node.
  • Chat Model Node: Added 'ChatModelsLab' node providing access to uncensored Llama 3.1 8B & 70B models with 128K context, utilizing ModelsLab's OpenAI-compatible endpoint via @langchain/openai.
  • Image Generation Node: Implemented 'ModelsLab Image Generator' node supporting various models like Flux, SDXL, and Playground v2.5, with built-in asynchronous polling for job completion.
  • No New Dependencies: The new nodes leverage existing project dependencies (@langchain/openai and node-fetch), ensuring no additional npm packages are required.
Changelog
  • packages/components/credentials/ModelsLabApi.credential.ts
    • Added a new credential class to securely store and manage ModelsLab API keys.
  • packages/components/nodes/chatmodels/ChatModelsLab/ChatModelsLab.ts
    • Created a new chat model node, 'ChatModelsLab', to interface with ModelsLab's uncensored Llama 3.1 models.
    • Configured the node to use @langchain/openai with ModelsLab's specific base URL for OpenAI compatibility.
    • Included options for model selection, temperature, streaming, max tokens, top P, frequency penalty, and presence penalty.
  • packages/components/nodes/utilities/ModelsLabImageGenerator/ModelsLabImageGenerator.ts
    • Developed a new utility node, 'ModelsLab Image Generator', for text-to-image generation.
    • Implemented logic to handle both synchronous and asynchronous image generation requests, including polling for results.
    • Provided configurable inputs for prompt, model selection, negative prompt, dimensions, inference steps, guidance scale, seed, number of images, and poll timeout.
Activity
  • No human activity (comments, reviews) has been recorded on this pull request yet.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces two new Flowise nodes for ModelsLab, enabling chat and image generation functionalities. The implementation is well-structured, leveraging existing patterns within the codebase. The chat node correctly utilizes an OpenAI-compatible endpoint, and the image generation node includes robust asynchronous polling. I've identified a few minor issues related to handling numeric inputs, particularly when the value is 0, and have provided specific suggestions for improvement in the comments below.

Comment on lines +162 to +168
const width = parseInt(nodeData.inputs?.width as string) || 1024
const height = parseInt(nodeData.inputs?.height as string) || 1024
const numInferenceSteps = parseInt(nodeData.inputs?.numInferenceSteps as string) || 20
const guidanceScale = parseFloat(nodeData.inputs?.guidanceScale as string) || 7.5
const seed = parseInt(nodeData.inputs?.seed as string) || -1
const samples = parseInt(nodeData.inputs?.samples as string) || 1
const pollTimeout = (parseInt(nodeData.inputs?.pollTimeout as string) || 180) * 1000

high

Using the logical OR || operator for default values of numeric inputs is problematic when 0 is a valid input. For example, if a user enters 0 for seed, parseInt('0') || -1 incorrectly sets seed to -1. This affects every numeric input in this block. A more robust approach is to parse the number and fall back to the default only when the result is NaN; a small helper keeps this readable and correctly handles 0, empty strings, and invalid input.

Suggested change
const width = parseInt(nodeData.inputs?.width as string) || 1024
const height = parseInt(nodeData.inputs?.height as string) || 1024
const numInferenceSteps = parseInt(nodeData.inputs?.numInferenceSteps as string) || 20
const guidanceScale = parseFloat(nodeData.inputs?.guidanceScale as string) || 7.5
const seed = parseInt(nodeData.inputs?.seed as string) || -1
const samples = parseInt(nodeData.inputs?.samples as string) || 1
const pollTimeout = (parseInt(nodeData.inputs?.pollTimeout as string) || 180) * 1000
const parseNum = (value: unknown, fallback: number, parse: (s: string) => number = parseInt): number => {
    const parsed = parse(value as string)
    return isNaN(parsed) ? fallback : parsed
}
const width = parseNum(nodeData.inputs?.width, 1024)
const height = parseNum(nodeData.inputs?.height, 1024)
const numInferenceSteps = parseNum(nodeData.inputs?.numInferenceSteps, 20)
const guidanceScale = parseNum(nodeData.inputs?.guidanceScale, 7.5, parseFloat)
const seed = parseNum(nodeData.inputs?.seed, -1)
const samples = parseNum(nodeData.inputs?.samples, 1)
const pollTimeout = parseNum(nodeData.inputs?.pollTimeout, 180) * 1000
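The pitfall is easy to reproduce in isolation:

```typescript
// The falsy-zero pitfall flagged above: a user-supplied 0 survives an
// explicit NaN check but is discarded by the || default.
const withOr = parseInt('0') || -1 // 0 is falsy, so the -1 default wins
const withNaNCheck = ((v: number) => (!isNaN(v) ? v : -1))(parseInt('0')) // keeps 0
const emptyInput = ((v: number) => (!isNaN(v) ? v : -1))(parseInt('')) // empty string still falls back
```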

configuration: {
baseURL: MODELSLAB_CHAT_BASE_URL
},
temperature: parseFloat(temperature) || 0.7,

medium

The use of the logical OR || operator for the default temperature can lead to a bug. If a user sets the temperature to 0, parseFloat(temperature) evaluates to 0, and 0 || 0.7 incorrectly resolves to 0.7, overriding the user's intended input. Note that the nullish coalescing operator ?? would not help here either: parseFloat returns NaN, never null or undefined, for invalid input, so NaN ?? 0.7 stays NaN. An explicit isNaN check handles both a user-supplied 0 and invalid input correctly.

Suggested change
temperature: parseFloat(temperature) || 0.7,
temperature: isNaN(parseFloat(temperature)) ? 0.7 : parseFloat(temperature),
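For reference, both behaviours in isolation:

```typescript
// parseFloat yields NaN (never null or undefined) for invalid input, so
// ?? alone cannot supply a default; an explicit isNaN check can, while
// still honouring a user-supplied 0.
const viaNullish = parseFloat('not a number') ?? 0.7 // stays NaN: ?? does not fire on NaN
const parsed = parseFloat('0')
const viaIsNaN = isNaN(parsed) ? 0.7 : parsed // 0 is kept
```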

safety_checker: 'no',
enhance_prompt: 'yes'
}
if (seed && seed !== -1) payload.seed = seed

medium

The condition if (seed && seed !== -1) will evaluate to false if seed is 0. Since 0 is a valid seed value, this is a bug that prevents it from being sent to the API. Using seed != null ensures that 0 is correctly handled as a valid value.

Suggested change
if (seed && seed !== -1) payload.seed = seed
if (seed != null && seed !== -1) payload.seed = seed
