feat(components): add Avian as a new Chat Model provider #5859

Open

avianion wants to merge 2 commits into FlowiseAI:main from avianion:feat/add-avian-chat-model

Conversation

@avianion

Summary

Adds Avian as a new Chat Model provider for Flowise. Avian provides an OpenAI-compatible LLM inference API with competitive pricing on frontier models.

Changes

  • New node: ChatAvian — chat model node with full support for streaming, function calling, temperature, top-p, frequency/presence penalties, stop sequences, max tokens, and caching
  • New credential: AvianApi — API key credential (AVIAN_API_KEY)
  • Model catalog added to models.json:
    Model                    Context   Max Output   Input Cost   Output Cost
    deepseek/deepseek-v3.2   164K      65K          $0.26/M      $0.38/M
    moonshotai/kimi-k2.5     131K      8K           $0.45/M      $2.20/M
    z-ai/glm-5               131K      16K          $0.30/M      $2.55/M
    minimax/minimax-m2.5     1M        1M           $0.30/M      $1.10/M
  • SVG icon for the node

Provider Details

  • API Base URL: https://api.avian.io/v1
  • Auth: Bearer token via API key
  • Compatibility: Full OpenAI-compatible API (chat completions, streaming, function calling)
  • Implementation: Uses @langchain/openai ChatOpenAI class with custom base URL, following the same pattern as ChatCerebras, ChatDeepseek, and other OpenAI-compatible providers
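The wrapper pattern described above (reusing the ChatOpenAI class with a custom base URL) can be sketched roughly as follows. This is an illustrative, self-contained sketch rather than the PR's actual code: `buildAvianFields` and the `AvianFields` interface are hypothetical names, with fields modeled on `@langchain/openai`'s `ChatOpenAIFields` as quoted later in this review.

```typescript
// Hypothetical sketch: how an OpenAI-compatible provider node typically
// configures the ChatOpenAI class with a custom base URL. The field names
// mirror @langchain/openai's ChatOpenAIFields; this is not the PR's code.
interface AvianFields {
    modelName: string
    apiKey: string
    streaming: boolean
    configuration: { baseURL: string }
}

function buildAvianFields(modelName: string, apiKey: string, streaming?: boolean): AvianFields {
    return {
        modelName,
        apiKey,
        streaming: streaming ?? true, // default to streaming, as the node does
        configuration: { baseURL: 'https://api.avian.io/v1' } // Avian's OpenAI-compatible endpoint
    }
}

// In the real node, these fields would then be passed to `new ChatOpenAI(fields)`.
const fields = buildAvianFields('deepseek/deepseek-v3.2', 'sk-example')
console.log(fields.configuration.baseURL)
```

Because the endpoint speaks the OpenAI chat-completions protocol, only the base URL and credential differ from the stock ChatOpenAI setup, which is why ChatCerebras and ChatDeepseek follow the same pattern.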

Files Changed

  • packages/components/credentials/AvianApi.credential.ts — API key credential
  • packages/components/nodes/chatmodels/ChatAvian/ChatAvian.ts — Chat model node
  • packages/components/nodes/chatmodels/ChatAvian/avian.svg — Node icon
  • packages/components/models.json — Model definitions with pricing

Test plan

  • Verify ChatAvian node appears in the Chat Models category in the canvas
  • Verify credential creation with Avian API key
  • Verify model dropdown populates with all 4 models from models.json
  • Test chat completion with deepseek/deepseek-v3.2 (streaming and non-streaming)
  • Test function calling / tool use with a simple agent flow
  • Verify temperature, max tokens, and other parameters are passed through correctly

cc @HenryHengZJ

Add Avian (https://avian.io) as a new LLM provider node for Flowise.
Avian provides an OpenAI-compatible inference API with competitive
pricing on models including DeepSeek V3.2, Kimi K2.5, GLM-5, and
MiniMax M2.5.

Changes:
- Add ChatAvian node with credential support, model selection,
  streaming, and standard chat parameters
- Add AvianApi credential for API key management
- Add Avian models to models.json with pricing info
- Add Avian SVG icon
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates Avian as a new chat model provider, expanding the available LLM options within Flowise. It enables users to leverage Avian's OpenAI-compatible API for chat completions, streaming, and function calling, providing access to a range of frontier models with detailed pricing information.

Highlights

  • New Chat Model Provider: Introduced Avian as a new chat model provider, offering an OpenAI-compatible LLM inference API with competitive pricing on frontier models.
  • ChatAvian Node: Added a new ChatAvian node with comprehensive support for streaming, function calling, temperature, top-p, frequency/presence penalties, stop sequences, max tokens, and caching.
  • AvianApi Credential: Implemented a new AvianApi credential type for securely managing Avian API keys.
  • Model Catalog Expansion: Updated models.json to include four new Avian models: deepseek/deepseek-v3.2, moonshotai/kimi-k2.5, z-ai/glm-5, and minimax/minimax-m2.5, along with their context, max output, and pricing details.
  • Node Icon: Included a dedicated SVG icon for the ChatAvian node.
Changelog
  • packages/components/credentials/AvianApi.credential.ts
    • Added the AvianAPIAuth class to handle Avian API key credentials.
  • packages/components/models.json
    • Inserted a new chatAvian entry into the model catalog, listing four specific Avian models with their respective costs and descriptions.
  • packages/components/nodes/chatmodels/ChatAvian/ChatAvian.ts
    • Created the ChatAvian_ChatModels class, which wraps the @langchain/openai ChatOpenAI class to connect to the Avian API, including support for various parameters like temperature, max tokens, streaming, and stop sequences.
  • packages/components/nodes/chatmodels/ChatAvian/avian.svg
    • Added the SVG asset for the ChatAvian node's icon.
Activity
  • No human activity has occurred on this pull request yet.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces support for the Avian Chat Model provider. The changes are well-structured and follow the existing patterns for adding new OpenAI-compatible providers, including the credential, node implementation, model definitions, and UI icon. I've identified a couple of areas for improvement regarding the handling of optional numeric parameters, which could lead to unexpected behavior if a user provides 0 as a value or clears a field. I've provided suggestions to make these checks more robust and also noted the use of a deprecated property from the @langchain/openai library.

Comment on lines 147 to 153

    const obj: ChatOpenAIFields = {
        temperature: parseFloat(temperature),
        modelName,
        openAIApiKey: avianApiKey,
        apiKey: avianApiKey,
        streaming: streaming ?? true
    }

high

The temperature property is being unconditionally parsed and assigned. If the temperature input is not provided or is an empty string, parseFloat will result in NaN, which could cause issues when passed to the ChatOpenAI constructor. This should be handled conditionally, similar to other optional numeric parameters.

Additionally, the openAIApiKey property is deprecated in @langchain/openai. You should only use apiKey.

Suggested change

    - const obj: ChatOpenAIFields = {
    -     temperature: parseFloat(temperature),
    -     modelName,
    -     openAIApiKey: avianApiKey,
    -     apiKey: avianApiKey,
    -     streaming: streaming ?? true
    - }
    + const obj: ChatOpenAIFields = {
    +     modelName,
    +     apiKey: avianApiKey,
    +     streaming: streaming ?? true
    + }
    + if (temperature !== undefined && temperature !== null && temperature !== '') {
    +     obj.temperature = parseFloat(temperature)
    + }
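The failure mode the reviewer describes is easy to demonstrate in isolation: `parseFloat` on a missing or empty input yields `NaN`, which the guarded version avoids by omitting the field entirely. A minimal standalone illustration (the function names here are hypothetical, not from the PR):

```typescript
// parseFloat on an empty input produces NaN, which would then be passed to
// the model constructor as an invalid temperature.
function parseTemperatureUnguarded(temperature: string): number {
    return parseFloat(temperature)
}

// Guarded version, mirroring the reviewer's suggestion: only parse when a
// non-empty value was actually provided, otherwise leave the field unset.
function parseTemperatureGuarded(temperature?: string | null): number | undefined {
    if (temperature !== undefined && temperature !== null && temperature !== '') {
        return parseFloat(temperature)
    }
    return undefined
}

console.log(Number.isNaN(parseTemperatureUnguarded(''))) // true: NaN for empty input
console.log(parseTemperatureGuarded(''))                 // undefined: field simply omitted
console.log(parseTemperatureGuarded('0.7'))              // 0.7
```

Returning `undefined` lets the downstream constructor fall back to its own default instead of receiving `NaN`.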

Comment on lines 156 to 158

    if (topP) obj.topP = parseFloat(topP)
    if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
    if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)

medium

The conditional check if (topP) (and similarly for frequencyPenalty and presencePenalty) is problematic because 0 is a valid value for these parameters, but it evaluates to false in a conditional. This means if a user sets these values to 0, they will be ignored. You should use a more robust check to verify that the value is actually provided.

Suggested change

    - if (topP) obj.topP = parseFloat(topP)
    - if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
    - if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
    + if (topP !== undefined && topP !== null && topP !== '') obj.topP = parseFloat(topP)
    + if (frequencyPenalty !== undefined && frequencyPenalty !== null && frequencyPenalty !== '') obj.frequencyPenalty = parseFloat(frequencyPenalty)
    + if (presencePenalty !== undefined && presencePenalty !== null && presencePenalty !== '') obj.presencePenalty = parseFloat(presencePenalty)
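The zero-is-falsy pitfall behind this comment generalizes, so the repeated triple check could be factored into a small helper. The `isProvided` function below is a hypothetical illustration, not part of the PR:

```typescript
// 0 is a valid top-p / penalty value but is falsy in JavaScript, so a naive
// `if (topP)` silently drops a user-set 0. This hypothetical helper treats
// only undefined, null, and the empty string as "not provided".
function isProvided(value: string | number | null | undefined): boolean {
    return value !== undefined && value !== null && value !== ''
}

console.log(isProvided(0))            // true: 0 is a real, user-provided value
console.log(0 ? 'kept' : 'dropped')   // 'dropped': the naive truthiness check loses it
console.log(isProvided(''))           // false: an empty/cleared field
console.log(isProvided(undefined))    // false: input never set
```

With such a helper, each parameter line collapses to `if (isProvided(topP)) obj.topP = parseFloat(topP)`, keeping the fix readable if more optional numeric parameters are added later.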

- Remove deprecated openAIApiKey, use apiKey only
- Guard temperature parseFloat against undefined/null/empty input
- Use proper null checks for topP, frequencyPenalty, presencePenalty
  to allow valid value 0
@avianion
Author

Thanks for the review — fixed all three issues in the latest push.

@avianion
Author

Hey @HenryHengZJ, would love your review on this when you get a chance. Happy to address any feedback!
