Unify OpenAI-compatible providers and custom provider flow#930

Open
PeterDaveHello wants to merge 2 commits into ChatGPTBox-dev:master from PeterDaveHello:refactor/openai-provider-platform

Conversation

@PeterDaveHello
Member

@PeterDaveHello PeterDaveHello commented Feb 24, 2026

User description

This PR unifies OpenAI-compatible API execution into a shared core + provider registry,
and completes the custom provider workflow in API Modes.

Included changes

  • Consolidated OpenAI-compatible request logic into shared execution path
  • Unified provider resolution/secret lookup for built-in and custom providers
  • Added config-driven custom provider flow in API Modes
  • Kept GPT-5/OpenAI token param behavior (max_completion_tokens where required)
  • Hardened migration for legacy configs/sessions (provider IDs, keys, custom URL mapping)
  • Fixed settings/key update edge cases and stream completion signaling consistency

Compatibility

  • No intended breaking behavior for existing users
  • Existing configs are migrated with backward-compatible handling
  • Existing OpenAI-compatible providers continue to work under unified config management

Validation

  • npm run lint passed
  • npm run build passed

GitHub Copilot PR summary

This pull request introduces a major refactor to how OpenAI-compatible and custom API providers are managed and configured. The changes unify the handling of various OpenAI-compatible APIs (including custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML, and legacy GPT Completion APIs) under a single provider system, and add a robust migration and normalization layer for user configuration. This will make it easier to add new providers, manage secrets, and ensure backward compatibility with legacy config data.

Key changes include:

Unification and Refactor of API Provider Handling

  • Consolidated all OpenAI-compatible API providers (custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML, GPT Completion, etc.) to be handled by a single generateAnswersWithOpenAICompatibleApi function, replacing multiple specific imports and execution branches in src/background/index.mjs. This simplifies the codebase and centralizes provider logic. [1] [2] [3] [4]

  • Added isUsingOpenAICompatibleApiSession utility to determine if a session should be routed through the unified OpenAI-compatible API handler.
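The routing idea can be sketched as a simple predicate. This is a hypothetical illustration, not the PR's actual code: the provider ID list and the session shape are assumptions, and the real isUsingOpenAICompatibleApiSession in src/background/index.mjs may decide differently.

```javascript
// Hypothetical sketch: route a session through the unified handler when its
// apiMode points at a known OpenAI-compatible provider. The ID list below is
// assumed from the providers named in this PR.
const OPENAI_COMPATIBLE_PROVIDER_IDS = new Set([
  'openai', 'custom', 'ollama', 'deepseek', 'moonshot',
  'chatglm', 'openrouter', 'aiml',
])

function isUsingOpenAICompatibleApiSession(session) {
  const providerId = session?.apiMode?.providerId
  return typeof providerId === 'string' &&
    OPENAI_COMPATIBLE_PROVIDER_IDS.has(providerId)
}
```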

Configuration Schema and Migration

  • Introduced a configuration schema versioning system and a comprehensive migration function (migrateUserConfig) in src/config/index.mjs. This function normalizes provider IDs, migrates legacy secrets, deduplicates and renames custom providers, and ensures all config data is up-to-date and consistent.

  • Updated the default config to include new fields: customOpenAIProviders, providerSecrets, and configSchemaVersion.
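The migration described above can be sketched in miniature. Everything below is a simplified assumption about the approach: only the field names customUrl, apiKey, providerSecrets, customOpenAIProviders, and configSchemaVersion come from the PR description, and the real migrateUserConfig in src/config/index.mjs is far more thorough (deduplication, renaming, per-provider secret consolidation).

```javascript
// Minimal sketch of the migration idea: promote legacy top-level fields into
// provider-scoped storage, then stamp the schema version so the migration is
// idempotent. Not the PR's actual implementation.
const CONFIG_SCHEMA_VERSION = 1 // assumed value for illustration

function migrateUserConfigSketch(config) {
  const migrated = { providerSecrets: {}, customOpenAIProviders: [], ...config }
  if (migrated.configSchemaVersion >= CONFIG_SCHEMA_VERSION) return migrated
  // Promote a legacy top-level apiKey into the providerSecrets map.
  if (migrated.apiKey && !migrated.providerSecrets.openai) {
    migrated.providerSecrets = { ...migrated.providerSecrets, openai: migrated.apiKey }
  }
  // Promote a legacy customUrl into a custom provider entry (trailing slash trimmed).
  if (migrated.customUrl) {
    migrated.customOpenAIProviders = [
      ...migrated.customOpenAIProviders,
      { id: 'custom', baseUrl: migrated.customUrl.replace(/\/+$/, '') },
    ]
  }
  migrated.configSchemaVersion = CONFIG_SCHEMA_VERSION
  return migrated
}
```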

Popup UI and Provider Management

  • Refactored the API modes section in the popup UI (src/popup/sections/ApiModes.jsx) to use the new provider registry and normalization logic. Added helpers for provider ID normalization, uniqueness, and base URL sanitization.

  • Ensured that UI state and config updates are consistent with the new provider structure, and that legacy custom providers are handled gracefully.


References:

  • Unified provider imports and execution: [1] [2] [3] [4]
  • Added isUsingOpenAICompatibleApiSession:
  • Config migration and normalization:
  • Default config updates:
  • Popup/provider UI refactor:

PR Type

Enhancement, Tests, Documentation


Description

  • Unified OpenAI-compatible provider system: Consolidated all OpenAI-compatible API providers (custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML) into a single shared execution path via generateAnswersWithOpenAICompatibleApi, eliminating duplicate provider-specific implementations

  • Configuration migration and schema versioning: Introduced comprehensive migrateUserConfig function that normalizes provider IDs, consolidates legacy secrets, handles custom provider deduplication, and maintains backward compatibility with existing configs

  • Unified provider registry: Created provider-registry.mjs with centralized provider resolution logic supporting both built-in and custom providers, with secret lookup and endpoint URL construction

  • Custom provider UI workflow: Refactored API Modes popup to support custom provider creation/editing with provider selector dropdown, validation, and dynamic provider list management

  • Provider secret management: Added buildProviderSecretUpdate utility for managing provider secrets across built-in and custom providers with legacy field support

  • API mode normalization: Implemented normalizeApiMode utility to ensure consistent API mode object structure throughout the codebase

  • Stream completion signaling: Fixed stream completion consistency to emit final message with { answer: null, done: true, session } format

  • Comprehensive test coverage: Added 16 config migration tests, 8 provider registry tests, and additional tests for provider utilities and stream handling

  • Multilingual support: Updated localization strings across 12 languages for custom provider UI labels and validation messages

  • Removed 6 provider-specific API modules: Deleted ollama-api.mjs, chatglm-api.mjs, aiml-api.mjs, moonshot-api.mjs, openrouter-api.mjs, and deepseek-api.mjs as functionality consolidated into unified handler
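The stream completion contract described above — every stream ends with `{ answer: null, done: true, session }` — can be illustrated with a small sketch. The `port.postMessage` call and the session shape are assumptions about the surrounding extension code, not the PR's exact implementation.

```javascript
// Sketch of the completion-signaling contract this PR standardizes on:
// a final message of { answer: null, done: true, session }.
function finishStream(port, session) {
  port.postMessage({ answer: null, done: true, session })
}

// A fake port collecting posted messages, for illustration only.
const messages = []
const fakePort = { postMessage: (m) => messages.push(m) }
finishStream(fakePort, { sessionName: 'demo' })
```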


Diagram Walkthrough

flowchart LR
  A["Legacy Provider APIs<br/>Ollama, DeepSeek, etc."] -->|"Consolidated"| B["generateAnswersWithOpenAICompatibleApi"]
  C["User Config<br/>Legacy format"] -->|"migrateUserConfig"| D["Normalized Config<br/>customOpenAIProviders<br/>providerSecrets"]
  B -->|"Resolves via"| E["provider-registry"]
  E -->|"Looks up"| D
  F["API Modes UI"] -->|"Uses"| G["Provider utilities<br/>createProviderId<br/>parseChatCompletionsUrl"]
  G -->|"Updates via"| H["buildProviderSecretUpdate"]
  H -->|"Persists to"| D


File Walkthrough

Relevant files
Enhancement
13 files
index.mjs
Configuration migration and provider registry unification

src/config/index.mjs

  • Added comprehensive configuration migration system with
    migrateUserConfig function to normalize provider IDs, consolidate
    legacy secrets, and handle custom provider deduplication
  • Introduced new config fields: customOpenAIProviders, providerSecrets,
    and configSchemaVersion for unified provider management
  • Implemented migration logic that promotes legacy customUrl and apiKey
    fields into provider-based storage while maintaining backward
    compatibility
  • Enhanced getUserConfig to automatically persist migrated configuration
    changes to storage
+474/-3 
ApiModes.jsx
Custom provider UI editor and provider selector integration

src/popup/sections/ApiModes.jsx

  • Refactored API modes UI to support custom provider management with
    create/edit provider dialogs
  • Added provider selector dropdown and validation for provider name and
    chat completions URL
  • Implemented sanitizeApiModeForSave to ensure proper provider ID and
    API key handling during save
  • Integrated getCustomOpenAIProviders and provider utilities for dynamic
    provider list management
+303/-46
provider-registry.mjs
Unified provider registry and resolution system                   

src/services/apis/provider-registry.mjs

  • Created unified provider registry with getAllOpenAIProviders,
    getProviderById, and resolveOpenAICompatibleRequest functions
  • Implemented provider resolution logic that handles both built-in
    providers (OpenAI, DeepSeek, Ollama, etc.) and custom providers
  • Added getProviderSecret function to resolve API keys from
    providerSecrets map with fallback to legacy fields
  • Provides endpoint URL construction and normalization for various
    provider configurations
+362/-0 
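One behavior the registry's tests exercise (per the test walkthrough below in this PR) is duplicate-/v1 path prevention when constructing endpoint URLs. A hedged sketch of that idea, with an invented helper name — the real registry exposes different functions:

```javascript
// Hypothetical sketch of endpoint construction that avoids producing
// ".../v1/v1/chat/completions" when the base URL already ends in /v1.
const DEFAULT_CHAT_PATH = '/v1/chat/completions'

function joinChatEndpoint(baseUrl) {
  const trimmed = baseUrl.replace(/\/+$/, '')
  // If the base URL already ends with /v1, append only the remainder.
  const path = trimmed.endsWith('/v1')
    ? DEFAULT_CHAT_PATH.replace(/^\/v1/, '')
    : DEFAULT_CHAT_PATH
  return trimmed + path
}
```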
GeneralPart.jsx
Unified provider detection and API key management               

src/popup/sections/GeneralPart.jsx

  • Replaced individual provider detection functions with unified
    resolveOpenAICompatibleRequest call
  • Updated API key input and balance check to use provider-resolved
    credentials via buildProviderSecretUpdate
  • Simplified provider-specific UI logic by consolidating multiple
    isUsingXxxApiModel checks into single isUsingOpenAICompatibleProvider
    flag
+45/-123
openai-api.mjs
OpenAI-compatible API core extraction and unification       

src/services/apis/openai-api.mjs

  • Extracted common OpenAI-compatible request logic into new
    generateAnswersWithOpenAICompatible core function
  • Added generateAnswersWithOpenAICompatibleApi as unified entry point
    that resolves provider configuration and delegates to core
  • Implemented touchOllamaKeepAlive for Ollama-specific keep-alive
    handling
  • Simplified legacy API functions to delegate to unified core
    implementation
+98/-160
index.mjs
Unified background API execution routing                                 

src/background/index.mjs

  • Consolidated multiple provider-specific imports into single
    generateAnswersWithOpenAICompatibleApi call
  • Added isUsingOpenAICompatibleApiSession utility to route all
    OpenAI-compatible providers through unified handler
  • Removed individual provider execution branches for custom, Ollama,
    DeepSeek, Moonshot, ChatGLM, OpenRouter, and AIML APIs
+19/-68 
openai-compatible-core.mjs
OpenAI-compatible API core implementation                               

src/services/apis/openai-compatible-core.mjs

  • Created new core module with generateAnswersWithOpenAICompatible
    function handling both chat and completion endpoints
  • Implements unified request body construction, streaming response
    parsing, and message answer building
  • Supports legacy response field handling via allowLegacyResponseField
    parameter for backward compatibility
  • Centralizes token parameter handling and error management for all
    OpenAI-compatible providers
+160/-0 
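The kind of streaming work such a core centralizes can be illustrated with generic SSE chunk parsing. This is not the PR's code; the `data:` prefix, `[DONE]` sentinel, and `choices[0].delta.content` field follow the publicly documented OpenAI Chat Completions streaming format.

```javascript
// Illustrative parser for one OpenAI-style SSE chunk: collect streamed
// delta text and detect the [DONE] terminator.
function parseSseChunk(chunkText) {
  const answers = []
  let done = false
  for (const line of chunkText.split('\n')) {
    if (!line.startsWith('data: ')) continue
    const payload = line.slice('data: '.length).trim()
    if (payload === '[DONE]') { done = true; continue }
    try {
      const delta = JSON.parse(payload)?.choices?.[0]?.delta?.content
      if (typeof delta === 'string') answers.push(delta)
    } catch {
      // Ignore malformed or partial lines; real code would buffer them.
    }
  }
  return { text: answers.join(''), done }
}
```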
model-name-convert.mjs
API mode normalization and comparison utilities                   

src/utils/model-name-convert.mjs

  • Added normalizeApiMode function to ensure consistent API mode object
    structure with default values
  • Updated apiModeToModelName and getApiModesFromConfig to use normalized
    API modes for safer property access
  • Enhanced isApiModeSelected comparison logic to normalize both sides
    before comparing relevant fields
+40/-5   
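The normalization idea — fill a fixed set of fields with defaults so callers can compare and read properties safely — can be sketched as follows. The exact field list is an assumption; consult src/utils/model-name-convert.mjs for the real one.

```javascript
// Hypothetical sketch of apiMode normalization: always return an object
// with the same keys, defaulting anything missing or malformed.
function normalizeApiModeSketch(apiMode) {
  const mode = apiMode && typeof apiMode === 'object' ? apiMode : {}
  return {
    groupName: mode.groupName ?? '',
    itemName: mode.itemName ?? '',
    isCustom: mode.isCustom ?? false,
    customName: mode.customName ?? '',
    customUrl: mode.customUrl ?? '',
    apiKey: mode.apiKey ?? '',
    providerId: mode.providerId ?? '',
  }
}
```

Because both sides of a comparison pass through the same normalization, `isApiModeSelected`-style checks stop being sensitive to missing optional fields.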
init-session.mjs
Session initialization API mode normalization                       

src/services/init-session.mjs

  • Added import of normalizeApiMode utility function
  • Applied normalizeApiMode to session initialization to ensure API mode
    objects have consistent structure
+6/-2     
custom-api.mjs
Consolidate custom API into unified OpenAI-compatible handler

src/services/apis/custom-api.mjs

  • Removed 84 lines of duplicated OpenAI-compatible API request logic
  • Replaced custom implementation with call to unified
    generateAnswersWithOpenAICompatible function
  • Simplified function to delegate to shared core with provider-specific
    parameters
+11/-92 
provider-secret-utils.mjs
Provider secret update utility for API modes                         

src/popup/sections/provider-secret-utils.mjs

  • New utility module with buildProviderSecretUpdate function for
    managing provider secrets
  • Handles both built-in and custom provider secret updates with legacy
    field support
  • Clears inherited mode-level keys matching old provider secrets
  • Syncs selected mode keys to new provider secret values
+78/-0   
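A simplified sketch of the secret-update idea: write the new key into the providerSecrets map and sync any API modes bound to that provider. Everything here is an assumption about the shape of buildProviderSecretUpdate; the real utility also handles legacy fields and clearing inherited mode-level keys.

```javascript
// Hypothetical sketch: compute the config patch for an API key change,
// updating the provider's secret and any modes that use that provider.
function buildProviderSecretUpdateSketch(config, providerId, newKey) {
  const providerSecrets = { ...config.providerSecrets, [providerId]: newKey }
  const customApiModes = (config.customApiModes ?? []).map((mode) =>
    mode.providerId === providerId ? { ...mode, apiKey: newKey } : mode,
  )
  return { providerSecrets, customApiModes }
}
```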
api-modes-provider-utils.mjs
Provider configuration utilities for API modes UI               

src/popup/sections/api-modes-provider-utils.mjs

  • New utility module with helper functions for provider management in
    API modes UI
  • createProviderId generates unique normalized provider IDs avoiding
    conflicts
  • parseChatCompletionsEndpointUrl validates and parses chat completions
    endpoint URLs
  • resolveProviderChatEndpointUrl resolves endpoint URL from provider
    configuration
+79/-0   
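The conflict-avoiding ID creation the utility describes can be sketched like this. The normalization rules (lowercase, hyphenate, numeric suffix on collision) are assumptions for illustration, not the module's actual rules:

```javascript
// Hypothetical sketch: normalize a display name into an ID, then append a
// numeric suffix until the ID is unused among existing providers.
function createProviderIdSketch(name, existingIds) {
  const base = name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '') || 'provider'
  let candidate = base
  for (let i = 2; existingIds.includes(candidate); i++) candidate = `${base}-${i}`
  return candidate
}
```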
wrappers.mjs
API mode normalization in port listener                                   

src/services/wrappers.mjs

  • Added import of normalizeApiMode function from model-name-convert
    utility
  • Added normalization of session.apiMode during port listener
    registration
  • Ensures API mode is normalized before processing in message handlers
+6/-1     
Tests
5 files
migrate-user-config.test.mjs
Configuration migration test suite                                             

tests/unit/config/migrate-user-config.test.mjs

  • Added 16 comprehensive test cases covering config migration scenarios
    including legacy URL/key consolidation, provider ID normalization, and
    deduplication
  • Tests validate migration of custom providers from legacy customUrl
    fields, secret key consolidation, and backward compatibility
  • Covers edge cases like trailing slashes in URLs, multiple modes with
    same URL but different keys, and schema version updates
+447/-0 
provider-registry.test.mjs
Provider registry resolution test suite                                   

tests/unit/services/apis/provider-registry.test.mjs

  • Added 8 test cases validating provider resolution logic including
    endpoint type detection and URL normalization
  • Tests cover custom provider resolution by normalized ID, legacy URL
    matching, and duplicate /v1 path prevention
  • Validates correct endpoint URL construction for OpenAI, Ollama, and
    custom providers
+161/-0 
provider-secret-utils.test.mjs
Unit tests for provider secret update logic                           

tests/unit/popup/provider-secret-utils.test.mjs

  • New test file with 114 lines covering the buildProviderSecretUpdate
    function
  • Tests validate provider secret updates for both built-in and custom
    providers
  • Tests verify clearing of inherited mode-level keys and syncing of
    selected mode keys
  • Tests ensure unrelated providers are not modified during secret
    updates
+114/-0 
api-modes-provider-utils.test.mjs
Unit tests for API modes provider utilities                           

tests/unit/popup/api-modes-provider-utils.test.mjs

  • New test file with 60 lines covering provider utility functions
  • Tests validate provider ID creation with conflict avoidance
  • Tests verify chat completions endpoint URL parsing and validation
  • Tests ensure endpoint URL resolution from provider base URL and path
+60/-0   
openai-api-compat.test.mjs
Stream completion signaling consistency tests                       

tests/unit/services/apis/openai-api-compat.test.mjs

  • Updated existing test to expect { answer: null, done: true, session }
    final message instead of { done: true }
  • Added new test case for fallback done message when stream ends without
    finish reason
  • New test verifies stream completion signaling consistency for
    incomplete streams
+39/-1   
Documentation
13 files
main.json
Japanese localization updates                                                       

src/_locales/ja/main.json

  • Added Japanese translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Turkish localization updates                                                         

src/_locales/tr/main.json

  • Added Turkish translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Traditional Chinese localization updates                                 

src/_locales/zh-hant/main.json

  • Added Traditional Chinese translations for Custom Provider and
    Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Russian localization updates                                                         

src/_locales/ru/main.json

  • Added Russian translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Italian localization updates                                                         

src/_locales/it/main.json

  • Added Italian translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
French localization updates                                                           

src/_locales/fr/main.json

  • Added French translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
German localization updates                                                           

src/_locales/de/main.json

  • Added German translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Korean localization updates                                                           

src/_locales/ko/main.json

  • Added Korean translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Spanish localization updates                                                         

src/_locales/es/main.json

  • Added Spanish translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Indonesian localization updates                                                   

src/_locales/in/main.json

  • Added Indonesian translations for Custom Provider and Provider UI
    labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Portuguese localization updates                                                   

src/_locales/pt/main.json

  • Added Portuguese translations for Custom Provider and Provider UI
    labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Simplified Chinese localization updates                                   

src/_locales/zh-hans/main.json

  • Added Simplified Chinese translations for Custom Provider and Provider
    UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Localization strings for custom provider UI                           

src/_locales/en/main.json

  • Added new localization string "Custom Provider" for UI display
  • Added new localization string "Provider" for provider selection UI
  • Added new localization string "Please enter a full Chat Completions
    URL" for URL validation error message
+3/-0     
Configuration changes
1 file
openai-provider-mappings.mjs
Provider ID and legacy field mapping configuration             

src/config/openai-provider-mappings.mjs

  • New configuration mapping module with provider ID to legacy API key
    field mappings
  • Defines LEGACY_API_KEY_FIELD_BY_PROVIDER_ID for 8 built-in providers
  • Provides reverse mapping LEGACY_SECRET_KEY_TO_PROVIDER_ID for
    migration
  • Maps legacy group names to normalized provider IDs via
    OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID
+30/-0   
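The mapping style such a module defines can be illustrated as below. The concrete legacy field names are guesses based on the providers listed in this PR; consult src/config/openai-provider-mappings.mjs for the real table. The useful pattern is that the reverse mapping is derived mechanically rather than maintained by hand.

```javascript
// Illustrative sketch only: provider ID -> legacy config field names are
// guessed, not taken from the actual module.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  ollama: 'ollamaApiKey',
  deepseek: 'deepSeekApiKey',
  moonshot: 'moonshotApiKey',
}

// Reverse mapping for migration lookups, derived from the forward table.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(
    ([id, field]) => [field, id],
  ),
)
```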
Additional files
6 files
aiml-api.mjs +0/-12   
chatglm-api.mjs +0/-14   
deepseek-api.mjs +0/-12   
moonshot-api.mjs +0/-12   
ollama-api.mjs +0/-36   
openrouter-api.mjs +0/-12   

Summary by CodeRabbit

  • New Features

    • Full provider management (create/edit/select/validate) and provider-aware API mode UI with provider-specific API key and balance flows.
    • Unified OpenAI‑compatible request/streaming path for chat and completion endpoints.
  • Refactor

    • Consolidated many per‑provider adapters into a single OpenAI‑compatible execution flow.
  • Chores

    • Automatic migration of legacy settings into the new provider model and added localization strings for provider UI.
  • Tests

    • Extensive unit tests added for migration, provider utilities, registry, compatibility core, and related helpers.

@coderabbitai
Contributor

coderabbitai bot commented Feb 24, 2026

📝 Walkthrough

Walkthrough

Consolidates many per-provider adapters into a unified OpenAI-compatible provider registry and SSE-capable core; adds a provider catalog, providerSecrets and migration logic, provider-management UI, apiMode normalization, and routes background execution through the new OpenAI-compatible generation path.

Changes

Cohort / File(s) Summary
Provider registry & core
src/services/apis/provider-registry.mjs, src/services/apis/openai-compatible-core.mjs
Add built-in + custom provider catalog, provider resolution/secret lookup, and implement centralized SSE-capable OpenAI-compatible streaming core generateAnswersWithOpenAICompatible.
OpenAI wrappers & delegation
src/services/apis/openai-api.mjs, src/services/apis/custom-api.mjs
Refactor wrappers to delegate to the OpenAI-compatible core via generateAnswersWithOpenAICompatible/generateAnswersWithOpenAICompatibleApi; add URL normalization and Ollama keep‑alive logic.
Removed per-provider adapters
src/services/apis/*-api.mjs (chatglm, deepseek, moonshot, openrouter, aiml, ollama, ...)
Delete many small adapter modules; their responsibilities moved to provider registry + openai-compatible core.
Background & session routing
src/background/index.mjs, src/services/init-session.mjs, src/services/wrappers.mjs
Add isUsingOpenAICompatibleApiSession detection, normalize apiMode on session init/wrappers, and route generation requests through provider resolution + unified core.
Configuration & mappings
src/config/index.mjs, src/config/openai-provider-mappings.mjs
Introduce migration layer, new config fields (customOpenAIProviders, providerSecrets, configSchemaVersion), legacy secret→provider mappings, and normalization/migration logic.
Popup: provider UI & utils
src/popup/sections/ApiModes.jsx, src/popup/sections/GeneralPart.jsx, src/popup/sections/api-modes-provider-utils.mjs
Add provider selector/editor UI, provider ID/URL parsing utilities, provider-aware apiMode save flow, and provider selection integration in popup.
Provider secret sync
src/popup/sections/provider-secret-utils.mjs
Add buildProviderSecretUpdate to sync providerSecrets, legacy fields, customApiModes, and selected apiMode when API keys change.
Balance & formatting helpers
src/popup/sections/general-balance-utils.mjs
Add formatFiniteBalance utility used by balance UI.
Model-name & apiMode normalization
src/utils/model-name-convert.mjs
Add normalizeApiMode, include providerId in apiMode shapes, and use normalization for comparisons and listings.
Tests added/updated
tests/unit/...
Add comprehensive tests for migration, provider registry, openai-compatible SSE behavior, provider utils, provider-secret sync, and balance formatting; adjust openai-api tests.
I18n
src/_locales/*/main.json
Add translation keys: "Custom Provider", "Provider", and "Please enter a full Chat Completions URL" across locales.

Sequence Diagram(s)

sequenceDiagram
  participant Port as UI Port
  participant BG as Background
  participant Registry as Provider Registry
  participant Core as OpenAI-Compatible Core
  participant Ext as External Provider/API

  Port->>BG: request generation (question, session)
  BG->>Registry: resolveOpenAICompatibleRequest(config, session)
  Registry-->>BG: { providerId, requestUrl, apiKey, endpointType, model }
  BG->>Core: generateAnswersWithOpenAICompatible(params)
  Core->>Ext: open SSE/HTTP stream (requestUrl, headers, body)
  Ext-->>Core: SSE stream (partial tokens/events)
  Core->>Port: post interim answers
  Ext-->>Core: final event / done
  Core->>Port: post final answer + done
  Core->>BG: persist Q/A record

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested labels

Review effort 3/5

Poem

🐰 I hopped through configs, keys in tow,
Providers gathered in a tidy row,
Streams of tokens danced like spring light,
Old adapters merged into one bright flight,
A rabbit cheers — unified and mellow.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 10.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title accurately describes the main change, unifying OpenAI-compatible providers and consolidating the custom provider flow into shared infrastructure.


@gemini-code-assist

Summary of Changes

Hello @PeterDaveHello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the extension's architecture for managing AI API providers. By unifying OpenAI-compatible API interactions and introducing a comprehensive provider registry with robust configuration migration, the changes aim to improve maintainability, simplify the addition of new providers, and enhance the user experience for custom API configurations. This also ensures better consistency in how API keys and endpoints are handled across the application.

Highlights

  • Unified OpenAI-compatible API Handling: Consolidated all OpenAI-compatible API execution logic into a single shared core function, replacing multiple individual API implementations and simplifying the codebase.
  • Enhanced Provider Registry and Configuration: Introduced a robust provider registry for both built-in and custom OpenAI-compatible providers, centralizing their management, secret lookup, and configuration.
  • Comprehensive Configuration Migration: Implemented a new configuration schema versioning system and a migration function to normalize provider IDs, migrate legacy API keys, deduplicate custom providers, and ensure backward compatibility for existing user settings.
  • Improved Custom Provider Workflow in UI: Refactored the API Modes section in the popup UI to support the new provider registry, allowing users to define and manage custom providers with unique IDs, base URLs, and API key handling.
  • Streamlined API Key Management: Centralized API key storage and retrieval through providerSecrets, ensuring that API keys are consistently managed across different providers and API modes, and handling updates gracefully.


Changelog
  • src/background/index.mjs
    • Removed direct imports and calls for individual OpenAI-compatible APIs (e.g., Custom, Ollama, Moonshot, DeepSeek, ChatGLM, OpenRouter, AIML, GPT Completion).
    • Introduced a new isUsingOpenAICompatibleApiSession utility function to determine if a session should use the unified API handler.
    • Replaced multiple conditional API calls with a single call to generateAnswersWithOpenAICompatibleApi for all OpenAI-compatible providers.
  • src/config/index.mjs
    • Added providerId field to the default API mode configuration.
    • Introduced new configuration fields: customOpenAIProviders, providerSecrets, and configSchemaVersion.
    • Implemented a migrateUserConfig function to handle schema versioning, normalize provider IDs, migrate legacy API keys to providerSecrets, and deduplicate/rename custom providers.
    • Updated getUserConfig to apply the migrateUserConfig function and persist any necessary migrations.
  • src/popup/sections/ApiModes.jsx
    • Removed CustomApiKeyGroups and CustomUrlGroups imports, relying on the new provider registry.
    • Added LEGACY_CUSTOM_PROVIDER_ID constant for backward compatibility.
    • Introduced defaultProviderDraft for creating new custom providers.
    • Added utility functions normalizeProviderId, createProviderId, normalizeBaseUrl, and sanitizeApiModeForSave for UI-related provider management.
    • Updated useLayoutEffect to fetch customProviders from the new registry.
    • Refactored the onSaveEditing function to handle new custom provider creation and update existing API modes with provider IDs and secrets.
    • Modified the API mode editing component to include a provider selector for choosing existing custom providers or creating new ones.
    • Adjusted the logic for setting editingApiMode and providerSelector when editing an existing API mode or adding a new one.
  • src/popup/sections/GeneralPart.jsx
    • Removed imports for individual API model checks (e.g., isUsingOpenAiApiModel, isUsingChatGLMApiModel, isUsingMoonshotApiModel, etc.).
    • Introduced LEGACY_API_KEY_FIELD_BY_PROVIDER_ID mapping for secret migration.
    • Added buildProviderSecretUpdate function to manage API key updates across providerSecrets and legacy fields, and to sync API keys in customApiModes and apiMode.
    • Updated getBalance function to use the resolved OpenAI-compatible provider's base URL and API key.
    • Modified the API key input field to dynamically display the API key from the selectedProviderApiKey and update it using buildProviderSecretUpdate.
    • Removed specific API key input fields for ChatGLM, Moonshot, DeepSeek, Ollama, OpenRouter, and AIML, centralizing API key management.
  • src/services/apis/custom-api.mjs
    • Removed extensive custom API logic, including SSE fetching and message parsing.
    • Replaced the custom API implementation with a call to the new generateAnswersWithOpenAICompatible core function, passing relevant parameters.
  • src/services/apis/openai-api.mjs
    • Removed direct SSE fetching and message parsing logic from generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApiCompat.
    • Introduced normalizeBaseUrl and resolveModelName utility functions.
    • Added touchOllamaKeepAlive function for Ollama provider-specific keep-alive requests.
    • Refactored generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApiCompat to utilize the new generateAnswersWithOpenAICompatible core function.
    • Added generateAnswersWithOpenAICompatibleApi as a unified entry point for all OpenAI-compatible providers, resolving provider details and calling the core function.
  • src/services/apis/openai-compatible-core.mjs
    • Added new file implementing generateAnswersWithOpenAICompatible.
    • Implemented a generic SSE fetching and message parsing mechanism for OpenAI-compatible APIs.
    • Provided helper functions buildHeaders, buildMessageAnswer, and hasFinished for consistent API interaction.
    • Dynamically constructs request bodies for both 'chat' and 'completion' endpoint types, handling conversation context and token parameters.
  • src/services/apis/provider-registry.mjs
    • Added new file to manage OpenAI-compatible API providers.
    • Defined DEFAULT_CHAT_PATH, DEFAULT_COMPLETION_PATH, and LEGACY_KEY_BY_PROVIDER_ID constants.
    • Defined BUILTIN_PROVIDER_TEMPLATE for various pre-configured providers (OpenAI, DeepSeek, Moonshot, OpenRouter, AIML, ChatGLM, Ollama, Legacy Custom).
    • Defined OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping for legacy group names.
    • Implemented utility functions like getModelNamePresetPart, resolveProviderIdFromLegacyModelName, isLegacyCompletionModelName, toStringOrEmpty, trimSlashes, ensureLeadingSlash, and joinUrl.
    • Provided functions to build built-in providers (buildBuiltinProviders), normalize custom providers (normalizeCustomProvider), get custom providers (getCustomOpenAIProviders), and get all providers (getAllOpenAIProviders).
    • Implemented functions to resolve provider ID for a session (resolveProviderIdForSession), endpoint type (resolveEndpointTypeForSession), get provider by ID (getProviderById), and retrieve provider secrets (getProviderSecret).
    • Added resolveOpenAICompatibleRequest as a central function to determine the correct provider, endpoint, URL, and API key for a given session.
  • src/services/init-session.mjs
    • Imported normalizeApiMode from model-name-convert.mjs.
    • Applied normalizeApiMode to the apiMode during session initialization to ensure consistency.
  • src/services/wrappers.mjs
    • Imported normalizeApiMode from model-name-convert.mjs.
    • Applied normalizeApiMode to session.apiMode within the registerPortListener to ensure API mode objects are consistently normalized.
  • src/utils/model-name-convert.mjs
    • Added providerId field to the modelNameToApiMode function's return object for custom API modes.
    • Introduced a new normalizeApiMode function to ensure API mode objects have consistent structure and default values.
    • Updated apiModeToModelName to use normalizeApiMode for robust handling of API mode objects.
    • Updated getApiModesFromConfig to use normalizeApiMode when processing custom API modes and to correctly handle Ollama model names.
    • Modified isApiModeSelected to use normalizeForCompare (which leverages normalizeApiMode) for more reliable comparison of API mode objects.
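
The normalization described above can be sketched as follows. This is a hypothetical reconstruction based only on the field names mentioned in this summary (groupName, itemName, customName, customUrl, apiKey, providerId), not the actual implementation in model-name-convert.mjs:

```javascript
// Sketch of normalizeApiMode: coerce an apiMode into a consistent object
// shape with string defaults. Field list is assumed from the PR summary.
const API_MODE_FIELDS = ['groupName', 'itemName', 'customName', 'customUrl', 'apiKey', 'providerId']

function normalizeApiMode(apiMode) {
  const source = apiMode && typeof apiMode === 'object' ? apiMode : {}
  const normalized = {}
  for (const field of API_MODE_FIELDS) {
    const value = source[field]
    // Non-string and missing values fall back to an empty string
    normalized[field] = typeof value === 'string' ? value : ''
  }
  return normalized
}
```

With every apiMode passing through this shape, comparisons like isApiModeSelected can rely on field-by-field equality instead of guarding against missing keys.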

@qodo-code-review

qodo-code-review bot commented Feb 24, 2026

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
SSRF credential exfiltration

Description: resolveUrlFromProvider() allows session.apiMode.customUrl (for customApiModelKeys) to
fully override the request destination while getProviderSecret() can still supply a real
provider API key (e.g., OpenAI) from config.providerSecrets, enabling a
malicious/compromised config (or manually edited storage) to exfiltrate a user's API key
by sending an authenticated request to an attacker-controlled URL (SSRF/credential leak).
provider-registry.mjs [270-318]

Referred Code
function resolveUrlFromProvider(provider, endpointType, config, session) {
  if (!provider) return ''

  const apiModeCustomUrl =
    endpointType === 'chat' &&
    session?.apiMode &&
    typeof session.apiMode === 'object' &&
    session.apiMode.groupName === 'customApiModelKeys'
      ? toStringOrEmpty(session.apiMode.customUrl).trim()
      : ''
  if (apiModeCustomUrl) return apiModeCustomUrl

  if (endpointType === 'completion') {
    if (provider.completionsUrl) return provider.completionsUrl
    if (provider.completionsPath) return joinUrl(provider.baseUrl, provider.completionsPath)
  } else {
    if (provider.chatCompletionsUrl) return provider.chatCompletionsUrl
    if (provider.chatCompletionsPath) return joinUrl(provider.baseUrl, provider.chatCompletionsPath)
  }

  if (provider.id === 'legacy-custom-default') {


 ... (clipped 28 lines)
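
One mitigation pattern for the risk described above (a sketch, not the PR's code; all names here are illustrative) is to withhold a stored built-in provider secret whenever the resolved request URL's origin does not match that provider's configured base URL:

```javascript
// Sketch: only attach a built-in provider's key on its own origin, so a
// session-supplied customUrl cannot redirect an authenticated request.
function sameOrigin(requestUrl, baseUrl) {
  try {
    return new URL(requestUrl).origin === new URL(baseUrl).origin
  } catch {
    return false // malformed URL: treat as mismatch
  }
}

function selectApiKey(provider, requestUrl, secrets) {
  if (!provider || !requestUrl) return ''
  // Custom providers may point anywhere the user configured; built-in
  // providers should only ever receive their key on their own origin.
  if (provider.builtin && !sameOrigin(requestUrl, provider.baseUrl)) return ''
  return secrets[provider.id] || ''
}
```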
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Missing URL validation: getBalance can build a billing URL from an empty/undefined base URL
(selectedProvider?.baseUrl || config.customOpenAiApiUrl), potentially resulting in an
invalid or relative fetch without graceful handling.

Referred Code
const getBalance = async () => {
  const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
  const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${selectedProviderApiKey}`,
    },
  })
  if (response.ok) setBalance((await response.json()).total_available.toFixed(2))
  else {
    const billing = await checkBilling(selectedProviderApiKey, openAiApiUrl)
    if (billing && billing.length > 2 && billing[2]) setBalance(`${billing[2].toFixed(2)}`)
    else openUrl('https://platform.openai.com/account/usage')


Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Sensitive debug logs: The new unified core logs raw SSE messages and conversation history via console.debug,
which can include user content and other sensitive data in logs.

Referred Code
  console.debug('conversation history', { content: session.conversationRecords })
  port.postMessage({ answer: null, done: true, session: session })
}

await fetchSSE(requestUrl, {
  method: 'POST',
  signal: controller.signal,
  headers: buildHeaders(apiKey, extraHeaders),
  body: JSON.stringify(requestBody),
  onMessage(message) {
    console.debug('sse message', message)
    if (finished) return
    if (message.trim() === '[DONE]') {
      finish()
      return
    }
    let data
    try {
      data = JSON.parse(message)
    } catch (error) {
      console.debug('json error', error)


 ... (clipped 1 lines)

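A common fix for the flagged debug logging is a gated logger that is off by default and truncates payloads before they reach the console. This is a minimal sketch (the function and option names are illustrative, not from the PR):

```javascript
// Sketch: a gated, truncating debug logger so raw SSE payloads and
// conversation records do not land in logs by default.
function debugLog(label, payload, { enabled = false, maxLen = 80 } = {}) {
  if (!enabled) return null
  const text = typeof payload === 'string' ? payload : JSON.stringify(payload)
  const clipped = text.length > maxLen ? text.slice(0, maxLen) + '…[truncated]' : text
  console.debug(label, clipped)
  return clipped
}
```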

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
No audit logging: The new migration path writes sensitive configuration/secrets to local storage without any
audit trail context (who/when/what/outcome), making post-incident reconstruction
difficult.

Referred Code
export async function getUserConfig() {
  const options = await Browser.storage.local.get(Object.keys(defaultConfig))
  const { migrated, dirty } = migrateUserConfig(options)
  if (dirty) {
    const payload = {
      customChatGptWebApiUrl: migrated.customChatGptWebApiUrl,
      customApiModes: migrated.customApiModes,
      customOpenAIProviders: migrated.customOpenAIProviders,
      providerSecrets: migrated.providerSecrets,
      configSchemaVersion: migrated.configSchemaVersion,
    }
    if (migrated.apiMode !== undefined) payload.apiMode = migrated.apiMode
    await Browser.storage.local.set(payload)
  }


Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Provider error exposed: The SSE onError path throws an Error built from resp.json() or status text, which may
propagate provider-specific details to user-facing surfaces depending on upstream
handling.

Referred Code
async onError(resp) {
  port.onMessage.removeListener(messageListener)
  port.onDisconnect.removeListener(disconnectListener)
  if (resp instanceof Error) throw resp
  const error = await resp.json().catch(() => ({}))
  throw new Error(!isEmpty(error) ? JSON.stringify(error) : `${resp.status} ${resp.statusText}`)
},

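One way to address this finding is to split the error into a generic user-facing message and an internal detail field that upstream code can log but not display. A sketch under assumed names (this is not the PR's error shape):

```javascript
// Sketch: generic message for display, provider payload kept internally.
function toSafeError(status, statusText, providerBody) {
  const error = new Error(`Request failed (${status || 'network error'})`)
  error.status = status
  // Detailed provider payload retained for debugging, never for display.
  error.providerDetail =
    providerBody && Object.keys(providerBody).length
      ? providerBody
      : { statusText: statusText || '' }
  return error
}
```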

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Weak URL validation: Custom provider baseUrl/endpoint derivation relies on minimal normalization (trim/trailing
slash removal and regex heuristics) without stricter validation (scheme/host
allowlisting), which may permit unsafe or unintended request targets depending on
extension threat model.

Referred Code
function normalizeBaseUrl(value) {
  return String(value || '')
    .trim()
    .replace(/\/+$/, '')
}

function sanitizeApiModeForSave(apiMode) {
  const nextApiMode = { ...apiMode }
  if (nextApiMode.groupName !== 'customApiModelKeys') {
    nextApiMode.providerId = ''
    nextApiMode.apiKey = ''
    return nextApiMode
  }
  if (!nextApiMode.providerId) nextApiMode.providerId = LEGACY_CUSTOM_PROVIDER_ID
  return nextApiMode
}

export function ApiModes({ config, updateConfig }) {
  const { t } = useTranslation()
  const [editing, setEditing] = useState(false)
  const [editingApiMode, setEditingApiMode] = useState(defaultApiMode)


 ... (clipped 86 lines)

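The stricter validation this item asks for could look like the following sketch: require an absolute http(s) URL before accepting a custom base URL. The scheme policy here is an example, not the extension's actual rule:

```javascript
// Sketch: validate a custom provider base URL before saving it.
function validateProviderBaseUrl(value) {
  const trimmed = String(value || '').trim().replace(/\/+$/, '')
  let url
  try {
    url = new URL(trimmed) // rejects relative/malformed input
  } catch {
    return { ok: false, reason: 'not an absolute URL' }
  }
  if (url.protocol !== 'https:' && url.protocol !== 'http:') {
    return { ok: false, reason: `unsupported scheme ${url.protocol}` }
  }
  return { ok: true, baseUrl: trimmed }
}
```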

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@qodo-code-review

qodo-code-review bot commented Feb 24, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
The migration logic is overly complex

The migrateUserConfig function is over 300 lines long and overly complex,
creating a high risk of corrupting user data. Consider a simpler, staged
migration to improve maintainability and reduce risk.

Examples:

src/config/index.mjs [796-1083]
function migrateUserConfig(options) {
  const migrated = { ...options }
  let dirty = false

  if (migrated.customChatGptWebApiUrl === 'https://chat.openai.com') {
    migrated.customChatGptWebApiUrl = 'https://chatgpt.com'
    dirty = true
  }

  const providerSecrets =

 ... (clipped 278 lines)

Solution Walkthrough:

Before:

// src/config/index.mjs
function migrateUserConfig(options) {
  const migrated = { ...options };
  let dirty = false;

  // ... ~300 lines of interwoven logic ...

  // Migrate legacy secrets to providerSecrets
  for (const [legacyKey, providerId] of Object.entries(LEGACY_SECRET_KEY_TO_PROVIDER_ID)) { ... }

  // Normalize and deduplicate custom provider IDs
  // ... complex logic with sets and maps ...

  // Migrate customApiModes, creating new providers on the fly
  for (const apiMode of customApiModes) {
    if (apiMode.groupName !== 'customApiModelKeys') { ... } 
    else {
      // ... create new providers from customUrl ...
      // ... migrate apiMode keys to providerSecrets ...
    }
  }

  // Migrate selected apiMode
  if (migrated.apiMode) { ... }

  return { migrated, dirty };
}

After:

// A simpler, staged migration approach
// src/config/index.mjs

function migrateLegacySecrets(config) {
  // Only handles migrating top-level API keys to providerSecrets.
  // ...
  return newConfig;
}

function migrateCustomProviders(config) {
  // Only handles normalizing custom provider definitions.
  // ...
  return newConfig;
}

function migrateApiModes(config) {
  // Only handles linking apiModes to the new provider IDs.
  // May flag ambiguous cases for user review instead of auto-resolving.
  // ...
  return newConfig;
}

function migrateUserConfig(options) {
    let migrated = { ...options };
    migrated = migrateLegacySecrets(migrated);
    migrated = migrateCustomProviders(migrated);
    migrated = migrateApiModes(migrated);
    // ... other simple migrations ...
    return { migrated, dirty };
}
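
The staged approach sketched above still needs the `dirty` flag the current code returns. One way to derive it (illustrative only, with a stage grounded in the chat.openai.com URL migration shown earlier) is to have each stage return its input unchanged when it has nothing to do, and a new object when it migrates something:

```javascript
// Sketch: run migration stages in order; `dirty` is true when any stage
// returned a changed object (stage names are illustrative).
function runMigrationStages(options, stages) {
  let migrated = { ...options }
  let dirty = false
  for (const stage of stages) {
    const next = stage(migrated)
    if (next !== migrated) {
      migrated = next
      dirty = true
    }
  }
  return { migrated, dirty }
}

// Example stage: the legacy web API URL rewrite from the current code.
const fixWebApiUrl = (config) =>
  config.customChatGptWebApiUrl === 'https://chat.openai.com'
    ? { ...config, customChatGptWebApiUrl: 'https://chatgpt.com' }
    : config
```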
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies the high complexity and risk of the monolithic migrateUserConfig function, which is critical for preventing user data loss and ensuring future maintainability.

Medium
General
Build billing URL from provider request

To correctly query billing information for all providers, derive the billing
endpoint's base URL from the fully resolved requestUrl instead of using the
provider's baseUrl.

src/popup/sections/GeneralPart.jsx [206-212]

-const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
-const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
+// extract billing base from the provider request URL
+const billingBase = selectedProviderRequest.requestUrl.replace(
+  /(\/v1\/chat\/completions|\/v1\/completions).*$/,
+  ''
+)
+const response = await fetch(`${billingBase}/dashboard/billing/credit_grants`, {
   headers: {
     'Content-Type': 'application/json',
     Authorization: `Bearer ${selectedProviderApiKey}`,
   },
 })
Suggestion importance[1-10]: 7


Why: The suggestion correctly points out that the billing check should derive its base URL from the resolved request URL, not the provider's base URL, to correctly handle providers with non-standard URL structures.

Medium
Remove legacy API keys after migration

After migrating legacy API keys to the new providerSecrets map, explicitly
remove the old top-level keys from browser storage to avoid redundant data.

src/config/index.mjs [1100-1101]

 if (migrated.apiMode !== undefined) payload.apiMode = migrated.apiMode
 await Browser.storage.local.set(payload)
 
+const legacyKeysToRemove = Object.keys(LEGACY_SECRET_KEY_TO_PROVIDER_ID).filter(
+  (key) => migrated[key] !== undefined,
+)
+if (legacyKeysToRemove.length > 0) {
+  await Browser.storage.local.remove(legacyKeysToRemove)
+}
+
Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies that legacy API keys are not removed from storage after migration and proposes a clean solution to remove them, improving data hygiene and preventing potential confusion.

Low
Possible issue
Use customModelApiUrl for custom completions

For the legacy custom provider, ensure completion requests use
config.customModelApiUrl before falling back to config.customOpenAiApiUrl to
correctly route to custom endpoints.

src/services/apis/provider-registry.mjs [290-298]

 if (provider.id === 'legacy-custom-default') {
   if (endpointType === 'completion') {
-    return `${trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')}/v1/completions`
+    const customUrl = toStringOrEmpty(config.customModelApiUrl).trim()
+    return customUrl
+      ? `${trimSlashes(customUrl)}/v1/completions`
+      : `${trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')}/v1/completions`
   }
   return (
     toStringOrEmpty(config.customModelApiUrl).trim() ||
     'http://localhost:8000/v1/chat/completions'
   )
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that legacy completion requests were incorrectly using customOpenAiApiUrl instead of customModelApiUrl, and provides a fix to respect the user's custom endpoint for completions.

Medium

Copilot AI left a comment

Pull request overview

This pull request consolidates OpenAI-compatible API providers (OpenAI, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML, Ollama, and custom providers) into a unified provider system with a shared execution core and provider registry. The changes enable config-driven custom provider management and implement comprehensive migration logic for backward compatibility.

Changes:

  • Introduced a unified provider registry (provider-registry.mjs) and shared OpenAI-compatible API execution core (openai-compatible-core.mjs) to consolidate previously scattered provider logic
  • Implemented comprehensive config migration system to normalize provider IDs, migrate legacy secrets to new providerSecrets map, handle custom URL to provider mapping, and ensure data consistency
  • Enhanced API Modes UI to support creating and selecting custom OpenAI-compatible providers with automatic ID generation and validation

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 1 comment.

Show a summary per file
  • src/services/apis/provider-registry.mjs: New provider registry system for resolving provider configurations, secrets, and API endpoints for all OpenAI-compatible providers
  • src/services/apis/openai-compatible-core.mjs: New unified execution path for OpenAI-compatible API requests with streaming support
  • src/services/apis/openai-api.mjs: Refactored to use the new unified execution core, removing duplicate code and adding unified provider routing
  • src/services/apis/custom-api.mjs: Simplified to delegate to the new shared core, removing 80+ lines of duplicate logic
  • src/config/index.mjs: Added comprehensive migrateUserConfig function to normalize provider IDs, migrate secrets, deduplicate providers, and ensure backward compatibility
  • src/popup/sections/GeneralPart.jsx: Updated API key input handling to use unified provider secret management via buildProviderSecretUpdate
  • src/popup/sections/ApiModes.jsx: Enhanced to support creating and managing custom OpenAI providers with validation and automatic ID generation
  • src/utils/model-name-convert.mjs: Added normalizeApiMode function and updated comparison logic to include providerId field
  • src/services/wrappers.mjs: Updated to normalize apiMode when initializing sessions
  • src/services/init-session.mjs: Updated to normalize apiMode during session initialization
  • src/background/index.mjs: Consolidated provider routing through new isUsingOpenAICompatibleApiSession helper and unified API entry point


},
async onStart() {},
async onEnd() {
if (!finished) port.postMessage({ done: true })
Copilot AI Feb 24, 2026

The onEnd handler sends a done: true message only when the stream has not already finished. This differs from other API handlers, such as azure-openai-api.mjs, which always send the final message. While this is likely not harmful, it creates an inconsistent pattern across the codebase. Consider either:

  1. Always sending the message for consistency with other handlers, or
  2. Documenting why this handler needs different behavior.
Suggested change
if (!finished) port.postMessage({ done: true })
// Always send a final done message for consistency with other API handlers
port.postMessage({ done: true })

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request significantly refactors the API mode handling by unifying OpenAI-compatible providers and introducing a custom provider flow. The changes centralize API logic, improve configuration management through schema versioning and migration, and enhance the UI for custom provider creation and selection. The refactoring reduces code duplication and sets a solid foundation for future extensibility. Overall, the changes are well-structured and address key areas for maintainability and user experience.

I am having trouble creating individual review comments, so my feedback is listed below.

src/background/index.mjs (437-458)

medium

The removal of the isUsingCustomModel(session) block and its associated generateAnswersWithCustomApi calls is a direct result of the unification efforts. This simplifies the executeApi function by delegating custom API handling to the new generateAnswersWithOpenAICompatibleApi function, reducing conditional logic and improving code clarity.

src/background/index.mjs (566-568)

medium

The introduction of isUsingOpenAICompatibleApiSession(session) and the call to generateAnswersWithOpenAICompatibleApi consolidates the logic for various OpenAI-compatible providers. This is a core part of the unification, making the executeApi function more streamlined and extensible. All previously separate API calls for OpenAI, Moonshot, ChatGLM, DeepSeek, Ollama, OpenRouter, and AIML are now routed through this single entry point.

src/config/index.mjs (550-556)

medium

The addition of providerId, customOpenAIProviders, providerSecrets, and configSchemaVersion to the defaultConfig is crucial for the new configuration schema and migration system. providerId allows custom API modes to link to specific providers, while customOpenAIProviders and providerSecrets manage custom provider details and their API keys. configSchemaVersion enables future-proof migration logic.

src/config/index.mjs (729-733)

medium

The getUserConfig function has been updated to incorporate the migrateUserConfig logic. This ensures that any legacy configurations are automatically migrated to the new schema upon loading, providing backward compatibility and consistency across user settings. The dirty flag correctly triggers a save if migration occurs.

src/popup/sections/ApiModes.jsx (31)

medium

The addition of providerId to defaultApiMode is necessary to support the new provider registry system. This allows API modes to be explicitly linked to a specific provider, whether built-in or custom, enabling more granular control and clearer configuration.

src/popup/sections/ApiModes.jsx (98)

medium

The useLayoutEffect now includes config.customOpenAIProviders as a dependency. This ensures that the UI re-renders and updates the list of custom providers whenever the underlying configuration for custom OpenAI providers changes, maintaining UI consistency with the backend data.

src/popup/sections/ApiModes.jsx (121-134)

medium

The persistApiMode function is a new utility that centralizes the logic for saving API mode configurations. It handles updating customApiModes and optionally customOpenAIProviders, and ensures that the currently selected API mode is also updated if it's being edited. This improves code organization and reduces duplication.

src/popup/sections/ApiModes.jsx (136-195)

medium

The onSaveEditing function has been completely refactored to support the new custom provider creation and selection flow. It now handles the creation of new providers, assigns provider IDs, and manages the clearing of API keys when switching providers. This is a critical piece of the custom provider workflow, ensuring data integrity and proper linking between API modes and providers.

src/popup/sections/ApiModes.jsx (267-288)

medium

This new UI section allows users to select an existing custom provider or create a new one. The providerSelector state manages the selection, and the options are dynamically populated from customProviders. This is a key enhancement for the custom provider workflow, making it user-friendly.

src/popup/sections/ApiModes.jsx (290-305)

medium

This new block provides input fields for creating a new custom provider, including its name and base URL. This directly supports the custom provider creation flow, allowing users to define new OpenAI-compatible endpoints directly within the UI.

src/popup/sections/GeneralPart.jsx (12)

medium

The removal of isUsingOpenAiApiModel and isUsingChatGLMApiModel from imports reflects the consolidation of these checks into the new isUsingOpenAICompatibleProvider utility. This simplifies the import list and centralizes the logic for identifying OpenAI-compatible API usage.

src/popup/sections/GeneralPart.jsx (96-105)

medium

The LEGACY_API_KEY_FIELD_BY_PROVIDER_ID constant maps provider IDs to their legacy API key field names in the configuration. This is essential for the migration and unified secret management, allowing the system to correctly identify and update API keys for various providers.

src/popup/sections/GeneralPart.jsx (107-181)

medium

The buildProviderSecretUpdate function is a critical new utility for managing API keys. It handles updating providerSecrets and also ensures that legacy API key fields are updated for backward compatibility. Furthermore, it propagates API key changes to customApiModes and the apiMode if they are currently selected, ensuring consistency across the application. This function is vital for the new unified secret management system.
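
The update object this describes can be sketched roughly as below. The mapping entries and field names are illustrative assumptions, not the actual table from GeneralPart.jsx, and the apiMode propagation the comment mentions is omitted for brevity:

```javascript
// Sketch: write a key into providerSecrets and mirror it into the
// legacy top-level config field so older readers keep working.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  openai: 'apiKey',          // assumed legacy field names
  chatglm: 'chatglmApiKey',
}

function buildProviderSecretUpdateSketch(config, providerId, apiKey) {
  const update = {
    providerSecrets: { ...(config.providerSecrets || {}), [providerId]: apiKey },
  }
  const legacyField = LEGACY_API_KEY_FIELD_BY_PROVIDER_ID[providerId]
  if (legacyField) update[legacyField] = apiKey
  return update
}
```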

src/popup/sections/GeneralPart.jsx (197-203)

medium

These new state variables and derived values (selectedProviderRequest, selectedProviderId, selectedProvider, selectedProviderApiKey, isUsingOpenAICompatibleProvider) are crucial for dynamically displaying and managing API keys for the currently selected OpenAI-compatible provider. They leverage the new provider-registry.mjs to resolve provider information.

src/popup/sections/GeneralPart.jsx (206-217)

medium

The getBalance function has been updated to use the selectedProviderApiKey and openAiApiUrl derived from the new provider registry. This ensures that balance checks are performed against the correct API key and endpoint for the currently active OpenAI-compatible provider, centralizing the logic and making it more robust.

src/popup/sections/GeneralPart.jsx (275)

medium

The condition isUsingOpenAICompatibleProvider replaces multiple individual checks for OpenAI-compatible models. This simplifies the conditional rendering logic for the API key input field, making it more concise and easier to understand.

src/popup/sections/GeneralPart.jsx (326-356)

medium

This block now dynamically renders the API key input and balance check button based on whether an isUsingOpenAICompatibleProvider is selected. The selectedProviderApiKey and selectedProviderId are used to manage the input value and conditional rendering of the 'Get'/'Balance' buttons, centralizing API key management.

src/services/apis/custom-api.mjs (1)

medium

The import statement has been simplified to only include generateAnswersWithOpenAICompatible. This is a direct consequence of unifying the OpenAI-compatible API execution into a shared core module, reducing the number of individual API service imports.

src/services/apis/custom-api.mjs (19-29)

medium

The generateAnswersWithCustomApi function has been refactored to delegate its logic to the new generateAnswersWithOpenAICompatible function. This significantly reduces code duplication and centralizes the core logic for handling OpenAI-compatible API requests, including custom ones. The allowLegacyResponseField: true is important for maintaining backward compatibility with custom APIs that might use a response field.

src/services/apis/openai-api.mjs (3-4)

medium

The import of generateAnswersWithOpenAICompatible and resolveOpenAICompatibleRequest from the new core and provider registry modules is key to unifying API handling. This allows openai-api.mjs to leverage the shared logic instead of duplicating it.

src/services/apis/openai-api.mjs (6-8)

medium

The normalizeBaseUrl function is a useful utility to ensure consistency in API endpoint URLs by trimming trailing slashes. This helps prevent issues with URL construction and improves the robustness of API requests.

src/services/apis/openai-api.mjs (10-21)

medium

The resolveModelName function centralizes the logic for determining the actual model name to be used in API requests, especially for custom models and API modes. This ensures that the correct model identifier is passed to the unified API handler.

src/services/apis/openai-api.mjs (24-40)

medium

The touchOllamaKeepAlive function is a new addition specifically for Ollama providers. It sends a small request to the Ollama endpoint to keep the model loaded, which is important for performance and responsiveness. This demonstrates how provider-specific logic can be integrated while still using a unified API flow.
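
A keep-alive ping like this can be sketched against Ollama's public API, whose /api/generate endpoint accepts a keep_alive parameter controlling how long the model stays loaded; the wiring and defaults below are illustrative, not the PR's implementation:

```javascript
// Sketch: build and fire a small Ollama request that keeps a model warm.
function buildOllamaKeepAliveRequest(baseUrl, model, keepAlive = '5m') {
  return {
    url: `${String(baseUrl).replace(/\/+$/, '')}/api/generate`,
    // An empty prompt loads the model without generating output.
    body: JSON.stringify({ model, prompt: '', keep_alive: keepAlive }),
  }
}

async function touchOllamaKeepAliveSketch(baseUrl, model) {
  const { url, body } = buildOllamaKeepAliveRequest(baseUrl, model)
  // Fire-and-forget: a failed ping should never break the chat request.
  await fetch(url, { method: 'POST', body }).catch(() => {})
}
```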

src/services/apis/openai-api.mjs (48-58)

medium

The generateAnswersWithGptCompletionApi function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This significantly reduces code duplication and centralizes the logic for handling completion-type API requests, improving maintainability.

src/services/apis/openai-api.mjs (69-70)

medium

The baseUrl for generateAnswersWithChatgptApiCompat now uses normalizeBaseUrl, ensuring consistent URL formatting. This is a minor but good improvement for robustness.

src/services/apis/openai-api.mjs (89-99)

medium

The generateAnswersWithChatgptApiCompat function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This change eliminates duplicated SSE fetching logic and centralizes the handling of chat-completion type API requests, making the code cleaner and more maintainable.

src/services/apis/openai-api.mjs (109-133)

medium

This new function, generateAnswersWithOpenAICompatibleApi, serves as the unified entry point for all OpenAI-compatible providers. It resolves the correct provider, endpoint type, URL, API key, and model using the new provider registry, then delegates to generateAnswersWithOpenAICompatible. This is the cornerstone of the PR's unification efforts, drastically simplifying the executeApi logic in background/index.mjs.

src/services/apis/openai-compatible-core.mjs (8-14)

medium

The buildHeaders function centralizes the logic for constructing HTTP headers, including the Authorization header with the API key. This promotes consistency and reusability across all OpenAI-compatible API requests.

src/services/apis/openai-compatible-core.mjs (17-28)

medium

The buildMessageAnswer function encapsulates the logic for extracting the answer content from different API response formats (delta, content, text, and legacy response field). This abstraction makes the SSE onMessage handler cleaner and more adaptable to various provider responses.
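
The fallback order described here can be sketched as follows; the field precedence is assumed from this summary (streamed delta, then message content, then completion text, then the legacy `response` field when enabled):

```javascript
// Sketch of the answer-extraction fallback chain for mixed response shapes.
function extractAnswer(data, { allowLegacyResponseField = false } = {}) {
  const choice = data?.choices?.[0]
  if (typeof choice?.delta?.content === 'string') return choice.delta.content   // chat stream chunk
  if (typeof choice?.message?.content === 'string') return choice.message.content // full chat message
  if (typeof choice?.text === 'string') return choice.text                     // completion endpoint
  if (allowLegacyResponseField && typeof data?.response === 'string') return data.response
  return ''
}
```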

src/services/apis/openai-compatible-core.mjs (31-33)

medium

The hasFinished function provides a clear and concise way to determine if an API response indicates the end of a stream, based on the finish_reason field. This improves readability in the SSE onMessage handler.
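In its simplest form, such a check is a one-liner over finish_reason; the sketch below is an assumption of the shape, and the real function may accept additional terminal markers.

```javascript
// Minimal sketch of the described stream-completion check.
function hasFinished(data) {
  return Boolean(data?.choices?.[0]?.finish_reason)
}
```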

src/services/apis/openai-compatible-core.mjs (49-159)

medium

This new generateAnswersWithOpenAICompatible function is the core of the unified API handling. It abstracts away the details of constructing request bodies for both 'completion' and 'chat' endpoints, handling token parameters, and managing the SSE connection. This significantly reduces code duplication across different OpenAI-compatible API services and makes it easier to add new providers in the future.

src/services/apis/provider-registry.mjs (4-13)

medium

The LEGACY_KEY_BY_PROVIDER_ID constant is crucial for the migration process, mapping older configuration keys to the new provider IDs. This ensures that existing user configurations are correctly interpreted and migrated to the unified secret management system.

src/services/apis/provider-registry.mjs (15-85)

medium

The BUILTIN_PROVIDER_TEMPLATE defines a standardized structure for all built-in OpenAI-compatible providers. This template includes essential information like ID, name, base URLs, and paths, which is fundamental for the new provider registry and unified API handling. It also includes allowLegacyResponseField for backward compatibility.

src/services/apis/provider-registry.mjs (88-98)

medium

The OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping is vital for translating legacy API mode group names into the new standardized provider IDs. This enables the system to correctly identify which provider corresponds to a given API mode, facilitating the migration and unification.

src/services/apis/provider-registry.mjs (101-134)

medium

The getModelNamePresetPart and resolveProviderIdFromLegacyModelName functions are essential for backward compatibility. They allow the system to infer the correct provider ID from older, less structured model names, ensuring that existing user configurations continue to work seamlessly with the new provider registry.
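For illustration, assuming the legacy convention where model names embed a preset prefix before the first '-' (e.g. `ollamaModel-` followed by the model name, as noted for Ollama below), the pair of helpers could work roughly like this. The mapping entries here are hypothetical; the real table in provider-registry.mjs is larger.

```javascript
// Assumed convention: preset prefix before the first '-'.
function getModelNamePresetPart(modelName) {
  return String(modelName || '').split('-')[0]
}

const PRESET_TO_PROVIDER_ID = {
  ollamaModel: 'ollama', // hypothetical entries for illustration
  customModel: 'custom',
}

function resolveProviderIdFromLegacyModelName(modelName) {
  return PRESET_TO_PROVIDER_ID[getModelNamePresetPart(modelName)] || ''
}
```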

src/services/apis/provider-registry.mjs (136-139)

medium

The isLegacyCompletionModelName function helps identify models that historically used the 'completion' endpoint. This is important for correctly routing requests to the appropriate endpoint type within the unified API handler.

src/services/apis/provider-registry.mjs (141-157)

medium

The toStringOrEmpty, trimSlashes, ensureLeadingSlash, and joinUrl utility functions provide robust and consistent string manipulation for URLs and paths. These are crucial for correctly constructing API endpoints, especially when dealing with user-provided custom URLs and paths.
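A minimal sketch of what these four helpers plausibly do, inferred from their names; the actual implementations in provider-registry.mjs may handle more edge cases.

```javascript
// Plausible one-line sketches, not the real implementations.
const toStringOrEmpty = (value) => (typeof value === 'string' ? value : '')
const trimSlashes = (value) => toStringOrEmpty(value).replace(/\/+$/, '')
const ensureLeadingSlash = (value) => (value.startsWith('/') ? value : `/${value}`)
const joinUrl = (base, path) => trimSlashes(base) + ensureLeadingSlash(toStringOrEmpty(path))
```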

src/services/apis/provider-registry.mjs (160-184)

medium

The buildBuiltinProviders function dynamically constructs the list of built-in providers, incorporating user-specific configuration values like customOpenAiApiUrl and ollamaEndpoint. This ensures that built-in providers are correctly configured based on user settings.

src/services/apis/provider-registry.mjs (186-201)

medium

The normalizeCustomProvider function ensures that custom provider configurations are consistently structured and validated. It assigns default values and normalizes fields like id, name, baseUrl, and paths, which is essential for reliable custom provider management.

src/services/apis/provider-registry.mjs (203-208)

medium

The getCustomOpenAIProviders function retrieves and normalizes custom provider configurations from the user's settings. This provides a clean, consistent list of custom providers for use throughout the application.

src/services/apis/provider-registry.mjs (210-212)

medium

The getAllOpenAIProviders function combines both built-in and custom providers into a single, comprehensive list. This unified list is then used by other functions to resolve provider details.

src/services/apis/provider-registry.mjs (215-228)

medium

The resolveProviderIdForSession function is crucial for determining the correct provider ID based on the current session's API mode or model name. It handles both new API mode structures and legacy model names, ensuring that the appropriate provider is identified for each request.

src/services/apis/provider-registry.mjs (230-238)

medium

The resolveEndpointTypeForSession function determines whether a session should use a 'chat' or 'completion' endpoint. This is important for correctly formatting the request body in the unified API handler, especially for legacy completion models.

src/services/apis/provider-registry.mjs (240-246)

medium

The getProviderById function retrieves a provider's full configuration by its ID. It filters out disabled providers, ensuring that only active and valid providers are used.

src/services/apis/provider-registry.mjs (248-268)

medium

The getProviderSecret function centralizes the logic for retrieving API keys. It prioritizes API keys specified directly in the apiMode, then checks the new providerSecrets map, and finally falls back to legacy API key fields. This ensures that the correct API key is used for each provider, supporting both new and old configurations.
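The three-tier lookup described above can be sketched as follows. The field names (apiMode.apiKey, config.providerSecrets, the legacy-field map) and the example legacy field names are assumptions drawn from the summary, not the exact code in provider-registry.mjs.

```javascript
// Hypothetical legacy field names, for illustration only.
const LEGACY_FIELD_BY_PROVIDER_ID = {
  openai: 'apiKey',
  deepseek: 'deepSeekApiKey',
}

function getProviderSecret(config, providerId, apiMode) {
  if (apiMode?.apiKey) return apiMode.apiKey // 1. key pinned on the API mode
  if (config?.providerSecrets?.[providerId]) return config.providerSecrets[providerId] // 2. unified secret map
  const legacyField = LEGACY_FIELD_BY_PROVIDER_ID[providerId]
  return (legacyField && config?.[legacyField]) || '' // 3. legacy config field
}
```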

src/services/apis/provider-registry.mjs (270-301)

medium

The resolveUrlFromProvider function constructs the full API request URL based on the provider's configuration and the endpoint type (chat or completion). It handles custom URLs specified in the API mode, as well as base URLs and paths defined in the provider template, ensuring accurate endpoint resolution.

src/services/apis/provider-registry.mjs (303-317)

medium

The resolveOpenAICompatibleRequest function is the main entry point for obtaining all necessary information to make an OpenAI-compatible API request. It orchestrates the resolution of provider ID, provider details, endpoint type, request URL, and API key, providing a complete request object to the unified API handler.

src/services/init-session.mjs (5)

medium

The import of normalizeApiMode is essential for ensuring that API mode objects are consistently structured and validated when a new session is initialized. This helps prevent unexpected behavior due to malformed API mode data.

src/services/init-session.mjs (75)

medium

The apiMode property is now explicitly normalized using normalizeApiMode during session initialization. This ensures that all API mode objects conform to a consistent structure, which is critical for the new provider registry and unified API handling.

src/services/wrappers.mjs (13)

medium

The import of normalizeApiMode is crucial here to ensure that any API mode object received or processed by the port listener is normalized. This guarantees consistency and prevents issues with potentially malformed API mode data.

src/services/wrappers.mjs (110)

medium

The session.apiMode is now explicitly normalized using normalizeApiMode within the port listener. This ensures that any API mode passed into the executeApi function is in a consistent and expected format, which is vital for the new unified API handling logic.

src/utils/model-name-convert.mjs (81-93)

medium

The new normalizeApiMode function ensures that API mode objects always have a consistent structure and default values. This is critical for reliable processing of API modes throughout the application, especially with the introduction of new fields like providerId and the migration logic.
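A defaults-filling normalizer of this kind might look like the sketch below. The field list here is assumed from the summary (providerId plus the apiKey/customUrl fields mentioned later); the real list in model-name-convert.mjs is likely longer.

```javascript
// Hedged sketch: fill defaults without clobbering existing values.
function normalizeApiMode(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return apiMode
  return {
    groupName: '',
    itemName: '',
    isCustom: false,
    providerId: '',
    apiKey: '',
    customUrl: '',
    ...apiMode, // existing fields win over defaults
  }
}
```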

src/utils/model-name-convert.mjs (97-98)

medium

The apiModeToModelName function now explicitly normalizes the apiMode object at the beginning. This ensures that any subsequent logic in the function operates on a well-formed API mode object, improving robustness.

src/utils/model-name-convert.mjs (111-115)

medium

The getApiModesFromConfig function now normalizes config.customApiModes before processing them. This ensures that custom API modes are always in a consistent format, which is important for accurate filtering and mapping.

src/utils/model-name-convert.mjs (132-133)

medium

The logic for Ollama model names has been updated to explicitly set modelName to ollamaModel- followed by the configured Ollama model name. This ensures consistency in how Ollama models are identified within the system, aligning with the new provider registry.

src/utils/model-name-convert.mjs (139)

medium

The normalizedCustomApiModes are now correctly filtered and included in the returned array. This ensures that all custom API modes, after normalization, are properly accounted for when retrieving the list of available API modes.

src/utils/model-name-convert.mjs (148-157)

medium

The normalizeForCompare helper function is introduced to create a consistent string representation of API mode objects for comparison. This is crucial for accurately determining if an API mode is selected, especially with the addition of new fields like providerId and the need to ignore transient fields like apiKey and customUrl during comparison.
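One way such a comparison key can be built is sketched below: drop the transient fields and serialize with sorted keys so object ordering cannot affect the result. The exact field set ignored by the real helper may differ.

```javascript
// Sketch of an order-insensitive comparison key ignoring transient fields.
function normalizeForCompare(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return ''
  const { apiKey, customUrl, ...rest } = apiMode // transient fields dropped
  return JSON.stringify(
    Object.keys(rest)
      .sort() // stable key order, so equivalent objects serialize identically
      .map((key) => [key, rest[key]]),
  )
}
```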

src/utils/model-name-convert.mjs (160)

medium

The isApiModeSelected function now uses the normalizeForCompare helper to compare API mode objects. This ensures that comparisons are robust and accurate, preventing false negatives or positives due to differences in non-essential fields or object ordering.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🧹 Nitpick comments (4)
src/popup/sections/GeneralPart.jsx (3)

197-203: Redundant provider lookup — resolveOpenAICompatibleRequest already returns provider.

Per provider-registry.mjs (lines 302-317), resolveOpenAICompatibleRequest returns { providerId, provider, endpointType, requestUrl, apiKey }. The getProviderById call on line 200 duplicates the lookup that was already performed.

♻️ Suggested simplification
   const selectedProviderRequest = resolveOpenAICompatibleRequest(config, config)
   const selectedProviderId = selectedProviderRequest?.providerId || ''
-  const selectedProvider = selectedProviderRequest
-    ? getProviderById(config, selectedProviderRequest.providerId)
-    : null
+  const selectedProvider = selectedProviderRequest?.provider || null
   const selectedProviderApiKey = selectedProviderRequest?.apiKey || ''
   const isUsingOpenAICompatibleProvider = Boolean(selectedProviderRequest)

This also means getProviderById can be removed from the imports on line 29 if unused elsewhere in this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 197 - 203, The code performs
a redundant provider lookup: resolveOpenAICompatibleRequest already returns a
provider object, so remove the extra getProviderById call and use the provider
from selectedProviderRequest directly. Replace usage of selectedProvider
(currently assigned via getProviderById) with selectedProviderRequest.provider,
keep selectedProviderRequest, selectedProviderApiKey and
isUsingOpenAICompatibleProvider as-is, and remove getProviderById from imports
if no other references remain in this file.

205-219: Billing URL construction assumes baseUrl does not include a path prefix.

Line 207 concatenates openAiApiUrl + '/dashboard/billing/credit_grants', and checkBilling (line 54) uses apiUrl + '/v1/dashboard/billing/...'. This works correctly only when baseUrl is a bare origin like https://api.openai.com.

Since the Get/Balance buttons are gated to selectedProviderId === 'openai' (line 337), this is safe today — but worth a brief comment in case the billing check is ever extended to other providers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 205 - 219, getBalance
constructs billing paths by simple string concatenation using
selectedProvider?.baseUrl or config.customOpenAiApiUrl which breaks if baseUrl
contains a path prefix; change getBalance to build the billing URL with a robust
approach (e.g., use the URL constructor or ensure path joining) instead of naive
concatenation, mirroring how checkBilling constructs URLs (apiUrl + '/v1/...');
update the code in getBalance (and note behavior in a brief comment near
getBalance and checkBilling) to use new URL('/dashboard/billing/credit_grants',
openAiApiUrl) or equivalent so it works when baseUrl includes a path, and keep
existing fallbacks (checkBilling and openUrl) unchanged.

107-181: Consider extracting buildProviderSecretUpdate to a shared utility module.

This ~75-line function contains non-trivial business logic for provider secret syncing, legacy key migration, and API mode key reconciliation. Placing it alongside the UI component makes this file harder to test in isolation and harder to reuse if other entry points need the same logic.

A module like src/services/apis/provider-secrets.mjs (or similar) would be a better home. As per coding guidelines, src/services/apis/**/*.{js,mjs}: "API client integrations should be placed in src/services/apis/" — while this isn't strictly an API client, it's tightly coupled to the provider registry's data model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 107 - 181, The function
buildProviderSecretUpdate contains complex provider-secret sync and migration
logic and should be moved out of the UI component into a shared service module
for testability and reuse; extract buildProviderSecretUpdate (and any helper
symbols it relies on like LEGACY_API_KEY_FIELD_BY_PROVIDER_ID and
isApiModeSelected) into a new module under src/services/apis (e.g.,
src/services/apis/provider-secrets.mjs), update the GeneralPart.jsx import to
use the new exported function, adjust exports so unit tests can import the logic
directly, and ensure any references to config shape remain unchanged so behavior
stays identical.
src/popup/sections/ApiModes.jsx (1)

42-70: normalizeBaseUrl is duplicated in openai-api.mjs.

The identical helper (strip trailing slashes from a URL string) exists at src/services/apis/openai-api.mjs lines 6-8. Consider extracting it to a shared utility to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 42 - 70, The helper
normalizeBaseUrl is duplicated (also present in openai-api.mjs); extract it into
a shared utility module (e.g., utils/url.js or similar), export
normalizeBaseUrl, replace the local definitions in ApiModes.jsx and
openai-api.mjs with imports of the shared function, update any imports/usages to
the new export name, and remove the duplicate implementations so both
createProviderId/normalizeProviderId callers use the single centralized
normalizeBaseUrl function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/background/index.mjs`:
- Around line 8-9: Update the import for generateAnswersWithOpenAICompatibleApi
to include the explicit .mjs extension (match the style used for
generateAnswersWithAzureOpenaiApi); change the import source
'../services/apis/openai-api' to '../services/apis/openai-api.mjs' so the module
loader can resolve the file at runtime and remain consistent with other imports
(reference symbol: generateAnswersWithOpenAICompatibleApi).

In `@src/popup/sections/ApiModes.jsx`:
- Around line 144-147: The save handler for new providers silently returns when
providerName or providerBaseUrl is empty (code around providerSelector ===
'__new__', providerDraft, normalizeBaseUrl); update the component to perform
explicit validation: compute a boolean like isNewProviderValid =
Boolean(providerDraft.name.trim() && normalizeBaseUrl(providerDraft.baseUrl))
and use it to disable the Save button and/or set an inline validation message
state (e.g., providerError) shown near the name/baseUrl inputs; also ensure the
save handler shows that validation message when invoked with invalid input
instead of simply returning.
- Around line 337-348: When opening the editor for an apiMode in ApiModes.jsx,
validate the resolved providerId (computed in the isCustomApiMode branch)
against the current customProviders list and if it no longer exists, replace it
with LEGACY_CUSTOM_PROVIDER_ID before calling setEditingApiMode and
setProviderSelector; specifically, update the logic around
isCustomApiMode/providerId so you check customProviders for a matching
providerId and fall back to LEGACY_CUSTOM_PROVIDER_ID, then call
setEditingApiMode({...defaultApiMode, ...apiMode, providerId:
validatedProviderId}) and setProviderSelector(validatedProviderId) to avoid
persisting a stale ID on save.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 96-105: Extract the canonical provider→field mapping currently
defined as LEGACY_API_KEY_FIELD_BY_PROVIDER_ID and the inverse
LEGACY_SECRET_KEY_TO_PROVIDER_ID into a single shared constants module, export
the provider→field map (e.g., LEGACY_API_KEY_FIELD_BY_PROVIDER_ID) from that
module, and replace local copies in GeneralPart.jsx and the config module by
importing that constant; then derive the inverse mapping programmatically (e.g.,
by reversing the imported provider→field map into
LEGACY_SECRET_KEY_TO_PROVIDER_ID where needed) so future provider additions only
require updating one source of truth.

In `@src/services/apis/openai-api.mjs`:
- Around line 24-40: The touchOllamaKeepAlive function issues a fetch without a
timeout which can hang; update touchOllamaKeepAlive to create an
AbortController, pass controller.signal into fetch, start a timer (e.g.,
setTimeout) to call controller.abort() after a short configurable timeout
(milliseconds), ensure the timer is cleared on success/failure, and handle the
abort error so the caller (e.g., generateAnswersWithOpenAICompatibleApi) won't
wait indefinitely; keep the same request shape and headers but include the
signal and proper cleanup.

In `@src/services/apis/provider-registry.mjs`:
- Around line 160-183: The openai provider base URL can end with "/v1" and cause
duplicate "/v1" when later joined; in buildBuiltinProviders normalize
config.customOpenAiApiUrl by stripping any trailing "/v1" or "/v1/" before
calling trimSlashes (i.e., compute a normalized variable from
config.customOpenAiApiUrl that removes a trailing "/v1" segment if present, then
pass that into trimSlashes for the openai branch); reference
buildBuiltinProviders, config.customOpenAiApiUrl, and trimSlashes when making
this change.
- Around line 270-280: resolveUrlFromProvider currently unconditionally prefers
session.apiMode.customUrl for customApiModelKeys, which causes any provider
config (including provider.providerId) to be ignored; update the logic in
resolveUrlFromProvider to only use session.apiMode.customUrl when there is no
provider.providerId (or when an explicit override flag like
session.apiMode.forceCustomUrl is truthy), otherwise fall back to reading the
provider's configured URL from the provider/config arguments; reference
resolveUrlFromProvider, session.apiMode.customUrl, and provider.providerId when
making the change.

---

Nitpick comments:
In `@src/popup/sections/ApiModes.jsx`:
- Around line 42-70: The helper normalizeBaseUrl is duplicated (also present in
openai-api.mjs); extract it into a shared utility module (e.g., utils/url.js or
similar), export normalizeBaseUrl, replace the local definitions in ApiModes.jsx
and openai-api.mjs with imports of the shared function, update any
imports/usages to the new export name, and remove the duplicate implementations
so both createProviderId/normalizeProviderId callers use the single centralized
normalizeBaseUrl function.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 197-203: The code performs a redundant provider lookup:
resolveOpenAICompatibleRequest already returns a provider object, so remove the
extra getProviderById call and use the provider from selectedProviderRequest
directly. Replace usage of selectedProvider (currently assigned via
getProviderById) with selectedProviderRequest.provider, keep
selectedProviderRequest, selectedProviderApiKey and
isUsingOpenAICompatibleProvider as-is, and remove getProviderById from imports
if no other references remain in this file.
- Around line 205-219: getBalance constructs billing paths by simple string
concatenation using selectedProvider?.baseUrl or config.customOpenAiApiUrl which
breaks if baseUrl contains a path prefix; change getBalance to build the billing
URL with a robust approach (e.g., use the URL constructor or ensure path
joining) instead of naive concatenation, mirroring how checkBilling constructs
URLs (apiUrl + '/v1/...'); update the code in getBalance (and note behavior in a
brief comment near getBalance and checkBilling) to use new
URL('/dashboard/billing/credit_grants', openAiApiUrl) or equivalent so it works
when baseUrl includes a path, and keep existing fallbacks (checkBilling and
openUrl) unchanged.
- Around line 107-181: The function buildProviderSecretUpdate contains complex
provider-secret sync and migration logic and should be moved out of the UI
component into a shared service module for testability and reuse; extract
buildProviderSecretUpdate (and any helper symbols it relies on like
LEGACY_API_KEY_FIELD_BY_PROVIDER_ID and isApiModeSelected) into a new module
under src/services/apis (e.g., src/services/apis/provider-secrets.mjs), update
the GeneralPart.jsx import to use the new exported function, adjust exports so
unit tests can import the logic directly, and ensure any references to config
shape remain unchanged so behavior stays identical.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ab628e1 and 36ed81e.

📒 Files selected for processing (11)
  • src/background/index.mjs
  • src/config/index.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/services/apis/custom-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs

Comment on lines +8 to 9
import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
import { generateAnswersWithAzureOpenaiApi } from '../services/apis/azure-openai-api.mjs'

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, check what files exist in the openai-api path
echo "=== Checking openai-api related files ==="
fd -a 'openai-api' src/services/apis 2>/dev/null || echo "No matches with exact name"
fd 'openai.*api' src/services/apis 2>/dev/null || echo "No matches with pattern"

echo ""
echo "=== Checking all import statements for openai-api in src ==="
rg -n "openai-api" src 2>/dev/null || echo "No matches found"

echo ""
echo "=== Checking bundler/build config files ==="
fd -a 'vite.config.*|webpack.config.*|rollup.config.*|esbuild.*|parcel.*' . 2>/dev/null | head -20

echo ""
echo "=== Checking package.json for module resolution settings ==="
if [ -f "package.json" ]; then
  cat package.json | jq '.type // .exports // .main // empty' 2>/dev/null || cat package.json | grep -A5 '"type"\|"exports"\|"main"' 2>/dev/null || echo "No explicit module resolution found"
fi

echo ""
echo "=== Listing files in src/services/apis directory ==="
ls -la src/services/apis/ 2>/dev/null || echo "Directory not found"

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 2909


Add .mjs extension to openai-api import for consistency.

The file src/services/apis/openai-api.mjs exists, and every other import of it throughout the codebase uses the explicit .mjs extension (e.g., in ollama-api.mjs, openrouter-api.mjs). Line 9 of the same file already correctly imports azure-openai-api.mjs with the extension. Without explicit file extensions, the import will fail at runtime.

Fix
-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
📝 Committable suggestion


Suggested change
-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
 import { generateAnswersWithAzureOpenaiApi } from '../services/apis/azure-openai-api.mjs'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` around lines 8 - 9, Update the import for
generateAnswersWithOpenAICompatibleApi to include the explicit .mjs extension
(match the style used for generateAnswersWithAzureOpenaiApi); change the import
source '../services/apis/openai-api' to '../services/apis/openai-api.mjs' so the
module loader can resolve the file at runtime and remain consistent with other
imports (reference symbol: generateAnswersWithOpenAICompatibleApi).

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 36ed81e to fdc0497 Compare February 25, 2026 15:14
@PeterDaveHello
Copy link
Member Author

@codex review

Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated no new comments.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (4)
src/background/index.mjs (1)

8-8: ⚠️ Potential issue | 🟠 Major

Missing .mjs extension on the openai-api import.

This was flagged in a prior review and still needs the extension for runtime resolution consistency.

-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` at line 8, The import statement for
generateAnswersWithOpenAICompatibleApi is missing the .mjs extension; update the
import in src/background/index.mjs to reference
'../services/apis/openai-api.mjs' (i.e., change the module path used where
generateAnswersWithOpenAICompatibleApi is imported) so the runtime can resolve
the module consistently.
src/popup/sections/GeneralPart.jsx (1)

96-105: Duplication of provider-to-key mapping remains — already flagged in prior review.

LEGACY_API_KEY_FIELD_BY_PROVIDER_ID here and LEGACY_SECRET_KEY_TO_PROVIDER_ID in src/config/index.mjs express the same mapping in opposite directions. A shared constants module would keep them in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 96 - 105,
LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates the inverse mapping
LEGACY_SECRET_KEY_TO_PROVIDER_ID; extract the canonical mapping into a new
shared constants module (e.g., export LEGACY_API_KEY_FIELD_BY_PROVIDER_ID from a
central module), import that constant into GeneralPart.jsx and update
src/config's code to derive LEGACY_SECRET_KEY_TO_PROVIDER_ID from it (use
Object.entries/Object.fromEntries to invert the mapping), and remove the
hard-coded duplicate so both modules consume the single source of truth.
src/services/apis/openai-api.mjs (1)

24-40: Missing timeout on touchOllamaKeepAlive fetch — can block indefinitely.

The await at line 129 will hang if the Ollama endpoint is unresponsive. While the .catch() prevents unhandled rejections, the promise itself never resolves until the TCP stack gives up. Add an AbortController with a reasonable timeout.

🛡️ Proposed fix
 async function touchOllamaKeepAlive(config, model, apiKey) {
+  const controller = new AbortController()
+  const timeout = setTimeout(() => controller.abort(), 5000)
   return fetch(`${normalizeBaseUrl(config.ollamaEndpoint)}/api/generate`, {
     method: 'POST',
+    signal: controller.signal,
     headers: {
       'Content-Type': 'application/json',
       ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
     },
     body: JSON.stringify({
       model,
       prompt: 't',
       options: {
         num_predict: 1,
       },
       keep_alive: config.ollamaKeepAliveTime === '-1' ? -1 : config.ollamaKeepAliveTime,
     }),
-  })
+  }).finally(() => clearTimeout(timeout))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-api.mjs` around lines 24 - 40, The fetch in
touchOllamaKeepAlive can hang indefinitely; modify touchOllamaKeepAlive to
create an AbortController, pass controller.signal to fetch, and set a timer
(e.g., via setTimeout) to call controller.abort() after a reasonable timeout
(e.g., 5-10s), clearing the timer on completion; ensure the fetch call uses the
controller.signal and that any catch handles AbortError gracefully so the
calling await won't hang.
src/services/apis/provider-registry.mjs (1)

15-86: OpenAI template paths include /v1 prefix while other providers don't — potential /v1 duplication.

The OpenAI provider template defines chatCompletionsPath: '/v1/chat/completions' (line 19), while the OpenAI baseUrl is set from config.customOpenAiApiUrl in buildBuiltinProviders (line 165). If a user has stored a base URL already ending in /v1 (e.g., https://api.openai.com/v1), joinUrl will produce https://api.openai.com/v1/v1/chat/completions.

Strip a trailing /v1 from the OpenAI base URL before storing, or unify the path convention across all providers.

🛠️ Proposed fix in buildBuiltinProviders
     if (provider.id === 'openai') {
+      const rawUrl = trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')
       return {
         ...provider,
-        baseUrl: trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com'),
+        baseUrl: rawUrl.replace(/\/v1$/i, ''),
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 15 - 86,
BUILTIN_PROVIDER_TEMPLATE currently sets OpenAI's chatCompletionsPath to
'/v1/chat/completions' which can duplicate '/v1' if buildBuiltinProviders uses a
baseUrl that already ends with '/v1'; update buildBuiltinProviders to normalize
the OpenAI base URL by stripping a trailing '/v1' or '/v1/' before saving (e.g.,
use baseUrl = config.customOpenAiApiUrl.replace(/\/v1\/?$/, '') when building
the provider entry for id 'openai'), or alternatively change the OpenAI template
path to '/chat/completions' so all providers use the same path convention;
adjust the code that creates the OpenAI provider (referencing
BUILTIN_PROVIDER_TEMPLATE and buildBuiltinProviders) to perform the
normalization.
🧹 Nitpick comments (8)
src/config/index.mjs (2)

740-751: Minor: aimlModelKeys and aimlApiModelKeys both map to 'aiml'.

ModelGroups at line 197 uses the key aimlModelKeys (not aimlApiModelKeys). The extra entry is harmless (belt-and-suspenders), but if it's intentional for forward-compatibility, a comment would help. Otherwise it's dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 740 - 751, The mapping
API_MODE_GROUP_TO_PROVIDER_ID currently contains both aimlModelKeys and
aimlApiModelKeys pointing to 'aiml'; remove the redundant key aimlApiModelKeys
(or add a brief clarifying comment if the duplicate is intentional for future
compatibility) so only the canonical key used by ModelGroups (aimlModelKeys)
remains, updating the object API_MODE_GROUP_TO_PROVIDER_ID to eliminate dead
code while preserving behavior.

796-1101: Migration logic is thorough but consider extracting shared apiMode migration into a helper.

The migration for customApiModes (lines 879-970) and selectedApiMode (lines 973-1086) share significant structural similarity — both normalize provider IDs, create providers from legacy custom URLs, and migrate API keys to providerSecrets. This isn't urgent since the migration runs once per config version bump, but extracting a shared migrateApiModeEntry(apiMode, context) helper would reduce the ~100 lines of duplication and lower the risk of one branch drifting from the other in future migrations.
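A minimal sketch of what such a helper could look like. The field names (`providerId`, `apiKey`, `providerSecrets`, `providerIdRenameLookup`) mirror the review suggestion and are assumptions about the real config shape, and only two of the described concerns are shown:

```javascript
// Hypothetical shared helper: normalize the provider ID and move an
// embedded apiKey into the shared providerSecrets map.
function migrateApiModeEntry(apiMode, { providerSecrets, providerIdRenameLookup }) {
  const updated = { ...apiMode }
  let dirty = false
  // Apply legacy provider ID renames.
  if (updated.providerId && providerIdRenameLookup[updated.providerId]) {
    updated.providerId = providerIdRenameLookup[updated.providerId]
    dirty = true
  }
  // Migrate the embedded key, without clobbering an existing secret.
  if (updated.apiKey && updated.providerId && !providerSecrets[updated.providerId]) {
    providerSecrets[updated.providerId] = updated.apiKey
    delete updated.apiKey
    dirty = true
  }
  return { updatedApiMode: updated, dirty }
}
```

Both the `customApiModes` loop and the `selectedApiMode` branch could then call this once per entry and merge the returned flags.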

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 796 - 1101, The customApiModes loop and
selectedApiMode handling in migrateUserConfig share repeated logic; extract that
shared logic into a helper named migrateApiModeEntry(apiMode, {providerSecrets,
customOpenAIProviders, providerIdSet, providerIdRenameLookup,
customProviderCounter, LEGACY_CUSTOM_PROVIDER_SECRET,
API_MODE_GROUP_TO_PROVIDER_ID}) that normalizes providerId, applies renames,
creates new provider entries from customUrl (updating providerIdSet,
customOpenAIProviders and customProviderCounter), moves apiKey into
providerSecrets, clears apiMode.customUrl/apiKey/providerId as appropriate, and
returns {updatedApiMode, providerIdAssignedFromLegacyCustomUrl,
customProviderCounter, dirtyFlags} so migrateUserConfig can call it for each
entry in customApiModes and once for selectedApiMode (merging returned updates
and flags) to remove the duplicated ~100 lines while preserving existing
side-effects on providerSecrets, customOpenAIProviders, providerIdSet, and
dirty/custom*Dirty flags.
src/services/apis/openai-compatible-core.mjs (2)

8-15: extraHeaders can be silently overridden when apiKey is provided.

If a caller passes Authorization inside extraHeaders and provides apiKey, the spread on line 11 sets Authorization first, then line 13 unconditionally overwrites it. This is likely the desired precedence, but it's implicit. A small comment or an explicit delete would prevent surprises for custom provider integrations that may need a non-Bearer auth header.
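The "only set if none exists" variant can be sketched as follows; this is a hedged illustration of the precedence choice, not the module's actual `buildHeaders`, and the opposite precedence (apiKey always wins, with an explicit `delete`) is equally valid as long as it is stated:

```javascript
// Caller-supplied Authorization (e.g. a non-Bearer scheme for a custom
// provider) takes precedence; the Bearer token is only a default.
function buildHeaders(apiKey, extraHeaders = {}) {
  const headers = { 'Content-Type': 'application/json', ...extraHeaders }
  if (apiKey && !headers.Authorization) {
    headers.Authorization = `Bearer ${apiKey}`
  }
  return headers
}
```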

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 8 - 15, The
buildHeaders function currently allows an Authorization in extraHeaders to be
silently overwritten when apiKey is provided; make this precedence explicit by
removing any Authorization from extraHeaders before merging (e.g., delete
extraHeaders.Authorization) when you intend apiKey to win, or alternatively
check for an existing Authorization and only set headers.Authorization = `Bearer
${apiKey}` if none exists — update the buildHeaders function and add a brief
comment clarifying the chosen precedence so custom providers aren’t surprised.

109-117: Partial answers are silently discarded when the stream ends without a finish_reason.

When onEnd fires and finished is false (line 148), only { done: true } is posted — finish() is never called, so pushRecord is skipped and the partial answer is lost. This appears intentional based on the corresponding test (emits fallback done message when stream ends without finish reason), but it means any accumulated answer text from intermediate deltas is dropped from the conversation history.

If this is by design (don't persist incomplete answers), a brief comment would help future readers understand the intent.

     async onEnd() {
-      if (!finished) port.postMessage({ done: true })
+      // Stream ended without a finish_reason — notify the UI but do not
+      // persist the partial answer into the conversation history.
+      if (!finished) port.postMessage({ done: true })

Also applies to: 146-151

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 109 - 117, The
onEnd handler currently posts a fallback done message without calling finish(),
so any accumulated partial answer in the answer variable is never persisted via
pushRecord; update the onEnd logic to call finish() whenever finished is false
(ensuring finish() uses pushRecord(session, question, answer) and sends the
final port.postMessage with done:true and session) so partial deltas are saved,
or if dropping partial answers is intentional add a concise comment near
finish()/onEnd explaining the design decision; reference finish, pushRecord,
answer, onEnd and port.postMessage when making the change.
src/services/apis/provider-registry.mjs (3)

249-269: getProviderSecret cascade is well-layered but the final fallback (line 268) may surface stale keys.

The priority is: (1) apiMode.apiKey for customApiModelKeys, (2) providerSecrets map, (3) legacy config key, (4) apiMode.apiKey as last resort for all groups. The last fallback means an old apiMode.apiKey value embedded in a session could override a rotated key in providerSecrets or legacy config — but only if both are empty. This is a reasonable last-resort behavior; just document that the apiMode-embedded key is the weakest source.
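The cascade reads more clearly when flattened into a single function. This sketch uses simplified parameter shapes (the real function takes `config` and `session`), so treat the signature as an assumption:

```javascript
// Priority: (1) apiMode-embedded key for custom modes, (2) the
// providerSecrets map, (3) the legacy config field, (4) the apiMode
// key as the weakest, possibly stale, last resort.
function getProviderSecret({ providerId, apiModeKey, isCustomMode, providerSecrets, legacyConfigKey }) {
  if (isCustomMode && apiModeKey) return apiModeKey
  if (providerSecrets[providerId]) return providerSecrets[providerId]
  if (legacyConfigKey) return legacyConfigKey
  return apiModeKey || ''
}
```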

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 249 - 269,
getProviderSecret currently falls back to session.apiMode.apiKey as the weakest
source after checking providerSecrets and legacy keys, which can surface stale
embedded keys; update the function (getProviderSecret) to include a concise
inline comment or docstring clarifying the priority order
(apiMode.customApiModelKeys -> providerSecrets map -> LEGACY_KEY_BY_PROVIDER_ID
lookup -> session.apiMode.apiKey as last resort) and explicitly note that the
final fallback (session.apiMode.apiKey) is the weakest/least-authoritative
source and may be stale so should not override rotated keys in providerSecrets
or legacy config.

241-247: getProviderById silently returns null for disabled providers — callers should be aware.

Line 245 returns null if provider.enabled === false. This means resolveOpenAICompatibleRequest will return null, and the caller in openai-api.mjs will throw 'Unknown OpenAI-compatible provider configuration'. The error message doesn't distinguish "provider not found" from "provider disabled," which could confuse users troubleshooting.

💡 Consider returning a richer error or distinct null reasons

One option is to have getProviderById return a status object, or have resolveOpenAICompatibleRequest produce a more descriptive error reason:

 export function resolveOpenAICompatibleRequest(config, session) {
   const providerId = resolveProviderIdForSession(session)
   if (!providerId) return null
   const provider = getProviderById(config, providerId)
-  if (!provider) return null
+  if (!provider) {
+    const allProviders = getAllOpenAIProviders(config)
+    const disabled = allProviders.find((p) => p.id === providerId && p.enabled === false)
+    if (disabled) return { error: `Provider '${providerId}' is disabled` }
+    return null
+  }

The caller would then need to check for .error — but this is optional depending on how much UX polish is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 241 - 247,
getProviderById currently returns null for both "not found" and "disabled" cases
which makes callers like resolveOpenAICompatibleRequest unable to distinguish
why resolution failed; change getProviderById to return a richer result (e.g. {
provider } on success, { error: 'not_found' } or { error: 'disabled' } when
appropriate) or throw a descriptive error, then update
resolveOpenAICompatibleRequest (and callers such as openai-api.mjs) to handle
the new shape/error and produce a clearer message like "provider not found" vs
"provider disabled"; reference the getProviderById and
resolveOpenAICompatibleRequest symbols when making these changes.

101-134: resolveProviderIdFromLegacyModelName — thorough legacy mapping, one subtle gap.

Line 130 uses preset.startsWith('chatglm') which would also match a hypothetical model starting with chatglm that isn't actually ChatGLM-related. Given the bounded set of known model names in this extension, the practical risk is negligible, but it's worth noting the loose match.
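A tightened predicate could look like this; the exact token list is an assumption about which ChatGLM presets actually exist in the extension:

```javascript
// Stricter match: exact name, underscore-delimited variant, or the
// known group key — a hypothetical 'chatglmania' no longer matches.
function isChatGlmPreset(preset) {
  return preset === 'chatglm' || preset.startsWith('chatglm_') || preset === 'chatglmApiModelKeys'
}
```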

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 101 - 134, The loose
match in resolveProviderIdFromLegacyModelName uses preset.startsWith('chatglm')
which can incorrectly match unrelated names; tighten the condition by matching
the known ChatGLM tokens only (for example check preset === 'chatglm' OR
preset.startsWith('chatglm_') OR preset === 'chatglmApiModelKeys') so it only
identifies legitimate ChatGLM presets; update the condition near the other
provider checks (see resolveProviderIdFromLegacyModelName and
getModelNamePresetPart) to use these stricter comparisons.
src/services/apis/openai-api.mjs (1)

6-8: normalizeBaseUrl duplicates trimSlashes in provider-registry.mjs.

Both normalizeBaseUrl here and trimSlashes in provider-registry.mjs strip trailing slashes from a string. Consider importing trimSlashes (or exporting a shared helper) instead of maintaining a parallel implementation that could drift.
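The shared helper is small enough to show in full; in practice it would be exported once (from `provider-registry.mjs` or a tiny url-utils module) and imported by both files:

```javascript
// One canonical trailing-slash stripper for both modules.
function trimSlashes(value) {
  return String(value || '').replace(/\/+$/, '')
}
```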

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-api.mjs` around lines 6 - 8, Replace the duplicate
trailing-slash logic in normalizeBaseUrl by reusing the existing trimSlashes
helper: remove the local normalizeBaseUrl function and import trimSlashes (from
the module that currently exports it) into this module, then call trimSlashes
wherever normalizeBaseUrl was used; alternatively, if trimSlashes is not
exported, export it from provider-registry.mjs and update imports so
openai-api.mjs uses that shared helper instead of duplicating the
implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/apis/provider-registry.mjs`:
- Around line 186-201: The fallback ID in normalizeCustomProvider currently uses
the unstable index-based `custom-provider-${index + 1}` which will change when
array order changes and break lookups (e.g., providerSecrets and
session.apiMode.providerId); change normalizeCustomProvider to produce a stable
fallback ID when provider.id is missing by deriving it deterministically (for
example, compute a short hash from stable fields like name + baseUrl) or by
using a persistent UUID assigned at provider creation time, and update any
creation flow so newly created providers get that stable id; reference
normalizeCustomProvider and ensure code paths that persist/lookup provider IDs
(providerSecrets, session.apiMode.providerId) use the new stable id scheme.

---

Duplicate comments:
In `@src/background/index.mjs`:
- Line 8: The import statement for generateAnswersWithOpenAICompatibleApi is
missing the .mjs extension; update the import in src/background/index.mjs to
reference '../services/apis/openai-api.mjs' (i.e., change the module path used
where generateAnswersWithOpenAICompatibleApi is imported) so the runtime can
resolve the module consistently.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 96-105: LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates the inverse
mapping LEGACY_SECRET_KEY_TO_PROVIDER_ID; extract the canonical mapping into a
new shared constants module (e.g., export LEGACY_API_KEY_FIELD_BY_PROVIDER_ID
from a central module), import that constant into GeneralPart.jsx and update
src/config's code to derive LEGACY_SECRET_KEY_TO_PROVIDER_ID from it (use
Object.entries/Object.fromEntries to invert the mapping), and remove the
hard-coded duplicate so both modules consume the single source of truth.

In `@src/services/apis/openai-api.mjs`:
- Around line 24-40: The fetch in touchOllamaKeepAlive can hang indefinitely;
modify touchOllamaKeepAlive to create an AbortController, pass controller.signal
to fetch, and set a timer (e.g., via setTimeout) to call controller.abort()
after a reasonable timeout (e.g., 5-10s), clearing the timer on completion;
ensure the fetch call uses the controller.signal and that any catch handles
AbortError gracefully so the calling await won't hang.

In `@src/services/apis/provider-registry.mjs`:
- Around line 15-86: BUILTIN_PROVIDER_TEMPLATE currently sets OpenAI's
chatCompletionsPath to '/v1/chat/completions' which can duplicate '/v1' if
buildBuiltinProviders uses a baseUrl that already ends with '/v1'; update
buildBuiltinProviders to normalize the OpenAI base URL by stripping a trailing
'/v1' or '/v1/' before saving (e.g., use baseUrl =
config.customOpenAiApiUrl.replace(/\/v1\/?$/, '') when building the provider
entry for id 'openai'), or alternatively change the OpenAI template path to
'/chat/completions' so all providers use the same path convention; adjust the
code that creates the OpenAI provider (referencing BUILTIN_PROVIDER_TEMPLATE and
buildBuiltinProviders) to perform the normalization.

---

Nitpick comments:
In `@src/config/index.mjs`:
- Around line 740-751: The mapping API_MODE_GROUP_TO_PROVIDER_ID currently
contains both aimlModelKeys and aimlApiModelKeys pointing to 'aiml'; remove the
redundant key aimlApiModelKeys (or add a brief clarifying comment if the
duplicate is intentional for future compatibility) so only the canonical key
used by ModelGroups (aimlModelKeys) remains, updating the object
API_MODE_GROUP_TO_PROVIDER_ID to eliminate dead code while preserving behavior.
- Around line 796-1101: The customApiModes loop and selectedApiMode handling in
migrateUserConfig share repeated logic; extract that shared logic into a helper
named migrateApiModeEntry(apiMode, {providerSecrets, customOpenAIProviders,
providerIdSet, providerIdRenameLookup, customProviderCounter,
LEGACY_CUSTOM_PROVIDER_SECRET, API_MODE_GROUP_TO_PROVIDER_ID}) that normalizes
providerId, applies renames, creates new provider entries from customUrl
(updating providerIdSet, customOpenAIProviders and customProviderCounter), moves
apiKey into providerSecrets, clears apiMode.customUrl/apiKey/providerId as
appropriate, and returns {updatedApiMode, providerIdAssignedFromLegacyCustomUrl,
customProviderCounter, dirtyFlags} so migrateUserConfig can call it for each
entry in customApiModes and once for selectedApiMode (merging returned updates
and flags) to remove the duplicated ~100 lines while preserving existing
side-effects on providerSecrets, customOpenAIProviders, providerIdSet, and
dirty/custom*Dirty flags.

In `@src/services/apis/openai-api.mjs`:
- Around line 6-8: Replace the duplicate trailing-slash logic in
normalizeBaseUrl by reusing the existing trimSlashes helper: remove the local
normalizeBaseUrl function and import trimSlashes (from the module that currently
exports it) into this module, then call trimSlashes wherever normalizeBaseUrl
was used; alternatively, if trimSlashes is not exported, export it from
provider-registry.mjs and update imports so openai-api.mjs uses that shared
helper instead of duplicating the implementation.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 8-15: The buildHeaders function currently allows an Authorization
in extraHeaders to be silently overwritten when apiKey is provided; make this
precedence explicit by removing any Authorization from extraHeaders before
merging (e.g., delete extraHeaders.Authorization) when you intend apiKey to win,
or alternatively check for an existing Authorization and only set
headers.Authorization = `Bearer ${apiKey}` if none exists — update the
buildHeaders function and add a brief comment clarifying the chosen precedence
so custom providers aren’t surprised.
- Around line 109-117: The onEnd handler currently posts a fallback done message
without calling finish(), so any accumulated partial answer in the answer
variable is never persisted via pushRecord; update the onEnd logic to call
finish() whenever finished is false (ensuring finish() uses pushRecord(session,
question, answer) and sends the final port.postMessage with done:true and
session) so partial deltas are saved, or if dropping partial answers is
intentional add a concise comment near finish()/onEnd explaining the design
decision; reference finish, pushRecord, answer, onEnd and port.postMessage when
making the change.

In `@src/services/apis/provider-registry.mjs`:
- Around line 249-269: getProviderSecret currently falls back to
session.apiMode.apiKey as the weakest source after checking providerSecrets and
legacy keys, which can surface stale embedded keys; update the function
(getProviderSecret) to include a concise inline comment or docstring clarifying
the priority order (apiMode.customApiModelKeys -> providerSecrets map ->
LEGACY_KEY_BY_PROVIDER_ID lookup -> session.apiMode.apiKey as last resort) and
explicitly note that the final fallback (session.apiMode.apiKey) is the
weakest/least-authoritative source and may be stale so should not override
rotated keys in providerSecrets or legacy config.
- Around line 241-247: getProviderById currently returns null for both "not
found" and "disabled" cases which makes callers like
resolveOpenAICompatibleRequest unable to distinguish why resolution failed;
change getProviderById to return a richer result (e.g. { provider } on success,
{ error: 'not_found' } or { error: 'disabled' } when appropriate) or throw a
descriptive error, then update resolveOpenAICompatibleRequest (and callers such
as openai-api.mjs) to handle the new shape/error and produce a clearer message
like "provider not found" vs "provider disabled"; reference the getProviderById
and resolveOpenAICompatibleRequest symbols when making these changes.
- Around line 101-134: The loose match in resolveProviderIdFromLegacyModelName
uses preset.startsWith('chatglm') which can incorrectly match unrelated names;
tighten the condition by matching the known ChatGLM tokens only (for example
check preset === 'chatglm' OR preset.startsWith('chatglm_') OR preset ===
'chatglmApiModelKeys') so it only identifies legitimate ChatGLM presets; update
the condition near the other provider checks (see
resolveProviderIdFromLegacyModelName and getModelNamePresetPart) to use these
stricter comparisons.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 36ed81e and fdc0497.

📒 Files selected for processing (12)
  • src/background/index.mjs
  • src/config/index.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/services/apis/custom-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs

  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/services/init-session.mjs

Comment on lines +186 to +201
function normalizeCustomProvider(provider, index) {
if (!provider || typeof provider !== 'object') return null
const id = toStringOrEmpty(provider.id).trim() || `custom-provider-${index + 1}`
return {
id,
name: toStringOrEmpty(provider.name).trim() || `Custom Provider ${index + 1}`,
baseUrl: trimSlashes(provider.baseUrl),
chatCompletionsPath: ensureLeadingSlash(provider.chatCompletionsPath, DEFAULT_CHAT_PATH),
completionsPath: ensureLeadingSlash(provider.completionsPath, DEFAULT_COMPLETION_PATH),
chatCompletionsUrl: toStringOrEmpty(provider.chatCompletionsUrl).trim(),
completionsUrl: toStringOrEmpty(provider.completionsUrl).trim(),
builtin: false,
enabled: provider.enabled !== false,
allowLegacyResponseField: Boolean(provider.allowLegacyResponseField),
}
}

⚠️ Potential issue | 🟠 Major

Index-based fallback ID for custom providers is fragile.

Line 188 generates custom-provider-${index + 1} when provider.id is empty. Since the index depends on array position, reordering or removing providers will shift IDs, breaking providerSecrets lookups and persisted session.apiMode.providerId references. Consider generating a stable ID (e.g., hash of name+baseUrl, or a UUID at creation time) when the ID is missing.

#!/bin/bash
# Check how custom providers are created to see if IDs are assigned at creation
rg -n -C5 'customOpenAIProviders' --type=js -g '!node_modules/**'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 186 - 201, The fallback
ID in normalizeCustomProvider currently uses the unstable index-based
`custom-provider-${index + 1}` which will change when array order changes and
break lookups (e.g., providerSecrets and session.apiMode.providerId); change
normalizeCustomProvider to produce a stable fallback ID when provider.id is
missing by deriving it deterministically (for example, compute a short hash from
stable fields like name + baseUrl) or by using a persistent UUID assigned at
provider creation time, and update any creation flow so newly created providers
get that stable id; reference normalizeCustomProvider and ensure code paths that
persist/lookup provider IDs (providerSecrets, session.apiMode.providerId) use
the new stable id scheme.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Chef's kiss.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from fdc0497 to 407d734 on February 25, 2026 at 17:42
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (4)
src/services/apis/provider-registry.mjs (2)

160-167: ⚠️ Potential issue | 🟠 Major

/v1 duplication risk when customOpenAiApiUrl ends with /v1.

If a user sets customOpenAiApiUrl to https://api.openai.com/v1, the resulting chat URL becomes https://api.openai.com/v1/v1/chat/completions because chatCompletionsPath is /v1/chat/completions. Strip a trailing /v1 from the base URL before assigning.

Fix
     if (provider.id === 'openai') {
+      const rawUrl = trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')
       return {
         ...provider,
-        baseUrl: trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com'),
+        baseUrl: rawUrl.replace(/\/v1$/i, ''),
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 160 - 167, In
buildBuiltinProviders, when handling provider.id === 'openai' sanitize
config.customOpenAiApiUrl by stripping a trailing "/v1" or "/v1/" before
computing baseUrl to avoid duplicating the /v1 from chatCompletionsPath; update
the assignment that uses trimSlashes(config.customOpenAiApiUrl ||
'https://api.openai.com') to first normalize the URL (e.g. remove any trailing
/v1 with a replace like url.replace(/\/v1\/?$/i, '')) and then pass that
normalized value into trimSlashes so BUILTIN_PROVIDER_TEMPLATE entries (openai)
produce correct endpoints.

186-201: ⚠️ Potential issue | 🟠 Major

Index-based fallback ID (custom-provider-${index + 1}) is unstable across array mutations.

If providers are reordered or removed, the generated ID shifts, breaking providerSecrets lookups and persisted providerId references. Already flagged in a prior review — consider a stable derivation (e.g., hash of name+baseUrl, or UUID at creation time).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 186 - 201, The current
fallback ID in normalizeCustomProvider uses the unstable index-based
`custom-provider-${index + 1}` which shifts when the array is reordered; change
the fallback to a stable derivation instead: prefer provider.id when present,
otherwise compute a deterministic ID (e.g., hash/encode of provider.name +
provider.baseUrl or name+baseUrl+completionsPath) so IDs remain stable across
reordering and removals and continue to match providerSecrets and persisted
providerId references; update normalizeCustomProvider to generate that
deterministic ID and ensure any dependent lookups (providerSecrets, persisted
providerId) continue to use this stable identifier.
src/popup/sections/GeneralPart.jsx (1)

96-105: 🛠️ Refactor suggestion | 🟠 Major

LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates LEGACY_KEY_BY_PROVIDER_ID from provider-registry.

This map is identical to LEGACY_KEY_BY_PROVIDER_ID defined at src/services/apis/provider-registry.mjs lines 4-13. Export the canonical mapping from the registry and import it here to keep a single source of truth.
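Deriving the inverse with `Object.fromEntries` keeps both maps in lockstep. The field names below are illustrative placeholders, not the extension's real legacy key names:

```javascript
// Canonical mapping, exported once (field names are illustrative).
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  openai: 'apiKey',
  chatglm: 'chatglmApiKey',
}

// Inverse mapping derived mechanically — it can never drift.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(([providerId, field]) => [field, providerId]),
)
```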

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 96 - 105, Replace the
duplicated LEGACY_API_KEY_FIELD_BY_PROVIDER_ID map with an import of the
canonical mapping LEGACY_KEY_BY_PROVIDER_ID from the provider registry module
and use that exported constant wherever LEGACY_API_KEY_FIELD_BY_PROVIDER_ID is
referenced; update the top of the file to import { LEGACY_KEY_BY_PROVIDER_ID }
from the provider-registry module and then remove or rename the local
LEGACY_API_KEY_FIELD_BY_PROVIDER_ID to reference LEGACY_KEY_BY_PROVIDER_ID so
there is a single source of truth.
src/background/index.mjs (1)

8-8: ⚠️ Potential issue | 🔴 Critical

Add .mjs extension to openai-api import.

This import lacks the .mjs extension, unlike all other local imports in this file (e.g., line 9 uses azure-openai-api.mjs). Without the extension, module resolution will fail at runtime.

Fix
-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` at line 8, The import for
generateAnswersWithOpenAICompatibleApi is missing the .mjs extension and will
fail module resolution at runtime; update the import statement that references
generateAnswersWithOpenAICompatibleApi to use the same pattern as other local
imports (append .mjs), e.g., import from '../services/apis/openai-api.mjs' so
the module loader can find the file.
🧹 Nitpick comments (5)
src/popup/sections/ApiModes.jsx (2)

55-69: createProviderId loop has no upper bound — theoretically unbounded.

The while (usedIds.has(nextId)) loop increments suffix indefinitely. While practically harmless (the set is small), adding a guard would prevent any risk.
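A bounded version of the loop might look like this; the 1000-attempt cap is an arbitrary safety margin, and the suffix format is an assumption about the real implementation:

```javascript
// Allocate a unique provider id, refusing to loop forever.
function createProviderId(baseId, usedIds, maxAttempts = 1000) {
  if (!usedIds.has(baseId)) return baseId
  for (let suffix = 2; suffix <= maxAttempts; suffix++) {
    const nextId = `${baseId}-${suffix}`
    if (!usedIds.has(nextId)) return nextId
  }
  throw new Error(`Could not allocate a unique provider id for "${baseId}"`)
}
```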

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 55 - 69, The createProviderId
function currently loops with while (usedIds.has(nextId)) and can run unbounded;
add a guard by defining a maxAttempts constant (e.g., 1000) and count attempts
inside the loop, breaking/throwing a clear error if attempts exceed maxAttempts;
update references within createProviderId (baseId, nextId, suffix, usedIds) to
use this attempt counter and surface a descriptive error (or fallback) so the
function cannot loop forever.

171-195: URL normalization heuristics may confuse users who enter partial paths.

If a user enters https://api.example.com/v1/chat/completions, the baseUrl is stored as empty and the full URL goes into chatCompletionsUrl. If they enter https://api.example.com, neither heuristic fires and the default paths (/v1/chat/completions, /v1/completions) from providerDraft are used. This is correct but not immediately obvious — consider adding a tooltip or brief explanation in the UI for the base URL field.
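The two outcomes described above can be made explicit in a small sketch. This is a simplified stand-in for the component's actual heuristic (endpoint detection here is just a suffix check):

```javascript
// Split user input into either a stored full endpoint or a base URL.
function splitProviderUrl(input) {
  const url = String(input || '').replace(/\/+$/, '')
  if (/\/chat\/completions$/.test(url)) {
    // Full endpoint entered: store it verbatim, leave baseUrl empty.
    return { baseUrl: '', chatCompletionsUrl: url }
  }
  // Plain base URL: default template paths get appended later.
  return { baseUrl: url, chatCompletionsUrl: '' }
}
```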

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 171 - 195, The URL
normalization heuristics in ApiModes.jsx (logic using providerBaseUrl,
hasChatCompletionsEndpoint, hasV1BasePath,
providerChatCompletionsUrl/providerCompletionsUrl,
providerChatCompletionsPath/providerCompletionsPath and the resulting
createdProvider) can be confusing to users who enter full endpoint URLs vs just
a base URL; add a short tooltip/help text next to the base URL input that
explains the behavior (e.g., if you enter a full endpoint like
https://api.example.com/v1/chat/completions the base URL will be stored empty
and the full endpoint will be saved into chatCompletionsUrl, whereas entering
https://api.example.com will use default paths from providerDraft), and wire the
tooltip to the existing UI element in ApiModes.jsx so users see the explanation
during input.
src/services/apis/openai-api.mjs (1)

6-8: normalizeBaseUrl duplicates trimSlashes from provider-registry.mjs.

Both functions strip trailing slashes from a URL string. Consider importing trimSlashes from the registry to reduce duplication (or extract to a shared utility).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-api.mjs` around lines 6 - 8, normalizeBaseUrl
duplicates existing logic in provider-registry.mjs (trimSlashes); remove the
duplicate by either importing trimSlashes from provider-registry.mjs into
src/services/apis/openai-api.mjs and replacing normalizeBaseUrl usage with
trimSlashes, or extract the shared trimming logic into a new utility module and
update both normalizeBaseUrl and trimSlashes call sites to use that utility;
ensure you update any references to normalizeBaseUrl accordingly and run tests
to confirm no regressions.
src/services/apis/openai-compatible-core.mjs (1)

62-65: Redundant getUserConfig() call inside the core — config fetched again from storage.

The unified entry point (generateAnswersWithOpenAICompatibleApi) already loads config and could pass it through, avoiding a second storage read and potential inconsistency. Consider accepting config as an optional parameter and falling back to getUserConfig() only when not provided.
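The optional-parameter pattern can be sketched as follows; `getUserConfig` here is a stub standing in for the real storage-backed loader, and the core function is reduced to just the config-resolution step:

```javascript
// Stub for the real storage-backed loader.
async function getUserConfig() {
  return { model: 'stub' }
}

// Accept a preloaded config; fall back to storage only when absent.
async function generateWithCore(session, { config } = {}) {
  const resolvedConfig = config ?? (await getUserConfig())
  return resolvedConfig
}
```

Callers that already hold a loaded config (like the unified entry point) pass it through; everyone else is unaffected.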

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 62 - 65, The
function generateAnswersWithOpenAICompatibleApi currently re-calls
getUserConfig() inside the core; change its signature to accept an optional
config parameter and use that when provided, falling back to await
getUserConfig() only if config is undefined, update the internal usage where you
currently do "const config = await getUserConfig()" to use the passed-in config
variable, and adjust any callers that already have a loaded config to pass it
through (leave callers that don't untouched); reference the function name
generateAnswersWithOpenAICompatibleApi and the helper getUserConfig to locate
and update the code.
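A minimal sketch of that signature change, with getUserConfig stubbed so the example is self-contained; the real function takes more arguments than shown here:

```javascript
let storageReads = 0

// Stub standing in for the extension's real storage read, so this sketch runs
// on its own; per the review, the real getUserConfig may also trigger a
// migration write.
async function getUserConfig() {
  storageReads += 1
  return { temperature: 1, maxResponseTokenLength: 1000 }
}

// Proposed shape: config becomes an optional parameter with a storage fallback.
async function generateAnswersWithOpenAICompatible(session, config) {
  const effectiveConfig = config ?? (await getUserConfig())
  return effectiveConfig
}

// A caller that already loaded config passes it through, avoiding the re-read:
const preloaded = { temperature: 0.5, maxResponseTokenLength: 2000 }
generateAnswersWithOpenAICompatible({}, preloaded).then((c) => {
  console.log(c.temperature, storageReads) // 0.5 0
})
```

Callers that already hold a config snapshot pass it through; callers without one keep their current behavior.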
src/popup/sections/GeneralPart.jsx (1)

107-181: buildProviderSecretUpdate is dense — consider extracting the customApiModes sync into a helper.

This function handles three concerns in one: updating providerSecrets + legacy key, syncing customApiModes, and syncing config.apiMode. Splitting the apiModes synchronization (lines 128-155) into a named helper would improve readability and testability without changing behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 107 - 181, The
buildProviderSecretUpdate function is doing three things; extract the
customApiModes synchronization (the block that maps config.customApiModes and
sets customApiModesDirty) into a named helper like
syncCustomApiModes(normalizedProviderId, normalizedNextApiKey,
inheritedSecretBaselines, config.customApiModes, isApiModeSelected) that returns
either the updated array and a dirty flag or null if no change; replace the
inline mapping in buildProviderSecretUpdate with a call to that helper and only
set payload.customApiModes when the helper indicates dirty, preserving all
existing checks (groupName === 'customApiModelKeys', providerId match,
inheritedSecretBaselines logic, and use of isApiModeSelected) and ensuring the
returned array uses the exact same element transformation (apiKey cleared or set
to normalizedNextApiKey).
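Sketched under the assumptions in the comment above, the extraction could look like the following; the real helper would also thread inheritedSecretBaselines and isApiModeSelected, which are omitted here for brevity:

```javascript
// Sketch of the proposed syncCustomApiModes helper: it returns the updated
// array when something changed, or null for a no-op so the caller only sets
// payload.customApiModes when dirty. Matching logic is simplified.
function syncCustomApiModes(providerId, nextApiKey, customApiModes) {
  let dirty = false
  const updated = customApiModes.map((mode) => {
    if (mode.groupName !== 'customApiModelKeys' || mode.providerId !== providerId) return mode
    if (mode.apiKey === nextApiKey) return mode
    dirty = true
    return { ...mode, apiKey: nextApiKey }
  })
  return dirty ? { customApiModes: updated } : null
}
```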
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/apis/openai-api.mjs`:
- Around line 59-70: Remove the unused wrapper functions
generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApi from the
module and keep only the actively used generateAnswersWithChatgptApiCompat
implementation; update any tests or exports that reference the removed names to
instead import or call generateAnswersWithChatgptApiCompat (or the appropriate
API-compatible function) so external callers (openrouter, ollama, moonshot,
deepseek, chatglm, aiml) still use the compat function; ensure module exports
are adjusted so generateAnswersWithChatgptApiCompat remains exported and there
are no leftover references to the deleted functions.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 146-151: The onEnd handler currently only posts { done: true }
when !finished, which drops the accumulated answer and never calls pushRecord or
returns the session; update the onEnd implementation in
openai-compatible-core.mjs so that when !finished it calls finish() (or the same
code path that invokes pushRecord and completes the session), then posts the
done message and removes listeners (i.e., ensure onEnd calls finish() before
port.postMessage and before removing messageListener/disconnectListener) so
partial answers are persisted and the UI receives the session result.

In `@src/services/apis/provider-registry.mjs`:
- Around line 271-307: In resolveUrlFromProvider: the legacy-custom-default
branch uses config.customOpenAiApiUrl for completion but
config.customModelApiUrl for chat, causing inconsistent fallbacks; update the
completion fallback to use the same config.customModelApiUrl (via
toStringOrEmpty(config.customModelApiUrl).trim() or trimSlashes) and construct
the completions URL consistently (e.g. base + '/v1/completions' or default to
'http://localhost:8000/v1/completions'), using the same helpers
(trimSlashes/toStringOrEmpty) as the chat branch so both endpoint types honor
customModelApiUrl.

---

Duplicate comments:
In `@src/background/index.mjs`:
- Line 8: The import for generateAnswersWithOpenAICompatibleApi is missing the
.mjs extension and will fail module resolution at runtime; update the import
statement that references generateAnswersWithOpenAICompatibleApi to use the same
pattern as other local imports (append .mjs), e.g., import from
'../services/apis/openai-api.mjs' so the module loader can find the file.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 96-105: Replace the duplicated LEGACY_API_KEY_FIELD_BY_PROVIDER_ID
map with an import of the canonical mapping LEGACY_KEY_BY_PROVIDER_ID from the
provider registry module and use that exported constant wherever
LEGACY_API_KEY_FIELD_BY_PROVIDER_ID is referenced; update the top of the file to
import { LEGACY_KEY_BY_PROVIDER_ID } from the provider-registry module and then
remove or rename the local LEGACY_API_KEY_FIELD_BY_PROVIDER_ID to reference
LEGACY_KEY_BY_PROVIDER_ID so there is a single source of truth.

In `@src/services/apis/provider-registry.mjs`:
- Around line 160-167: In buildBuiltinProviders, when handling provider.id ===
'openai' sanitize config.customOpenAiApiUrl by stripping a trailing "/v1" or
"/v1/" before computing baseUrl to avoid duplicating the /v1 from
chatCompletionsPath; update the assignment that uses
trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com') to first
normalize the URL (e.g. remove any trailing /v1 with a replace like
url.replace(/\/v1\/?$/i, '')) and then pass that normalized value into
trimSlashes so BUILTIN_PROVIDER_TEMPLATE entries (openai) produce correct
endpoints.
- Around line 186-201: The current fallback ID in normalizeCustomProvider uses
the unstable index-based `custom-provider-${index + 1}` which shifts when the
array is reordered; change the fallback to a stable derivation instead: prefer
provider.id when present, otherwise compute a deterministic ID (e.g.,
hash/encode of provider.name + provider.baseUrl or name+baseUrl+completionsPath)
so IDs remain stable across reordering and removals and continue to match
providerSecrets and persisted providerId references; update
normalizeCustomProvider to generate that deterministic ID and ensure any
dependent lookups (providerSecrets, persisted providerId) continue to use this
stable identifier.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fdc0497 and 407d734.

📒 Files selected for processing (12)
  • src/background/index.mjs
  • src/config/index.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/services/apis/custom-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
🚧 Files skipped from review as they are similar to previous changes (5)
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • src/services/wrappers.mjs
  • src/services/init-session.mjs
  • src/services/apis/custom-api.mjs
  • src/config/index.mjs

Comment on lines +146 to +151
async onStart() {},
async onEnd() {
if (!finished) port.postMessage({ done: true })
port.onMessage.removeListener(messageListener)
port.onDisconnect.removeListener(disconnectListener)
},

⚠️ Potential issue | 🟠 Major

Partial answer lost when stream ends without [DONE] or finish_reason.

When onEnd fires and !finished (e.g., the server closes the connection mid-stream), only { done: true } is posted — the accumulated answer is never persisted via pushRecord, and the session isn't returned to the UI. Consider calling finish() here instead, so the partial answer is at least saved.

Suggested fix
     async onEnd() {
-      if (!finished) port.postMessage({ done: true })
+      finish()
       port.onMessage.removeListener(messageListener)
       port.onDisconnect.removeListener(disconnectListener)
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 146 - 151, The
onEnd handler currently only posts { done: true } when !finished, which drops
the accumulated answer and never calls pushRecord or returns the session; update
the onEnd implementation in openai-compatible-core.mjs so that when !finished it
calls finish() (or the same code path that invokes pushRecord and completes the
session), then posts the done message and removes listeners (i.e., ensure onEnd
calls finish() before port.postMessage and before removing
messageListener/disconnectListener) so partial answers are persisted and the UI
receives the session result.

Comment on lines 271 to 307
function resolveUrlFromProvider(provider, endpointType, config, session) {
if (!provider) return ''

const apiModeCustomUrl =
endpointType === 'chat' &&
session?.apiMode &&
typeof session.apiMode === 'object' &&
session.apiMode.groupName === 'customApiModelKeys' &&
!toStringOrEmpty(session.apiMode.providerId).trim()
? toStringOrEmpty(session.apiMode.customUrl).trim()
: ''
if (apiModeCustomUrl) return apiModeCustomUrl

if (endpointType === 'completion') {
if (provider.completionsUrl) return provider.completionsUrl
if (provider.baseUrl && provider.completionsPath) {
return joinUrl(provider.baseUrl, provider.completionsPath)
}
} else {
if (provider.chatCompletionsUrl) return provider.chatCompletionsUrl
if (provider.baseUrl && provider.chatCompletionsPath) {
return joinUrl(provider.baseUrl, provider.chatCompletionsPath)
}
}

if (provider.id === 'legacy-custom-default') {
if (endpointType === 'completion') {
return `${trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')}/v1/completions`
}
return (
toStringOrEmpty(config.customModelApiUrl).trim() ||
'http://localhost:8000/v1/chat/completions'
)
}

return ''
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if legacy-custom-default is ever used with completion endpoint type
rg -n 'gptApiModelKeys.*legacy-custom\|legacy-custom.*completion' src/
echo "---"
# Check how resolveEndpointTypeForSession determines endpoint type
ast-grep --pattern 'function resolveEndpointTypeForSession($_) { $$$ }'

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 693


🏁 Script executed:

#!/bin/bash
# Find where legacy-custom-default provider is set or determined
rg -n 'legacy-custom-default' src/ -B 2 -A 2

echo "---"
# Find the isLegacyCompletionModelName function
ast-grep --pattern 'function isLegacyCompletionModelName($_) { $$$ }'

echo "---"
# Find where resolveUrlFromProvider is called to understand context
rg -n 'resolveUrlFromProvider' src/ -B 1 -A 1

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 6990


🏁 Script executed:

#!/bin/bash
# Check the config migration logic and how apiMode.providerId and groupName interact
rg -n 'apiMode.*providerId\|apiMode.*groupName' src/config/index.mjs -B 2 -A 2 | head -100

echo "---"
# Check if there's validation logic preventing incompatible provider/group combinations
ast-grep --pattern 'if ($apiMode || $session) { $$$ }'

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 72


🏁 Script executed:

#!/bin/bash
# Read resolveProviderIdFromSession function in full
sed -n '219,229p' src/services/apis/provider-registry.mjs

echo "---"
# Check what happens in resolveUrlFromProvider when called - look at full context
sed -n '308,320p' src/services/apis/provider-registry.mjs

echo "---"
# Search for how apiMode is created/validated
rg -n 'apiMode\s*=' src/ --type mjs -B 1 -A 3 | head -80

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 1064


🏁 Script executed:

#!/bin/bash
# Check OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping in provider-registry.mjs
rg -n 'OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID' src/services/apis/provider-registry.mjs -A 10

echo "---"
# Check if there's any documentation or comments about customOpenAiApiUrl vs customModelApiUrl
rg -n 'customOpenAiApiUrl|customModelApiUrl' src/ -B 2 -A 2 | head -100

Repository: ChatGPTBox-dev/chatGPTBox

Length of output: 5414


resolveUrlFromProvider fallback for legacy-custom-default uses inconsistent config fields for different endpoint types.

Lines 296-304: The completion endpoint fallback uses config.customOpenAiApiUrl (defaulting to OpenAI), while the chat endpoint uses config.customModelApiUrl (defaulting to localhost). For a user with a non-OpenAI custom endpoint configured in customModelApiUrl, the completion URL would incorrectly fall back to the OpenAI endpoint instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 271 - 307, In
resolveUrlFromProvider: the legacy-custom-default branch uses
config.customOpenAiApiUrl for completion but config.customModelApiUrl for chat,
causing inconsistent fallbacks; update the completion fallback to use the same
config.customModelApiUrl (via toStringOrEmpty(config.customModelApiUrl).trim()
or trimSlashes) and construct the completions URL consistently (e.g. base +
'/v1/completions' or default to 'http://localhost:8000/v1/completions'), using
the same helpers (trimSlashes/toStringOrEmpty) as the chat branch so both
endpoint types honor customModelApiUrl.
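A consistent fallback could derive both endpoint URLs from customModelApiUrl, as in this sketch. The defaults follow the comment, while the path surgery is an illustrative assumption, not the repository's actual implementation:

```javascript
// Sketch: both legacy-custom-default fallbacks derive from customModelApiUrl.
function legacyCustomDefaultUrl(endpointType, config) {
  const configured = String(config.customModelApiUrl ?? '').trim()
  if (endpointType === 'completion') {
    // Reuse the same field the chat branch honors, swapping the endpoint path.
    const base = configured.replace(/\/v1\/chat\/completions\/?$/, '')
    return `${base || 'http://localhost:8000'}/v1/completions`
  }
  return configured || 'http://localhost:8000/v1/chat/completions'
}

console.log(
  legacyCustomDefaultUrl('completion', {
    customModelApiUrl: 'https://my.host/v1/chat/completions',
  }),
) // https://my.host/v1/completions
```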

@PeterDaveHello
Member Author

@codex review

Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.



Comment on lines +63 to +66
const { controller, messageListener, disconnectListener } = setAbortController(port)
const config = await getUserConfig()

let requestBody
Copilot AI Feb 25, 2026

generateAnswersWithOpenAICompatibleApi already receives a fully-loaded config, but generateAnswersWithOpenAICompatible calls getUserConfig() again to read token/temperature/context settings. This adds an extra storage read (and potential migration write) per request and can lead to inconsistencies if callers pass a config snapshot. Consider passing the needed config fields into generateAnswersWithOpenAICompatible (or accepting config as a parameter) instead of reloading from storage.

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 407d734e9e


const legacyValue = legacyKey ? toStringOrEmpty(config?.[legacyKey]).trim() : ''
if (legacyValue) return legacyValue

return apiModeApiKey


P2: Stop using session apiKey as global fallback

For non-custom providers this fallback can make settings updates ineffective: if a tab still has a stale session.apiMode.apiKey (from legacy/session state), getProviderSecret will keep returning it whenever providerSecrets and legacy config fields are empty, so clearing or rotating the provider key in settings does not take effect for that session. This regression is introduced by the unconditional return apiModeApiKey; for non-customApiModelKeys modes, key resolution should come from provider/config storage only.
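The suggested resolution order can be sketched as follows. Names are taken from the thread; the real getProviderSecret has a different signature and more inputs:

```javascript
// Sketch: the per-session apiKey is honored only for custom API modes, so
// built-in providers always resolve from providerSecrets / legacy config
// storage and a stale session key cannot shadow a cleared or rotated secret.
function getProviderSecret({ groupName, providerSecret, legacyValue, sessionApiKey }) {
  if (providerSecret) return providerSecret
  if (legacyValue) return legacyValue
  return groupName === 'customApiModelKeys' ? sessionApiKey || '' : ''
}
```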


@PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 407d734 to 3b4527c on February 28, 2026 at 11:02
@PeterDaveHello
Member Author

@codex review

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
src/services/apis/provider-registry.mjs (1)

143-155: ⚠️ Potential issue | 🟠 Major

Normalize provider base URLs before appending versioned paths.

If customOpenAiApiUrl or ollamaEndpoint already includes /v1, request URLs can become /v1/v1/... and fail.

🛠️ Proposed fix
 function trimSlashes(value) {
   return toStringOrEmpty(value).trim().replace(/\/+$/, '')
 }
+
+function stripTrailingV1(value) {
+  return trimSlashes(value).replace(/\/v1$/i, '')
+}
 
 function ensureLeadingSlash(value, fallback) {
   const raw = toStringOrEmpty(value).trim()
   if (!raw) return fallback
   return raw.startsWith('/') ? raw : `/${raw}`
@@
 function buildBuiltinProviders(config) {
   return BUILTIN_PROVIDER_TEMPLATE.map((provider) => {
     if (provider.id === 'openai') {
       return {
         ...provider,
-        baseUrl: trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com'),
+        baseUrl: stripTrailingV1(config.customOpenAiApiUrl || 'https://api.openai.com'),
       }
     }
     if (provider.id === 'ollama') {
       return {
         ...provider,
-        baseUrl: `${trimSlashes(config.ollamaEndpoint || 'http://127.0.0.1:11434')}/v1`,
+        baseUrl: `${stripTrailingV1(config.ollamaEndpoint || 'http://127.0.0.1:11434')}/v1`,
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 143 - 155, Normalize
and sanitize provider base URLs in buildBuiltinProviders so appended versioned
paths aren't duplicated: when building the OpenAI and Ollama entries
(provider.id === 'openai' and provider.id === 'ollama'), trim slashes and also
strip any trailing "/v1" (or leading "/v1") from config.customOpenAiApiUrl and
config.ollamaEndpoint before composing baseUrl; continue to append the intended
"/v1" for Ollama only after this normalization. Use the existing trimSlashes
helper as a starting point and perform an extra step to remove a trailing "/v1"
segment so URLs like ".../v1" don't become ".../v1/v1".
🧹 Nitpick comments (3)
src/config/index.mjs (1)

861-1057: Consider extracting shared migration steps to reduce drift risk.

The customApiModes loop and migrated.apiMode branch duplicate provider-id assignment and secret-migration logic; a shared helper would make future changes safer.

♻️ Refactor sketch
+function migrateApiModeProviderState({
+  apiMode,
+  providerSecrets,
+  customOpenAIProviders,
+  providerIdSet,
+  providerIdRenameLookup,
+  customProviderCounter,
+  legacyCustomProviderSecret,
+  isSelected = false,
+}) {
+  // unify providerId normalization, legacy customUrl promotion, and apiKey -> providerSecrets migration
+  // return { apiMode, providerSecrets, customOpenAIProviders, customProviderCounter, dirtyFlags }
+}
-
-// duplicated logic in customApiModes loop
-// ...
-
-// duplicated logic in selectedApiMode block
-// ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 861 - 1057, The customApiModes loop and
migrated.apiMode branch duplicate provider-id assignment and secret-migration
logic; extract a shared helper (e.g., migrateApiModeEntry(entry, {
providerSecrets, customOpenAIProviders, providerIdSet, providerIdRenameLookup,
normalizeText, normalizeProviderId, ensureUniqueProviderId,
API_MODE_GROUP_TO_PROVIDER_ID, legacyCustomProviderSecret, customProviderCounter
})) that encapsulates: normalizing/renaming providerId, assigning provider from
customUrl (creating provider entries and updating
customOpenAIProviders/providerIdSet/customProviderCounter), moving apiKey into
providerSecrets (setting dirty flags), and clearing legacy fields
(apiKey/customUrl/providerId) while returning flags (e.g.,
providerIdAssignedFromLegacyCustomUrl, entryDirty, any changes to counters/sets)
so both the customApiModes loop and the migrated.apiMode branch call this helper
and update local dirty variables (dirty, customApiModesDirty,
customProvidersDirty, selectedApiModeDirty) accordingly.
tests/unit/services/apis/provider-registry.test.mjs (1)

1-35: Add regression coverage for provider URL normalization.

Given provider routing is now centralized, it’s worth adding tests for /v1-suffixed base URLs to prevent /v1/v1 regressions.

🧪 Test extension sketch
 import assert from 'node:assert/strict'
 import { test } from 'node:test'
-import { resolveEndpointTypeForSession } from '../../../../src/services/apis/provider-registry.mjs'
+import {
+  resolveEndpointTypeForSession,
+  resolveOpenAICompatibleRequest,
+} from '../../../../src/services/apis/provider-registry.mjs'
@@
 test('resolveEndpointTypeForSession falls back to legacy modelName when apiMode is missing', () => {
   const session = {
     modelName: 'gptApiInstruct-text-davinci-003',
   }
 
   assert.equal(resolveEndpointTypeForSession(session), 'completion')
 })
+
+test('resolveOpenAICompatibleRequest handles customOpenAiApiUrl ending with /v1', () => {
+  const config = {
+    customOpenAiApiUrl: 'https://api.openai.com/v1',
+    providerSecrets: {},
+    customOpenAIProviders: [],
+  }
+  const session = {
+    apiMode: { groupName: 'chatgptApiModelKeys', itemName: 'chatgptApi35' },
+  }
+  const request = resolveOpenAICompatibleRequest(config, session)
+  assert.equal(request.requestUrl, 'https://api.openai.com/v1/chat/completions')
+})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/services/apis/provider-registry.test.mjs` around lines 1 - 35, Add
regression tests in tests/unit/services/apis/provider-registry.test.mjs that
cover provider base URL normalization to prevent double "/v1" (e.g., baseUrl
"https://api.example.com" vs "https://api.example.com/v1"). Locate the provider
routing/normalization logic in src/services/apis/provider-registry.mjs (same
module as resolveEndpointTypeForSession) and add tests that instantiate
sessions/providers with baseUrl values both with and without a trailing "/v1"
and assert the resolved provider endpoint (the final routed URL/path) does not
contain duplicated "/v1" segments; mirror the style of the existing
resolveEndpointTypeForSession tests for consistency. Ensure cases include:
baseUrl ending with "/v1", baseUrl without "/v1", and a baseUrl that already has
a trailing slash.
src/popup/sections/GeneralPart.jsx (1)

199-213: Verify trailing slash handling in baseUrl.

The billing URL is constructed as ${openAiApiUrl}/dashboard/billing/credit_grants. If selectedProvider.baseUrl includes a trailing slash, this could result in a double-slash (//dashboard). While most servers handle this gracefully, consider normalizing:

🛠️ Suggested defensive normalization
 const getBalance = async () => {
-  const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
+  const openAiApiUrl = (selectedProvider?.baseUrl || config.customOpenAiApiUrl || '').replace(/\/+$/, '')
   const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 199 - 213, The getBalance
function builds openAiApiUrl from selectedProvider?.baseUrl or
config.customOpenAiApiUrl and may produce a double slash when concatenating
"/dashboard/billing/credit_grants"; normalize the base URL by trimming any
trailing slash before appending the path (e.g., remove trailing '/' from
selectedProvider.baseUrl or config.customOpenAiApiUrl) so the final fetch URL is
always `${normalizedBaseUrl}/dashboard/billing/credit_grants`; update getBalance
to derive a normalizedOpenAiApiUrl variable and use that in the fetch and in the
call to checkBilling/openUrl.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/apis/openai-api.mjs`:
- Around line 59-70: Remove the dead wrapper functions
generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApi from
src/services/apis/openai-api.mjs: delete both function declarations and their
exports, and ensure no other module imports or references them; rely on the
unified generateAnswersWithOpenAICompatibleApi (the active replacement) instead,
so if there is an export list or default export that included the removed names,
update it to only export generateAnswersWithOpenAICompatibleApi and any other
live symbols.

In `@tests/unit/services/apis/openai-api-compat.test.mjs`:
- Around line 82-118: Update this test to reflect the fixed behavior where onEnd
calls finish(): after invoking generateAnswersWithChatgptApiCompat with the
mocked SSE that ends without finish_reason, assert that the final posted message
includes answer: null, done: true, and the session object (i.e., { answer: null,
done: true, session }), verify that session.conversationRecords contains the
partial answer ("Partial"), and keep the existing assertion that an intermediate
message with done:false and answer:"Partial" was posted; locate this behavior
around generateAnswersWithChatgptApiCompat, onEnd/finish(),
session.conversationRecords, and port.postedMessages to change the expected
final message and add the conversationRecords check.

---

Duplicate comments:
In `@src/services/apis/provider-registry.mjs`:
- Around line 143-155: Normalize and sanitize provider base URLs in
buildBuiltinProviders so appended versioned paths aren't duplicated: when
building the OpenAI and Ollama entries (provider.id === 'openai' and provider.id
=== 'ollama'), trim slashes and also strip any trailing "/v1" (or leading "/v1")
from config.customOpenAiApiUrl and config.ollamaEndpoint before composing
baseUrl; continue to append the intended "/v1" for Ollama only after this
normalization. Use the existing trimSlashes helper as a starting point and
perform an extra step to remove a trailing "/v1" segment so URLs like ".../v1"
don't become ".../v1/v1".

---

Nitpick comments:
In `@src/config/index.mjs`:
- Around line 861-1057: The customApiModes loop and migrated.apiMode branch
duplicate provider-id assignment and secret-migration logic; extract a shared
helper (e.g., migrateApiModeEntry(entry, { providerSecrets,
customOpenAIProviders, providerIdSet, providerIdRenameLookup, normalizeText,
normalizeProviderId, ensureUniqueProviderId, API_MODE_GROUP_TO_PROVIDER_ID,
legacyCustomProviderSecret, customProviderCounter })) that encapsulates:
normalizing/renaming providerId, assigning provider from customUrl (creating
provider entries and updating
customOpenAIProviders/providerIdSet/customProviderCounter), moving apiKey into
providerSecrets (setting dirty flags), and clearing legacy fields
(apiKey/customUrl/providerId) while returning flags (e.g.,
providerIdAssignedFromLegacyCustomUrl, entryDirty, any changes to counters/sets)
so both the customApiModes loop and the migrated.apiMode branch call this helper
and update local dirty variables (dirty, customApiModesDirty,
customProvidersDirty, selectedApiModeDirty) accordingly.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 199-213: The getBalance function builds openAiApiUrl from
selectedProvider?.baseUrl or config.customOpenAiApiUrl and may produce a double
slash when concatenating "/dashboard/billing/credit_grants"; normalize the base
URL by trimming any trailing slash before appending the path (e.g., remove
trailing '/' from selectedProvider.baseUrl or config.customOpenAiApiUrl) so the
final fetch URL is always
`${normalizedBaseUrl}/dashboard/billing/credit_grants`; update getBalance to
derive a normalizedOpenAiApiUrl variable and use that in the fetch and in the
call to checkBilling/openUrl.
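The trailing-slash fix suggested above could look like the following sketch. The helper names are illustrative (not taken from `GeneralPart.jsx`), and the surrounding component code is assumed rather than shown:

```javascript
// Hypothetical helpers mirroring the review suggestion; the component's
// actual variable names may differ.
function normalizeOpenAiApiUrl(baseUrl) {
  // Trim whitespace and any trailing slashes so path joins are predictable.
  return String(baseUrl || '').trim().replace(/\/+$/, '')
}

function buildCreditGrantsUrl(baseUrl) {
  return `${normalizeOpenAiApiUrl(baseUrl)}/dashboard/billing/credit_grants`
}
```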

In `@tests/unit/services/apis/provider-registry.test.mjs`:
- Around line 1-35: Add regression tests in
tests/unit/services/apis/provider-registry.test.mjs that cover provider base URL
normalization to prevent double "/v1" (e.g., baseUrl "https://api.example.com"
vs "https://api.example.com/v1"). Locate the provider routing/normalization
logic in src/services/apis/provider-registry.mjs (same module as
resolveEndpointTypeForSession) and add tests that instantiate sessions/providers
with baseUrl values both with and without a trailing "/v1" and assert the
resolved provider endpoint (the final routed URL/path) does not contain
duplicated "/v1" segments; mirror the style of the existing
resolveEndpointTypeForSession tests for consistency. Ensure cases include:
baseUrl ending with "/v1", baseUrl without "/v1", and a baseUrl that already has
a trailing slash.
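The requested regression cases can be sketched in miniature like this. `resolveProviderEndpoint` is a hypothetical stand-in for the registry's URL-resolution logic, used only to show the three base-URL shapes the tests should cover:

```javascript
// Stand-in for the registry's resolution logic (assumption, not the real code).
function resolveProviderEndpoint(baseUrl) {
  const normalized = String(baseUrl || '')
    .trim()
    .replace(/\/+$/, '')
    .replace(/\/v1$/i, '')
  return `${normalized}/v1/chat/completions`
}

// The three shapes the review asks to cover: no /v1, trailing /v1, trailing slash.
const cases = [
  'https://api.example.com',
  'https://api.example.com/v1',
  'https://api.example.com/v1/',
]
for (const baseUrl of cases) {
  if (resolveProviderEndpoint(baseUrl).includes('/v1/v1')) {
    throw new Error(`duplicated /v1 segment for ${baseUrl}`)
  }
}
```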

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 407d734 and 3b4527c.

📒 Files selected for processing (36)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/ollama-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/openrouter-api.mjs
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/services/apis/custom-api.mjs
  • src/services/wrappers.mjs


Copilot AI left a comment


Pull request overview

Copilot reviewed 36 out of 36 changed files in this pull request and generated 2 comments.



@qodo-code-review
Contributor

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Credential exfiltration

Description: The new balance-check flow builds openAiApiUrl from selectedProvider?.baseUrl (which may
be user-/config-controlled for custom providers) and sends Authorization: Bearer
${selectedProviderApiKey} to ${openAiApiUrl}/dashboard/billing/credit_grants, which can
exfiltrate API keys to an attacker-controlled host if a malicious provider/base URL is
imported or selected.
GeneralPart.jsx [111-135]

Referred Code
const selectedProviderSession =
  config.apiMode && typeof config.apiMode === 'object'
    ? { apiMode: config.apiMode }
    : { modelName: config.modelName }
const selectedProviderRequest = resolveOpenAICompatibleRequest(config, selectedProviderSession)
const selectedProviderId = selectedProviderRequest?.providerId || ''
const selectedProvider = selectedProviderRequest
  ? getProviderById(config, selectedProviderRequest.providerId)
  : null
const selectedProviderApiKey = selectedProviderRequest?.apiKey || ''
const isUsingOpenAICompatibleProvider = Boolean(selectedProviderRequest)

const getBalance = async () => {
  const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
  const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${selectedProviderApiKey}`,
    },
  })
  if (response.ok) setBalance((await response.json()).total_available.toFixed(2))


 ... (clipped 4 lines)
SSRF via custom URL

Description: resolveUrlFromProvider allows custom/provider-supplied endpoint URLs (including http://
and localhost/private-network targets) to be used for background fetch requests, creating
a realistic SSRF-style primitive against local services if an attacker can
influence/import config or trick a user into selecting a malicious custom provider URL.
provider-registry.mjs [277-317]

Referred Code
function resolveUrlFromProvider(provider, endpointType, config, session) {
  if (!provider) return ''

  const apiModeCustomUrl =
    endpointType === 'chat' &&
    session?.apiMode &&
    typeof session.apiMode === 'object' &&
    session.apiMode.groupName === 'customApiModelKeys' &&
    !toStringOrEmpty(session.apiMode.providerId).trim()
      ? toStringOrEmpty(session.apiMode.customUrl).trim()
      : ''
  if (apiModeCustomUrl) return apiModeCustomUrl

  if (endpointType === 'completion') {
    if (provider.completionsUrl) return provider.completionsUrl
    if (provider.baseUrl && provider.completionsPath) {
      return joinUrl(provider.baseUrl, provider.completionsPath)
    }
  } else {
    if (provider.chatCompletionsUrl) return provider.chatCompletionsUrl
    if (provider.baseUrl && provider.chatCompletionsPath) {


 ... (clipped 20 lines)
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Missing fetch handling: The new getBalance implementation performs network fetch/checkBilling calls without a
surrounding try/catch, so network failures can surface as unhandled promise rejections and
provide no graceful UI degradation.

Referred Code
const getBalance = async () => {
  const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
  const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${selectedProviderApiKey}`,
    },
  })
  if (response.ok) setBalance((await response.json()).total_available.toFixed(2))
  else {
    const billing = await checkBilling(selectedProviderApiKey, openAiApiUrl)
    if (billing && billing.length > 2 && billing[2]) setBalance(`${billing[2].toFixed(2)}`)
    else openUrl('https://platform.openai.com/account/usage')
  }


Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
No audit trail: The PR introduces sensitive configuration migration and secret consolidation logic but
does not add any explicit audit-trail logging (e.g., who/when/what/outcome) for these
critical actions in the code shown.

Referred Code
function migrateUserConfig(options) {
  const migrated = { ...options }
  let dirty = false

  if (migrated.customChatGptWebApiUrl === 'https://chat.openai.com') {
    migrated.customChatGptWebApiUrl = 'https://chatgpt.com'
    dirty = true
  }

  const hasProviderSecretsRecord = isPlainObject(migrated.providerSecrets)
  const providerSecrets = hasProviderSecretsRecord ? { ...migrated.providerSecrets } : {}
  if (!hasProviderSecretsRecord) {
    dirty = true
  }
  for (const [legacyKey, providerId] of Object.entries(LEGACY_SECRET_KEY_TO_PROVIDER_ID)) {
    const legacyKeyValue = normalizeText(migrated[legacyKey])
    const existingProviderSecret = normalizeText(providerSecrets[providerId])
    if (legacyKeyValue && !existingProviderSecret) {
      providerSecrets[providerId] = legacyKeyValue
      dirty = true
    }


 ... (clipped 379 lines)


Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Error propagation path: The new generateAnswersWithOpenAICompatibleApi throws a runtime Error on unknown provider
configuration, but it is unclear from the diff whether this message (or any downstream
stack traces) can be surfaced directly to end users.

Referred Code
export async function generateAnswersWithOpenAICompatibleApi(port, question, session, config) {
  const request = resolveOpenAICompatibleRequest(config, session)
  if (!request) {
    throw new Error('Unknown OpenAI-compatible provider configuration')
  }

  const model = resolveModelName(session, config)
  await generateAnswersWithOpenAICompatible({
    port,
    question,
    session,
    endpointType: request.endpointType,
    requestUrl: request.requestUrl,
    model,
    apiKey: request.apiKey,
    provider: request.providerId,
    allowLegacyResponseField: request.provider.allowLegacyResponseField,
  })


Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Logs error object: The PR adds console.warn('Ollama keep_alive request failed:', error) which may
log verbose error objects; confirming this cannot include sensitive details (e.g., secrets
embedded in error messages) requires verification of runtime error shapes.

Referred Code
if (request.providerId === 'ollama') {
  await touchOllamaKeepAlive(config, model, request.apiKey).catch((error) => {
    console.warn('Ollama keep_alive request failed:', error)
  })
}


Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
URL validation depends: The new custom provider editor relies on parseChatCompletionsEndpointUrl for validating
user-provided endpoint URLs, but that function’s guarantees (scheme enforcement, localhost
restrictions, etc.) are not visible in this diff and must be verified.

Referred Code
const onSaveProviderEditing = (event) => {
  event.preventDefault()
  const providerName = providerDraft.name.trim()
  const parsedEndpoint = parseChatCompletionsEndpointUrl(providerDraft.apiUrl)
  const nextProviderDraftValidation = {
    name: !providerName,
    apiUrl: !parsedEndpoint.valid,
  }
  if (nextProviderDraftValidation.name || nextProviderDraftValidation.apiUrl) {
    setProviderDraftValidation(nextProviderDraftValidation)
    if (nextProviderDraftValidation.name) {
      providerNameInputRef.current?.focus()
    } else {
      providerBaseUrlInputRef.current?.focus()
    }
    return
  }
  setProviderDraftValidation(defaultProviderDraftValidation)

  if (providerEditingId) {
    if (pendingNewProvider && pendingNewProvider.id === providerEditingId) {


 ... (clipped 46 lines)


Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@qodo-code-review
Contributor

PR Code Suggestions ✨

Explore these optional code suggestions:

Category / Suggestion / Impact
High-level
Decompose the monolithic configuration migration

The migrateUserConfig function in src/config/index.mjs is monolithic and
complex, handling numerous migration tasks. It should be decomposed into
smaller, single-purpose functions to improve readability, testability, and
robustness.

Examples:

src/config/index.mjs [806-1157]
function migrateUserConfig(options) {
  const migrated = { ...options }
  let dirty = false

  if (migrated.customChatGptWebApiUrl === 'https://chat.openai.com') {
    migrated.customChatGptWebApiUrl = 'https://chatgpt.com'
    dirty = true
  }

  const hasProviderSecretsRecord = isPlainObject(migrated.providerSecrets)

 ... (clipped 342 lines)

Solution Walkthrough:

Before:

function migrateUserConfig(options) {
  const migrated = { ...options };
  let dirty = false;

  // ... logic for migrating legacy secrets ...

  // ... logic for normalizing and migrating custom providers ...

  // ... logic for migrating custom API modes ...

  // ... logic for migrating the selected API mode ...

  // ... logic for reverse-syncing secrets ...

  return { migrated, dirty };
}

After:

function migrateLegacySecrets(config, providerSecrets) { ... }
function migrateCustomProviders(config, providerIdSet) { ... }
function migrateApiModes(config, providers, providerSecrets) { ... }
function migrateSelectedApiMode(config, providers, providerSecrets) { ... }

function migrateUserConfig(options) {
  let { migrated, dirty } = { migrated: { ...options }, dirty: false };

  ({ migrated, dirty } = migrateLegacySecrets(migrated, dirty));
  ({ migrated, dirty } = migrateCustomProviders(migrated, dirty));
  ({ migrated, dirty } = migrateApiModes(migrated, dirty));
  ({ migrated, dirty } = migrateSelectedApiMode(migrated, dirty));
  // ... other migration steps ...

  return { migrated, dirty };
}
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies that migrateUserConfig is a monolithic and overly complex function responsible for critical user data migration, and breaking it down would significantly improve maintainability and reduce the risk of bugs.

High
Possible issue
Save partial answers on stream abort

Call finish() within the onEnd callback to save partial answers when a stream is
aborted, preventing data loss.

src/services/apis/openai-compatible-core.mjs [147-151]

 async onEnd() {
-  if (!finished) port.postMessage({ done: true })
+  if (!finished) finish()
   port.onMessage.removeListener(messageListener)
   port.onDisconnect.removeListener(disconnectListener)
 },
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a data loss scenario where partially received answers are not saved if the stream is aborted, and provides a valid fix.

Medium
Use custom model URL for completions

Update the legacy provider to use the custom model URL for completion endpoints
instead of the default OpenAI URL.

src/services/apis/provider-registry.mjs [302-314]

 if (provider.id === 'legacy-custom-default') {
   if (endpointType === 'completion') {
     const baseUrl = normalizeBaseUrlWithoutVersionSuffix(
-      config.customOpenAiApiUrl,
-      'https://api.openai.com',
+      config.customModelApiUrl,
+      'http://localhost:8000/v1'
     )
-    return `${baseUrl}/v1/completions`
+    return `${baseUrl}/completions`
   }
   return (
     toStringOrEmpty(config.customModelApiUrl).trim() ||
     'http://localhost:8000/v1/chat/completions'
   )
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that the legacy provider was incorrectly using the OpenAI URL for completions instead of the custom model URL, and provides a logical fix.

Medium
General
Use deep equality for object comparison

Replace unreliable JSON.stringify comparisons with a robust deep equality check
to avoid unnecessary storage writes.

src/config/index.mjs [1168-1176]

-if (JSON.stringify(options.customApiModes) !== JSON.stringify(migrated.customApiModes)) {
+if (!isEqual(options.customApiModes, migrated.customApiModes)) {
   payload.customApiModes = migrated.customApiModes
 }
 if (
-  JSON.stringify(options.customOpenAIProviders) !==
-  JSON.stringify(migrated.customOpenAIProviders)
+  !isEqual(options.customOpenAIProviders, migrated.customOpenAIProviders)
 ) {
   payload.customOpenAIProviders = migrated.customOpenAIProviders
 }
Suggestion importance[1-10]: 6


Why: The suggestion correctly points out the unreliability of JSON.stringify for object comparison and proposes a more robust deep equality check, preventing unnecessary storage writes.

Low
Clear custom fields for non-custom modes

Clear obsolete customUrl and customName fields when saving a non-custom API mode
to prevent persisting stale data.

src/popup/sections/ApiModes.jsx [50-59]

 function sanitizeApiModeForSave(apiMode) {
   const nextApiMode = { ...apiMode }
   if (nextApiMode.groupName !== 'customApiModelKeys') {
     nextApiMode.providerId = ''
     nextApiMode.apiKey = ''
+    nextApiMode.customUrl = ''
+    nextApiMode.customName = ''
     return nextApiMode
   }
   if (!nextApiMode.providerId) nextApiMode.providerId = LEGACY_CUSTOM_PROVIDER_ID
   return nextApiMode
 }
Suggestion importance[1-10]: 5


Why: This is a good suggestion for data hygiene, as it clears obsolete custom fields when saving a non-custom API mode, preventing stale data from being persisted.

Low

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. More of your lovely PRs please.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (5)
src/background/index.mjs (1)

8-8: ⚠️ Potential issue | 🟠 Major

Add .mjs extension to openai-api import.

The import is missing the explicit .mjs extension, which will cause a runtime resolution failure. All other imports in this file and throughout the codebase use explicit extensions.

-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` at line 8, The import of
generateAnswersWithOpenAICompatibleApi is missing the explicit .mjs extension
and will fail at runtime; update the import statement that references
generateAnswersWithOpenAICompatibleApi (in this module) to include the .mjs
extension (e.g., '../services/apis/openai-api.mjs') so it matches the rest of
the codebase's explicit-extension pattern and resolves correctly.
src/services/apis/provider-registry.mjs (2)

302-313: ⚠️ Potential issue | 🟡 Minor

Legacy custom completion fallback should honor customModelApiUrl too.

At Lines 304-307, completion fallback uses config.customOpenAiApiUrl, while chat fallback at Lines 311-312 uses config.customModelApiUrl. This inconsistency can route legacy completion traffic to the wrong endpoint.

🔧 Proposed fix
   if (provider.id === 'legacy-custom-default') {
     if (endpointType === 'completion') {
-      const baseUrl = normalizeBaseUrlWithoutVersionSuffix(
-        config.customOpenAiApiUrl,
-        'https://api.openai.com',
-      )
-      return `${baseUrl}/v1/completions`
+      const customChatUrl = toStringOrEmpty(config.customModelApiUrl).trim()
+      if (customChatUrl) {
+        return customChatUrl.replace(/\/chat\/completions$/i, '/completions')
+      }
+      return 'http://localhost:8000/v1/completions'
     }
     return (
       toStringOrEmpty(config.customModelApiUrl).trim() ||
       'http://localhost:8000/v1/chat/completions'
     )
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 302 - 313, The
completion branch for provider.id === 'legacy-custom-default' currently uses
config.customOpenAiApiUrl (via normalizeBaseUrlWithoutVersionSuffix) while the
chat branch uses config.customModelApiUrl, causing inconsistent routing; update
the completion branch in the same conditional (the block checking provider.id
=== 'legacy-custom-default' and endpointType === 'completion') to prefer
config.customModelApiUrl (using toStringOrEmpty(...).trim() or the existing
localhost default) and only fall back to
normalizeBaseUrlWithoutVersionSuffix(config.customOpenAiApiUrl,
'https://api.openai.com') if customModelApiUrl is empty, keeping the helper
normalizeBaseUrlWithoutVersionSuffix and existing default URL behavior.

193-197: ⚠️ Potential issue | 🟠 Major

Use a stable fallback ID for custom providers, not array index.

At Line 195, custom-provider-${index + 1} changes when providers are reordered/removed, which can break providerSecrets and persisted apiMode.providerId references.

🔧 Proposed fix
 function normalizeCustomProvider(provider, index) {
   if (!provider || typeof provider !== 'object') return null
-  const id = toStringOrEmpty(provider.id).trim() || `custom-provider-${index + 1}`
+  const fallbackSeed = [
+    toStringOrEmpty(provider.name).trim(),
+    trimSlashes(provider.baseUrl),
+    toStringOrEmpty(provider.chatCompletionsUrl).trim(),
+  ]
+    .filter(Boolean)
+    .join('-')
+  const id =
+    normalizeProviderId(provider.id) ||
+    normalizeProviderId(fallbackSeed) ||
+    `custom-provider-${index + 1}`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 193 - 197, The fallback
ID for custom providers in normalizeCustomProvider currently uses the array
index (`custom-provider-${index + 1}`), which is unstable; replace it with a
stable, deterministic fallback derived from the provider's content (e.g., use
provider.name/provider.label/provider.url when present, or compute a short
stable hash of JSON.stringify(provider)) so IDs won't change when providers are
reordered or removed; update normalizeCustomProvider to produce a URL-safe
trimmed id and ensure compatibility with existing consumers like providerSecrets
and apiMode.providerId.
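One way to derive the content-based fallback ID described above is a small deterministic hash over the provider's stable fields. This is a sketch under assumptions: the field set (`name`, `baseUrl`, `chatCompletionsUrl`) and the djb2-style hash are illustrative choices, not the repository's implementation:

```javascript
// Content-derived fallback ID (djb2-style hash); field names beyond
// id/name/baseUrl are illustrative assumptions.
function stableProviderFallbackId(provider) {
  const seed = JSON.stringify({
    name: provider?.name ?? '',
    baseUrl: provider?.baseUrl ?? '',
    chatCompletionsUrl: provider?.chatCompletionsUrl ?? '',
  })
  let hash = 5381
  for (let i = 0; i < seed.length; i++) {
    // Multiply-xor variant of djb2, kept in uint32 range with >>> 0.
    hash = ((hash * 33) ^ seed.charCodeAt(i)) >>> 0
  }
  return `custom-provider-${hash.toString(36)}`
}
```

Because the ID depends only on the provider's content, reordering or removing other entries no longer changes it.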
src/services/apis/openai-compatible-core.mjs (1)

147-151: ⚠️ Potential issue | 🟠 Major

Finalize partial streams in onEnd to avoid dropping accumulated answer.

At Line 148, unfinished streams only emit { done: true } and skip pushRecord/session completion flow.

🔧 Proposed fix
     async onEnd() {
-      if (!finished) port.postMessage({ done: true })
+      if (!finished) finish()
       port.onMessage.removeListener(messageListener)
       port.onDisconnect.removeListener(disconnectListener)
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 147 - 151, The
onEnd handler currently only posts { done: true } when finished is false, which
drops any partially accumulated answer; update the async onEnd() in
openai-compatible-core.mjs to first finalize the partial stream by invoking the
same pushRecord/session completion flow used for finished streams (e.g., call
pushRecord with the accumulated chunk/state and mark session complete) before
posting { done: true }, then still remove messageListener and disconnectListener
from port; ensure you reference the existing finished, port, pushRecord,
messageListener and disconnectListener symbols so the partial-response path
mirrors the normal completion path.
src/popup/sections/ApiModes.jsx (1)

430-440: ⚠️ Potential issue | 🟡 Minor

Validate edited providerId against current providers before setting editor state.

At Lines 431-439, a removed provider ID can be carried into editingApiMode/providerSelector, then re-saved as stale state.

🔧 Proposed fix
                     const isCustomApiMode = apiMode.groupName === 'customApiModelKeys'
-                    const providerId = isCustomApiMode
-                      ? apiMode.providerId || LEGACY_CUSTOM_PROVIDER_ID
-                      : ''
+                    const hasProvider = customProviders.some(
+                      (provider) => provider.id === apiMode.providerId,
+                    )
+                    const providerId = isCustomApiMode
+                      ? hasProvider
+                        ? apiMode.providerId
+                        : LEGACY_CUSTOM_PROVIDER_ID
+                      : ''
                     setEditingApiMode({
                       ...defaultApiMode,
                       ...apiMode,
                       providerId,
                     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 430 - 440, When preparing the
editing state for an API mode, validate the computed providerId against the
current providers before calling setEditingApiMode and setProviderSelector:
compute isCustomApiMode and derive providerId as before, then check that
providerId exists in the active providers collection (e.g., providers or
providersById) and if it does not, replace it with LEGACY_CUSTOM_PROVIDER_ID (or
'' per existing logic), then call setEditingApiMode({...defaultApiMode,
...apiMode, providerId: validatedProviderId}) and
setProviderSelector(validatedProviderId || LEGACY_CUSTOM_PROVIDER_ID) and
setProviderDraft(defaultProviderDraft) so a removed/stale providerId cannot be
persisted.
🧹 Nitpick comments (2)
src/popup/sections/api-modes-provider-utils.mjs (1)

5-10: Consider extracting shared normalization utilities.

normalizeProviderId is duplicated between this file and src/config/index.mjs (lines 739-744). Consider extracting to a shared utility module to reduce duplication and ensure consistent behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/api-modes-provider-utils.mjs` around lines 5 - 10,
normalizeProviderId is duplicated; extract the normalization logic into a single
shared utility function (e.g., export normalizeProviderId or normalizeTextId)
and replace the duplicate implementations in both the current module and the
other module where the same code exists; update both modules to import the
shared function, keep the behavior identical (normalizeText, toLowerCase,
replace non-alphanumerics with '-', trim leading/trailing '-' ), and remove the
old duplicate function definitions (ensure exported name matches imports).
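A shared implementation following the steps this prompt describes (lowercase, collapse non-alphanumeric runs to `-`, trim leading/trailing dashes) might look like the sketch below. The exact semantics of the existing duplicated functions are assumed, not verified against the repo:

```javascript
// Sketch of a single shared normalizer; behavior is inferred from the
// review comment, not copied from src/config/index.mjs.
function normalizeProviderId(value) {
  return String(value ?? '')
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse each non-alphanumeric run to one dash
    .replace(/^-+|-+$/g, '') // trim leading/trailing dashes
}
```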
tests/unit/services/apis/openai-api-compat.test.mjs (1)

82-118: Test documents fallback behavior where partial answer is not recorded.

This test verifies that when the SSE stream ends without a finish_reason, the final message is { done: true } without the session object. This means session.conversationRecords is not updated with the partial answer ("Partial").

If this is intentional (treating incomplete streams as failures), the behavior is documented. If partial answers should be preserved for user visibility, consider updating onEnd in the core to call finish() to record the partial answer before emitting the done signal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/services/apis/openai-api-compat.test.mjs` around lines 82 - 118,
The test shows that when the SSE stream ends without a finish_reason the code
emits { done: true } but does not persist the partial answer; update the SSE end
handling in generateAnswersWithChatgptApiCompat (the onEnd handler) to check for
any buffered partial content (e.g., currentAnswer / accumulated delta) and call
finish() to append that partial answer to session.conversationRecords before
posting the final { done: true } message; ensure you reference and update the
same session object so the test expecting no session on final message will
change accordingly if you want to preserve partials.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/popup/sections/ApiModes.jsx`:
- Line 372: Replace the double-quoted string literal used for the placeholder
prop in the JSX element inside the ApiModes component with a single-quoted
string to match the project style guide; locate the JSX element that sets
placeholder="https://api.example.com/v1/chat/completions" (within ApiModes.jsx)
and change it to use single quotes for the string literal.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 123-135: The getBalance function's billing fallback treats 0 as
falsy and redirects to usage; update the conditional that checks billing[2]
(inside the else branch after checkBilling(selectedProviderApiKey,
openAiApiUrl)) to explicitly test for null/undefined (e.g., billing &&
billing.length > 2 && billing[2] !== null && billing[2] !== undefined) so zero
is accepted, then call setBalance with billing[2].toFixed(2) (or
String(Number(billing[2]).toFixed(2))) instead of falling back to openUrl;
ensure references are to getBalance, checkBilling, setBalance, openUrl,
selectedProviderApiKey and openAiApiUrl.

In `@src/popup/sections/provider-secret-utils.mjs`:
- Around line 5-16: The code currently trims providerId into
normalizedProviderId but doesn't reject whitespace-only IDs, causing an empty
string key to be added to providerSecrets; update the logic in the module where
providerId is handled (symbols: providerId, normalizedProviderId, payload,
providerSecrets, previousProviderSecret) to trim providerId first and if
normalizedProviderId === '' return {} (or otherwise skip building the payload)
before computing previousProviderSecret or constructing payload so no
empty-string property is written to providerSecrets.
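The guard described above can be sketched as follows. The payload shape and function name follow the review comment and are assumptions about `provider-secret-utils.mjs`, not its actual API:

```javascript
// Hypothetical sketch of the suggested early-return guard.
function buildProviderSecretPayload(providerSecrets, providerId, secret) {
  const normalizedProviderId = String(providerId ?? '').trim()
  // Reject empty or whitespace-only IDs before touching providerSecrets,
  // so no empty-string key is ever written.
  if (!normalizedProviderId) return {}
  return {
    providerSecrets: {
      ...providerSecrets,
      [normalizedProviderId]: secret,
    },
  }
}
```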

In `@src/services/apis/openai-api.mjs`:
- Around line 24-31: touchOllamaKeepAlive builds a relative/invalid URL when
config.ollamaEndpoint is empty or ends with a /v1 suffix; update the call to
normalizeBaseUrl inside touchOllamaKeepAlive to supply a safe default base (e.g.
"http://localhost:11434") and ensure the normalized value strips any trailing
"/v1" or trailing slashes before appending "/api/generate". In short: modify
touchOllamaKeepAlive to call normalizeBaseUrl(config.ollamaEndpoint ||
DEFAULT_OLLAMA_ENDPOINT) and ensure the returned base has no "/v1" or trailing
slash so fetch(`${base}/api/generate`, ...) always yields a valid absolute URL
(adjust or add a small normalization helper if needed).
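One way to sketch that normalization; `DEFAULT_OLLAMA_ENDPOINT` and the helper name are illustrative, not the real symbols:

```javascript
const DEFAULT_OLLAMA_ENDPOINT = 'http://localhost:11434' // assumed default

// Produce a base with no trailing slash and no /v1 suffix, so appending
// /api/generate always yields a valid absolute URL.
function normalizeOllamaBase(endpoint) {
  let base = String(endpoint || DEFAULT_OLLAMA_ENDPOINT).trim()
  base = base.replace(/\/+$/, '') // drop trailing slashes
  base = base.replace(/\/v1$/, '') // drop a trailing /v1 suffix
  return base || DEFAULT_OLLAMA_ENDPOINT
}
// fetch(`${normalizeOllamaBase(config.ollamaEndpoint)}/api/generate`, ...)
```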

---

Duplicate comments:
In `@src/background/index.mjs`:
- Line 8: The import of generateAnswersWithOpenAICompatibleApi is missing the
explicit .mjs extension and will fail at runtime; update the import statement
that references generateAnswersWithOpenAICompatibleApi (in this module) to
include the .mjs extension (e.g., '../services/apis/openai-api.mjs') so it
matches the rest of the codebase's explicit-extension pattern and resolves
correctly.

In `@src/popup/sections/ApiModes.jsx`:
- Around line 430-440: When preparing the editing state for an API mode,
validate the computed providerId against the current providers before calling
setEditingApiMode and setProviderSelector: compute isCustomApiMode and derive
providerId as before, then check that providerId exists in the active providers
collection (e.g., providers or providersById) and if it does not, replace it
with LEGACY_CUSTOM_PROVIDER_ID (or '' per existing logic), then call
setEditingApiMode({...defaultApiMode, ...apiMode, providerId:
validatedProviderId}) and setProviderSelector(validatedProviderId ||
LEGACY_CUSTOM_PROVIDER_ID) and setProviderDraft(defaultProviderDraft) so a
removed/stale providerId cannot be persisted.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 147-151: The onEnd handler currently only posts { done: true }
when finished is false, which drops any partially accumulated answer; update the
async onEnd() in openai-compatible-core.mjs to first finalize the partial stream
by invoking the same pushRecord/session completion flow used for finished
streams (e.g., call pushRecord with the accumulated chunk/state and mark session
complete) before posting { done: true }, then still remove messageListener and
disconnectListener from port; ensure you reference the existing finished, port,
pushRecord, messageListener and disconnectListener symbols so the
partial-response path mirrors the normal completion path.
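The ordering that comment asks for can be shown with a minimal stand-in; every symbol here (`makeOnEnd`, `state`, `finish`) is illustrative, not the module's real code:

```javascript
// Finalize any partial answer before signaling done, mirroring the
// normal completion path.
function makeOnEnd({ state, port, finish }) {
  return async function onEnd() {
    if (!state.finished && state.answer) finish() // persist partial answer first
    port.postMessage({ done: true })
  }
}
```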

In `@src/services/apis/provider-registry.mjs`:
- Around line 302-313: The completion branch for provider.id ===
'legacy-custom-default' currently uses config.customOpenAiApiUrl (via
normalizeBaseUrlWithoutVersionSuffix) while the chat branch uses
config.customModelApiUrl, causing inconsistent routing; update the completion
branch in the same conditional (the block checking provider.id ===
'legacy-custom-default' and endpointType === 'completion') to prefer
config.customModelApiUrl (using toStringOrEmpty(...).trim() or the existing
localhost default) and only fall back to
normalizeBaseUrlWithoutVersionSuffix(config.customOpenAiApiUrl,
'https://api.openai.com') if customModelApiUrl is empty, keeping the helper
normalizeBaseUrlWithoutVersionSuffix and existing default URL behavior.
- Around line 193-197: The fallback ID for custom providers in
normalizeCustomProvider currently uses the array index (`custom-provider-${index
+ 1}`), which is unstable; replace it with a stable, deterministic fallback
derived from the provider's content (e.g., use
provider.name/provider.label/provider.url when present, or compute a short
stable hash of JSON.stringify(provider)) so IDs won't change when providers are
reordered or removed; update normalizeCustomProvider to produce a URL-safe
trimmed id and ensure compatibility with existing consumers like providerSecrets
and apiMode.providerId.
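A content-derived ID could look like the following sketch; the slug/hash scheme is an assumption chosen to illustrate determinism, not the registry's actual logic:

```javascript
// Derive a stable, URL-safe ID from provider content instead of array index,
// so reordering or removing providers cannot change existing IDs.
function stableProviderId(provider) {
  const label = String(provider.name || provider.label || provider.url || '').trim()
  const slug = label
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
  if (slug) return slug
  // Fallback: short stable hash of the provider's JSON content.
  const source = JSON.stringify(provider)
  let hash = 0
  for (let i = 0; i < source.length; i++) {
    hash = (hash * 31 + source.charCodeAt(i)) >>> 0
  }
  return `custom-provider-${hash.toString(36)}`
}
```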

---

Nitpick comments:
In `@src/popup/sections/api-modes-provider-utils.mjs`:
- Around line 5-10: normalizeProviderId is duplicated; extract the normalization
logic into a single shared utility function (e.g., export normalizeProviderId or
normalizeTextId) and replace the duplicate implementations in both the current
module and the other module where the same code exists; update both modules to
import the shared function, keep the behavior identical (normalizeText,
toLowerCase, replace non-alphanumerics with '-', trim leading/trailing '-' ),
and remove the old duplicate function definitions (ensure exported name matches
imports).
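One possible shared implementation matching the behavior described above; the name `normalizeTextId` is a suggestion, not an existing export:

```javascript
// Single shared normalizer: lowercase, collapse non-alphanumerics to '-',
// trim leading/trailing '-'.
function normalizeTextId(value) {
  if (typeof value !== 'string') return ''
  return value
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
}
```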

In `@tests/unit/services/apis/openai-api-compat.test.mjs`:
- Around line 82-118: The test shows that when the SSE stream ends without a
finish_reason the code emits { done: true } but does not persist the partial
answer; update the SSE end handling in generateAnswersWithChatgptApiCompat (the
onEnd handler) to check for any buffered partial content (e.g., currentAnswer /
accumulated delta) and call finish() to append that partial answer to
session.conversationRecords before posting the final { done: true } message;
ensure you reference and update the same session object so the test expecting no
session on final message will change accordingly if you want to preserve
partials.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2cdb121 and 8ea5824.

📒 Files selected for processing (38)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/chatglm-api.mjs
🚧 Files skipped from review as they are similar to previous changes (16)
  • tests/unit/popup/provider-secret-utils.test.mjs
  • src/services/init-session.mjs
  • src/config/openai-provider-mappings.mjs
  • src/_locales/zh-hans/main.json
  • src/_locales/ko/main.json
  • src/_locales/de/main.json
  • src/_locales/ja/main.json
  • src/_locales/ru/main.json
  • src/_locales/zh-hant/main.json
  • src/_locales/tr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/es/main.json
  • src/services/wrappers.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • src/_locales/en/main.json

type="text"
ref={providerBaseUrlInputRef}
value={providerDraft.apiUrl}
placeholder="https://api.example.com/v1/chat/completions"

⚠️ Potential issue | 🟡 Minor

Use single quotes for string literals in JSX props.

At Line 372, the placeholder uses double quotes.

As per coding guidelines **/*.{js,jsx,mjs,ts,tsx}: Use single quotes instead of double quotes in imports and string literals.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` at line 372, Replace the double-quoted
string literal used for the placeholder prop in the JSX element inside the
ApiModes component with a single-quoted string to match the project style guide;
locate the JSX element that sets
placeholder="https://api.example.com/v1/chat/completions" (within ApiModes.jsx)
and change it to use single quotes for the string literal.

Copilot AI left a comment

Pull request overview

Copilot reviewed 38 out of 38 changed files in this pull request and generated 2 comments.



Comment on lines 60 to 67
const config = await getUserConfig()
const prompt =
  (await getCompletionPromptBase()) +
  getConversationPairs(
    session.conversationRecords.slice(-config.maxConversationContextLength),
    true,
  ) +
  `Human: ${question}\nAI: `
const apiUrl = config.customOpenAiApiUrl

let answer = ''
let finished = false
const finish = () => {
  finished = true
  pushRecord(session, question, answer)
  console.debug('conversation history', { content: session.conversationRecords })
  port.postMessage({ answer: null, done: true, session: session })
}
await fetchSSE(`${apiUrl}/v1/completions`, {
  method: 'POST',
  signal: controller.signal,
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    prompt: prompt,
    model,
    stream: true,
    max_tokens: config.maxResponseTokenLength,
    temperature: config.temperature,
    stop: '\nHuman',
  }),
  onMessage(message) {
    console.debug('sse message', message)
    if (finished) return
    if (message.trim() === '[DONE]') {
      finish()
      return
    }
    let data
    try {
      data = JSON.parse(message)
    } catch (error) {
      console.debug('json error', error)
      return
    }

    answer += data.choices[0].text
    port.postMessage({ answer: answer, done: false, session: null })

    if (data.choices[0]?.finish_reason) {
      finish()
      return
    }
  },
  async onStart() {},
  async onEnd() {
    port.postMessage({ done: true })
    port.onMessage.removeListener(messageListener)
    port.onDisconnect.removeListener(disconnectListener)
  },
  async onError(resp) {
    port.onMessage.removeListener(messageListener)
    port.onDisconnect.removeListener(disconnectListener)
    if (resp instanceof Error) throw resp
    const error = await resp.json().catch(() => ({}))
    throw new Error(!isEmpty(error) ? JSON.stringify(error) : `${resp.status} ${resp.statusText}`)
  },
await generateAnswersWithOpenAICompatible({
  port,
  question,
  session,
  endpointType: 'completion',
  requestUrl: `${normalizeBaseUrl(config.customOpenAiApiUrl)}/v1/completions`,
  model: getModelValue(session),
Copilot AI Mar 1, 2026

generateAnswersWithGptCompletionApi builds the request URL by appending /v1/completions to customOpenAiApiUrl. If a user has configured customOpenAiApiUrl with a /v1 suffix (common for OpenAI-compatible bases), this becomes /v1/v1/completions and will fail. Please normalize the base URL to remove a trailing /v1 (similar to normalizeBaseUrlWithoutVersionSuffix in the provider registry) before appending the path.
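A hedged sketch of the suggested normalization, modeled on the registry's `normalizeBaseUrlWithoutVersionSuffix` (its exact behavior is assumed here):

```javascript
// Strip trailing slashes and a trailing /v1 so appending /v1/completions
// never produces a doubled /v1/v1 segment.
function withoutVersionSuffix(url, fallback = 'https://api.openai.com') {
  return String(url || fallback)
    .trim()
    .replace(/\/+$/, '')
    .replace(/\/v1$/, '')
}
// `${withoutVersionSuffix(config.customOpenAiApiUrl)}/v1/completions`
```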

Comment on lines 430 to 440
const isCustomApiMode = apiMode.groupName === 'customApiModelKeys'
const providerId = isCustomApiMode
  ? apiMode.providerId || LEGACY_CUSTOM_PROVIDER_ID
  : ''
setEditingApiMode({
  ...defaultApiMode,
  ...apiMode,
  providerId,
})
setProviderSelector(providerId || LEGACY_CUSTOM_PROVIDER_ID)
setProviderDraft(defaultProviderDraft)
Copilot AI Mar 1, 2026

When opening the editor for an existing custom API mode, providerId is accepted as-is and fed into the controlled <select value={providerSelector}>. If the config references a providerId that no longer exists in customOpenAIProviders (e.g., after manual config edits or partial migrations), the select will have a value with no matching option and saving will persist the stale ID. Consider validating providerId against customProviders here and falling back to LEGACY_CUSTOM_PROVIDER_ID when it’s missing.
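That validation could be as small as the following; `LEGACY_CUSTOM_PROVIDER_ID`'s value and the provider shape are assumptions taken from the surrounding discussion:

```javascript
const LEGACY_CUSTOM_PROVIDER_ID = 'legacy-custom-default' // assumed sentinel

// Fall back to the legacy provider when the stored ID no longer exists,
// so a stale ID cannot be re-persisted on save.
function validateProviderId(providerId, customProviders) {
  const known = new Set((customProviders || []).map((provider) => provider.id))
  return known.has(providerId) ? providerId : LEGACY_CUSTOM_PROVIDER_ID
}
```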

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 8ea5824 to 410bf96 Compare March 1, 2026 10:17
@PeterDaveHello
Member Author

@codex review

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/services/init-session.mjs (1)

66-75: ⚠️ Potential issue | 🟡 Minor

Use normalized apiMode consistently when computing aiName.

aiName is computed from raw apiMode, while Line 75 stores normalizeApiMode(apiMode). For malformed-but-truthy input, these can diverge.

💡 Suggested fix
 export function initSession({
   question = null,
   conversationRecords = [],
   sessionName = null,
   modelName = null,
   autoClean = false,
   apiMode = null,
   extraCustomModelName = '',
 } = {}) {
+  const normalizedApiMode = normalizeApiMode(apiMode)
+
   return {
@@
-    aiName:
-      modelName || apiMode
+    aiName:
+      modelName || normalizedApiMode
         ? modelNameToDesc(
-            apiMode ? apiModeToModelName(apiMode) : modelName,
+            normalizedApiMode ? apiModeToModelName(normalizedApiMode) : modelName,
             t,
             extraCustomModelName,
           )
         : null,
     modelName,
-    apiMode: normalizeApiMode(apiMode),
+    apiMode: normalizedApiMode,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/init-session.mjs` around lines 66 - 75, aiName is derived using
the raw apiMode while later code stores normalizeApiMode(apiMode), which can
cause divergence for malformed-but-truthy inputs; change the aiName computation
to use the normalized value. Call normalizeApiMode(apiMode) once into a local
variable (e.g., normalizedApiMode) and use that when deciding aiName and when
passing into modelNameToDesc/apiModeToModelName (along with existing symbols
modelName, modelNameToDesc, apiModeToModelName, extraCustomModelName, t),
ensuring apiMode and modelName branches both use the normalized value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/model-name-convert.mjs`:
- Around line 81-93: normalizeApiMode currently preserves whitespace in
providerId which can break provider resolution; update the providerId assignment
in normalizeApiMode to trim string values safely—e.g., replace "providerId:
apiMode.providerId || ''" with logic that checks for a string and returns
apiMode.providerId.trim() (falling back to '' for falsy/non-string values) so
legacy/external values like " myproxy " normalize to "myproxy".

---

Outside diff comments:
In `@src/services/init-session.mjs`:
- Around line 66-75: aiName is derived using the raw apiMode while later code
stores normalizeApiMode(apiMode), which can cause divergence for
malformed-but-truthy inputs; change the aiName computation to use the normalized
value. Call normalizeApiMode(apiMode) once into a local variable (e.g.,
normalizedApiMode) and use that when deciding aiName and when passing into
modelNameToDesc/apiModeToModelName (along with existing symbols modelName,
modelNameToDesc, apiModeToModelName, extraCustomModelName, t), ensuring apiMode
and modelName branches both use the normalized value.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2cdb121 and 410bf96.

📒 Files selected for processing (40)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/general-balance-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/general-balance-utils.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/chatglm-api.mjs
✅ Files skipped from review due to trivial changes (1)
  • tests/unit/popup/api-modes-provider-utils.test.mjs
🚧 Files skipped from review as they are similar to previous changes (8)
  • src/_locales/es/main.json
  • src/config/openai-provider-mappings.mjs
  • src/_locales/ja/main.json
  • src/_locales/en/main.json
  • src/_locales/pt/main.json
  • tests/unit/popup/provider-secret-utils.test.mjs
  • src/_locales/in/main.json
  • src/_locales/ko/main.json

Copilot AI left a comment

Pull request overview

Copilot reviewed 40 out of 40 changed files in this pull request and generated 1 comment.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 410bf9666c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +152 to +156
groupName: normalized.groupName,
itemName: normalized.itemName,
isCustom: normalized.isCustom,
customName: normalized.customName,
providerId: normalized.providerId,


P2 Badge Compare full API mode identity when checking selection

isApiModeSelected now equates modes using only groupName, itemName, isCustom, customName, and providerId, so distinct rows that differ on other persisted fields (notably active) are treated as the same mode. In ApiModes, this can clear the current selection when disabling a duplicate row that shares those fields, even if another equivalent active row still exists, which unexpectedly switches users away from their chosen mode.


@PeterDaveHello
Member Author

@codex review

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/popup/sections/ApiModes.jsx (1)

381-381: ⚠️ Potential issue | 🟡 Minor

Use single quotes for this JSX string literal.

Please switch the placeholder string at Line 381 to single quotes to match project style.

✏️ Suggested fix
-            placeholder="https://api.example.com/v1/chat/completions"
+            placeholder='https://api.example.com/v1/chat/completions'

As per coding guidelines **/*.{js,jsx,mjs,ts,tsx}: Use single quotes instead of double quotes in imports and string literals.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` at line 381, Change the JSX string literal
for the placeholder prop to use single quotes to match project style; locate the
placeholder="https://api.example.com/v1/chat/completions" attribute in the
ApiModes.jsx component (the input/textarea JSX element that sets placeholder)
and replace the double-quoted string with a single-quoted string literal so the
placeholder prop uses single quotes.
🧹 Nitpick comments (2)
tests/unit/popup/api-modes-provider-utils.test.mjs (1)

18-41: Add a regression test for non-HTTP(S) endpoint schemes.

Given URL validation is central here, add a case like ftp://.../chat/completions to assert valid === false.

🧪 Suggested test addition
 test('parseChatCompletionsEndpointUrl rejects non-chat endpoint url', () => {
   const parsed = parseChatCompletionsEndpointUrl('https://api.example.com/v1')
   assert.equal(parsed.valid, false)
 })

+test('parseChatCompletionsEndpointUrl rejects non-http(s) endpoint url', () => {
+  const parsed = parseChatCompletionsEndpointUrl('ftp://api.example.com/v1/chat/completions')
+  assert.equal(parsed.valid, false)
+})
+
 test('parseChatCompletionsEndpointUrl keeps query string when deriving completions endpoint', () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/popup/api-modes-provider-utils.test.mjs` around lines 18 - 41, Add
a regression test that ensures parseChatCompletionsEndpointUrl treats
non-HTTP(S) schemes as invalid: create a new test (e.g., similar to
"parseChatCompletionsEndpointUrl rejects non-chat endpoint url") that calls
parseChatCompletionsEndpointUrl with an FTP URL like
"ftp://example.com/v1/chat/completions" and asserts parsed.valid === false;
place it alongside the existing tests in
tests/unit/popup/api-modes-provider-utils.test.mjs so the parser's scheme
validation is covered.
src/utils/model-name-convert.mjs (1)

110-126: Harden getApiModesFromConfig when activeApiModes is missing.

config.activeApiModes.map(...) can still throw on malformed/legacy config. Guarding it like customApiModes makes this path more resilient.

🛡️ Proposed robustness patch
 export function getApiModesFromConfig(config, onlyActive) {
   const normalizedCustomApiModes = (
     Array.isArray(config.customApiModes) ? config.customApiModes : []
   )
     .map((apiMode) => normalizeApiMode(apiMode))
     .filter((apiMode) => apiMode && apiMode.groupName && apiMode.itemName)

   const stringApiModes = normalizedCustomApiModes
@@
-  const originalApiModes = config.activeApiModes
+  const activeApiModes = Array.isArray(config.activeApiModes) ? config.activeApiModes : []
+  const originalApiModes = activeApiModes
     .map((modelName) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/model-name-convert.mjs` around lines 110 - 126,
getApiModesFromConfig currently assumes config.activeApiModes is an array and
calls config.activeApiModes.map(...), which can throw for missing or malformed
configs; change the code to guard this path the same way customApiModes is
guarded by using a normalizedActiveApiModes variable (e.g. const
normalizedActiveApiModes = Array.isArray(config.activeApiModes) ?
config.activeApiModes : []) and then call .map(...) on normalizedActiveApiModes
(and any downstream logic that expects activeApiModes) so getApiModesFromConfig
handles legacy/malformed configs safely.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/popup/sections/api-modes-provider-utils.mjs`:
- Around line 48-77: The parseChatCompletionsEndpointUrl function currently
accepts any absolute URL scheme; update it to reject non-HTTP(S) schemes by
checking parsedUrl.protocol after creating the URL (in
parseChatCompletionsEndpointUrl) and return { valid: false, chatCompletionsUrl:
'', completionsUrl: '' } if protocol is not 'http:' or 'https:' so only
http/https endpoints pass UI validation.
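The scheme check can be illustrated with a simplified stand-in for the real parser; the return shape matches the comment, but the rest of the logic is an assumption:

```javascript
// Simplified sketch: only http/https chat-completions endpoints are valid;
// the paired completions URL is derived from the same href.
function parseChatCompletionsEndpointUrl(value) {
  const invalid = { valid: false, chatCompletionsUrl: '', completionsUrl: '' }
  let parsed
  try {
    parsed = new URL(String(value || '').trim())
  } catch {
    return invalid
  }
  if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') return invalid
  if (!parsed.pathname.endsWith('/chat/completions')) return invalid
  return {
    valid: true,
    chatCompletionsUrl: parsed.href,
    completionsUrl: parsed.href.replace('/chat/completions', '/completions'),
  }
}
```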

---

Duplicate comments:
In `@src/popup/sections/ApiModes.jsx`:
- Line 381: Change the JSX string literal for the placeholder prop to use single
quotes to match project style; locate the
placeholder="https://api.example.com/v1/chat/completions" attribute in the
ApiModes.jsx component (the input/textarea JSX element that sets placeholder)
and replace the double-quoted string with a single-quoted string literal so the
placeholder prop uses single quotes.

---

Nitpick comments:
In `@src/utils/model-name-convert.mjs`:
- Around line 110-126: getApiModesFromConfig currently assumes
config.activeApiModes is an array and calls config.activeApiModes.map(...),
which can throw for missing or malformed configs; change the code to guard this
path the same way customApiModes is guarded by using a normalizedActiveApiModes
variable (e.g. const normalizedActiveApiModes =
Array.isArray(config.activeApiModes) ? config.activeApiModes : []) and then call
.map(...) on normalizedActiveApiModes (and any downstream logic that expects
activeApiModes) so getApiModesFromConfig handles legacy/malformed configs
safely.

In `@tests/unit/popup/api-modes-provider-utils.test.mjs`:
- Around line 18-41: Add a regression test that ensures
parseChatCompletionsEndpointUrl treats non-HTTP(S) schemes as invalid: create a
new test (e.g., similar to "parseChatCompletionsEndpointUrl rejects non-chat
endpoint url") that calls parseChatCompletionsEndpointUrl with an FTP URL like
"ftp://example.com/v1/chat/completions" and asserts parsed.valid === false;
place it alongside the existing tests in
tests/unit/popup/api-modes-provider-utils.test.mjs so the parser's scheme
validation is covered.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 410bf96 and ddd41e8.

📒 Files selected for processing (7)
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/unit/popup/provider-secret-utils.test.mjs
  • src/popup/sections/provider-secret-utils.mjs

Copilot AI left a comment

Pull request overview

Copilot reviewed 41 out of 41 changed files in this pull request and generated 1 comment.



Comment on lines 340 to 345
>
  <option value={LEGACY_CUSTOM_PROVIDER_ID}>{t('Custom')}</option>
  {effectiveProviders.map((provider) => (
    <option key={provider.id} value={provider.id}>
      {provider.name}
    </option>
Copilot AI Mar 1, 2026

"Custom Provider" was added to locale files, but the UI still renders the legacy label t('Custom') for the default provider option. This leaves the new i18n key unused across all locales. Consider either switching this label to t('Custom Provider') (if that’s the intended UX) or removing the unused locale entries to avoid drift.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. What shall we delve into next?


Consolidate OpenAI-compatible API request handling into a single shared
module and route background dispatch through provider registry lookup.
This removes duplicated streaming/parsing logic from openai-api and
custom-api while keeping existing behavior.

Add config migration to preserve existing API keys and custom mode
entries by mapping them into providerSecrets and custom provider
records. Keep legacy fallbacks for apiMode customUrl/custom apiKey to
avoid user-visible regressions during rollout.

Normalize apiMode objects at runtime and compare selection using stable
identity fields so migrated and legacy session data continue to match
correctly.
Split provider management from API mode saving so users can add and edit
custom OpenAI-compatible providers in a dedicated editor.

Require a full chat-completions endpoint URL for provider setup and
derive the paired completions endpoint with shared popup utilities
backed by unit tests.

Keep API mode provider binding explicit at save time and preserve
provider-secret synchronization in General settings for backward
compatibility.

Add provider-related locale keys across supported locales.
@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from ddd41e8 to ac3a38b Compare March 1, 2026 17:09
@PeterDaveHello
Member Author

@codex review

Copilot AI left a comment

Pull request overview

Copilot reviewed 41 out of 41 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (1)

src/services/apis/openai-api.mjs:107

  • generateAnswersWithChatgptApi is exported but appears to have no runtime callers in src/ (only tests import it). If it’s kept solely for tests/legacy compatibility, consider either removing it or clearly documenting it as legacy/test-only to reduce API surface and future drift with generateAnswersWithOpenAICompatibleApi.
export async function generateAnswersWithChatgptApi(port, question, session, apiKey) {
  const config = await getUserConfig()
  const openAiBaseUrl = normalizeBaseUrlWithoutVersionSuffix(
    config.customOpenAiApiUrl,
    'https://api.openai.com',
  )
  return generateAnswersWithChatgptApiCompat(
    `${openAiBaseUrl}/v1`,
    port,
    question,
    session,
    apiKey,
    {},
    'openai',
  )
}


Comment on lines +84 to +120
test('generateAnswersWithChatgptApiCompat emits fallback done message when stream ends without finish reason', async (t) => {
  t.mock.method(console, 'debug', () => {})
  setStorage({
    maxConversationContextLength: 3,
    maxResponseTokenLength: 256,
    temperature: 0.25,
  })

  const session = {
    modelName: 'chatgptApi4oMini',
    conversationRecords: [],
    isRetry: false,
  }
  const port = createFakePort()

  t.mock.method(globalThis, 'fetch', async () =>
    createMockSseResponse(['data: {"choices":[{"delta":{"content":"Partial"}}]}\n\n']),
  )

  await generateAnswersWithChatgptApiCompat(
    'https://api.example.com/v1',
    port,
    'CurrentQ',
    session,
    'sk-test',
  )

  assert.equal(
    port.postedMessages.some((message) => message.done === false && message.answer === 'Partial'),
    true,
  )
  assert.equal(
    port.postedMessages.some((message) => message.done === true && message.session === session),
    false,
  )
  assert.deepEqual(port.postedMessages.at(-1), { done: true })
})
Copilot AI Mar 1, 2026

This test currently locks in the inconsistent fallback behavior where a stream ending without finish_reason emits only { done: true } and does not update session.conversationRecords. If the intent is “stream completion signaling consistency” (and to preserve partial answers), this test should be updated to expect the unified done payload { answer: null, done: true, session } and to assert that the partial answer is pushed to session.conversationRecords.

Comment on lines 69 to 84
export async function generateAnswersWithGptCompletionApi(port, question, session, apiKey) {
  const { controller, messageListener, disconnectListener } = setAbortController(port)
  const model = getModelValue(session)

  const config = await getUserConfig()
  const prompt =
    (await getCompletionPromptBase()) +
    getConversationPairs(
      session.conversationRecords.slice(-config.maxConversationContextLength),
      true,
    ) +
    `Human: ${question}\nAI: `
  const apiUrl = config.customOpenAiApiUrl

  let answer = ''
  let finished = false
  const finish = () => {
    finished = true
    pushRecord(session, question, answer)
    console.debug('conversation history', { content: session.conversationRecords })
    port.postMessage({ answer: null, done: true, session: session })
  }
  await fetchSSE(`${apiUrl}/v1/completions`, {
    method: 'POST',
    signal: controller.signal,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      prompt: prompt,
      model,
      stream: true,
      max_tokens: config.maxResponseTokenLength,
      temperature: config.temperature,
      stop: '\nHuman',
    }),
    onMessage(message) {
      console.debug('sse message', message)
      if (finished) return
      if (message.trim() === '[DONE]') {
        finish()
        return
      }
      let data
      try {
        data = JSON.parse(message)
      } catch (error) {
        console.debug('json error', error)
        return
      }

      answer += data.choices[0].text
      port.postMessage({ answer: answer, done: false, session: null })

      if (data.choices[0]?.finish_reason) {
        finish()
        return
      }
    },
    async onStart() {},
    async onEnd() {
      port.postMessage({ done: true })
      port.onMessage.removeListener(messageListener)
      port.onDisconnect.removeListener(disconnectListener)
    },
    async onError(resp) {
      port.onMessage.removeListener(messageListener)
      port.onDisconnect.removeListener(disconnectListener)
      if (resp instanceof Error) throw resp
      const error = await resp.json().catch(() => ({}))
      throw new Error(!isEmpty(error) ? JSON.stringify(error) : `${resp.status} ${resp.statusText}`)
    },
  const openAiBaseUrl = normalizeBaseUrlWithoutVersionSuffix(
    config.customOpenAiApiUrl,
    'https://api.openai.com',
  )
  await generateAnswersWithOpenAICompatible({
    port,
    question,
    session,
    endpointType: 'completion',
    requestUrl: `${openAiBaseUrl}/v1/completions`,
    model: getModelValue(session),
    apiKey,
  })
}
Copilot AI Mar 1, 2026

generateAnswersWithGptCompletionApi is exported but appears to have no runtime callers in src/ (only tests import it). If this wrapper is no longer part of the supported internal API (now that background execution routes through generateAnswersWithOpenAICompatibleApi), consider removing it to avoid maintaining multiple entry points for the same behavior.

export async function generateAnswersWithOpenAICompatibleApi(port, question, session, config) {
  const request = resolveOpenAICompatibleRequest(config, session)
  if (!request) {
    throw new Error('Unknown OpenAI-compatible provider configuration')
Copilot AI Mar 1, 2026

The thrown error message Unknown OpenAI-compatible provider configuration is likely surfaced directly to users via handlePortError(err.message), but it doesn’t include enough context to diagnose what went wrong (e.g., resolved providerId, apiMode.groupName/itemName, modelName). Consider including a small amount of safe diagnostic context in the message to make support/debugging easier.

Suggested change
-    throw new Error('Unknown OpenAI-compatible provider configuration')
+    const apiMode = (config && config.apiMode) || {}
+    const details = []
+    if (apiMode.groupName) details.push(`group=${apiMode.groupName}`)
+    if (apiMode.itemName) details.push(`item=${apiMode.itemName}`)
+    const sessionModel = session && (session.model || session.modelName)
+    if (sessionModel) details.push(`model=${sessionModel}`)
+    const context = details.length ? ` (${details.join(', ')})` : ''
+    throw new Error(`Unknown OpenAI-compatible provider configuration${context}`)

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ac3a38b356


  if (!providerId) return null
  let resolvedProviderId = providerId
  let provider = getProviderById(config, providerId)
  if (!provider && session?.apiMode?.groupName === 'customApiModelKeys') {


P2 Badge Fall back to custom provider matching on ID collisions

When a legacy custom provider ID collides with a builtin ID (for example openai renamed to openai-2 during migration), a stale custom session that still carries apiMode.providerId = "openai" is resolved to the builtin OpenAI provider because getProviderById succeeds and the custom fallback path is skipped. In that case customUrl and the migrated custom provider secret are ignored, so requests are sent to the wrong endpoint with the wrong key. This is especially visible for resumed/legacy sessions where provider IDs were renamed but session payloads were not.

Useful? React with 👍 / 👎.
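One way to address the collision Codex describes would be to prefer a custom-provider match for custom-group sessions before accepting a builtin provider with the same ID. This is only a sketch under assumed shapes (`customProviders`, `builtinProviders`, a `legacyId` field); the PR's actual registry differs:

```javascript
// Hypothetical sketch: for sessions from the custom group, try the custom
// provider (including its pre-migration legacy ID) before falling back to a
// builtin provider whose ID happens to collide, so a stale
// apiMode.providerId = "openai" can still reach the renamed "openai-2" record.
function resolveProvider(config, session, providerId) {
  const isCustomSession = session?.apiMode?.groupName === 'customApiModelKeys'
  const custom = (config.customProviders || []).find(
    (p) => p.id === providerId || p.legacyId === providerId, // legacyId is an assumed field
  )
  if (isCustomSession && custom) return custom
  const builtin = (config.builtinProviders || []).find((p) => p.id === providerId)
  return builtin || custom || null
}

const config = {
  builtinProviders: [{ id: 'openai', kind: 'builtin' }],
  customProviders: [{ id: 'openai-2', legacyId: 'openai', kind: 'custom' }],
}
const session = { apiMode: { groupName: 'customApiModelKeys' } }
console.log(resolveProvider(config, session, 'openai').kind) // custom
```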

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (3)
src/services/apis/provider-registry.mjs (2)

193-199: ⚠️ Potential issue | 🟠 Major

Index-based fallback provider ID is still unstable.

At Line 195, falling back to custom-provider-${index + 1} can change IDs when provider order changes, which can break providerSecrets and persisted providerId references.

🔧 Proposed fix
 function normalizeCustomProvider(provider, index) {
   if (!provider || typeof provider !== 'object') return null
-  const id = toStringOrEmpty(provider.id).trim() || `custom-provider-${index + 1}`
+  const id =
+    normalizeProviderId(provider.id) ||
+    normalizeProviderId(provider.name) ||
+    normalizeProviderId(provider.chatCompletionsUrl || provider.baseUrl) ||
+    `custom-provider-${index + 1}`
   return {
     id,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 193 - 199, The fallback
that sets id to `custom-provider-${index + 1}` in normalizeCustomProvider is
unstable because reordering changes IDs and breaks providerSecrets/persisted
providerId references; replace the index-based fallback with a deterministic
stable ID generator: when provider.id is missing, compute a stable id (e.g., a
namespaced prefix plus a short hash derived from immutable provider attributes
like baseUrl and name or the JSON of the provider object) inside
normalizeCustomProvider so the same provider yields the same id even if order
changes, and ensure the generated id format matches existing provider id
expectations.
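The deterministic fallback CodeRabbit suggests could be sketched with a short hash over identifying attributes. Field names and the `custom-provider-` prefix format are assumptions here, not the PR's code:

```javascript
// Hypothetical sketch: derive a stable fallback ID from attributes that
// identify the provider (name and chat-completions URL), so the same provider
// keeps the same ID regardless of its position in the list.
function stableCustomProviderId(provider) {
  const basis = `${provider.name || ''}|${provider.chatCompletionsUrl || provider.baseUrl || ''}`
  let hash = 5381
  for (let i = 0; i < basis.length; i++) {
    hash = ((hash * 33) ^ basis.charCodeAt(i)) >>> 0 // djb2-xor, kept in uint32 range
  }
  return `custom-provider-${hash.toString(16)}`
}

const a = { name: 'My LLM', chatCompletionsUrl: 'https://llm.example/v1/chat/completions' }
console.log(stableCustomProviderId(a) === stableCustomProviderId({ ...a })) // true, order-independent
```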

255-263: ⚠️ Potential issue | 🟠 Major

Mode-level apiKey still overrides provider secret for provider-bound custom modes.

At Line 261, getProviderSecret returns session.apiMode.apiKey for customApiModelKeys even when providerId is set. That contradicts the provider-secret single-source migration path and can keep stale mode keys active after provider edits.

🔧 Proposed fix
-  if (session?.apiMode?.groupName === 'customApiModelKeys' && apiModeApiKey) {
+  if (
+    session?.apiMode?.groupName === 'customApiModelKeys' &&
+    !toStringOrEmpty(session?.apiMode?.providerId).trim() &&
+    apiModeApiKey
+  ) {
     return apiModeApiKey
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 255 - 263,
getProviderSecret currently returns session.apiMode.apiKey for groupName
'customApiModelKeys' even when a providerId is present, letting mode-level keys
override provider secrets; update getProviderSecret so the branch that returns
apiModeApiKey only runs when providerId is not set (i.e., only use
session.apiMode.apiKey for customApiModelKeys if providerId is falsy),
preserving provider-bound secret precedence and preventing stale mode keys from
being used after provider edits.
src/services/apis/openai-compatible-core.mjs (1)

146-149: ⚠️ Potential issue | 🟠 Major

Finalize unexpected stream ends to preserve partial output and session state.

At Line 147, unfinished streams emit only { done: true }, skipping finish(). That drops pushRecord(...) and the final { answer: null, done: true, session } payload on unexpected EOF.

🔧 Proposed fix (preserve manual-stop behavior)
     async onEnd() {
-      if (!finished) port.postMessage({ done: true })
+      if (!finished && !controller.signal.aborted) {
+        finish()
+      } else if (!finished) {
+        port.postMessage({ done: true })
+      }
       port.onMessage.removeListener(messageListener)
       port.onDisconnect.removeListener(disconnectListener)
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 146 - 149, The
onEnd() handler for the stream currently posts only { done: true } when finished
is false and skips calling finish(), which prevents pushRecord(...) and the
final { answer: null, done: true, session } payload from being emitted; update
the onEnd() implementation in the onEnd function so that when finished is false
it calls finish() (or invokes the same code path finish() uses to flush partial
output and call pushRecord(...)) before posting messages, and still removes the
message and disconnect listeners via
port.onMessage.removeListener(messageListener) and
port.onDisconnect.removeListener(disconnectListener) to preserve manual-stop
behavior and session state.
🧹 Nitpick comments (1)
src/_locales/in/main.json (1)

141-141: Consider adding politeness marker for consistency.

The translation omits "Please" (which would be "Silakan" in Indonesian). Other similar prompt strings in this file include the politeness marker (e.g., lines 46-48 use "Silakan masuk...").

💡 Suggested refinement
-  "Please enter a full Chat Completions URL": "Masukkan URL Chat Completions lengkap",
+  "Please enter a full Chat Completions URL": "Silakan masukkan URL Chat Completions lengkap",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/_locales/in/main.json` at line 141, The translation for the key "Please
enter a full Chat Completions URL" is missing the politeness marker "Silakan";
update its Indonesian value to include it (e.g., change "Masukkan URL Chat
Completions lengkap" to "Silakan masukkan URL Chat Completions lengkap") so it
matches the tone used by other prompt strings in the file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 17-20: In buildMessageAnswer, the current early return of
data.response overwrites prior chunks for streamed legacy providers; instead,
when allowLegacyResponseField is true and data?.response is a string, append
(concatenate) data.response to the existing answer buffer (ensure answer is
initialized to an empty string when undefined) and then return the accumulated
answer; adjust function logic in buildMessageAnswer to mutate/extend answer
rather than replacing it when handling legacy response chunks.

---

Duplicate comments:
In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 146-149: The onEnd() handler for the stream currently posts only {
done: true } when finished is false and skips calling finish(), which prevents
pushRecord(...) and the final { answer: null, done: true, session } payload from
being emitted; update the onEnd() implementation in the onEnd function so that
when finished is false it calls finish() (or invokes the same code path finish()
uses to flush partial output and call pushRecord(...)) before posting messages,
and still removes the message and disconnect listeners via
port.onMessage.removeListener(messageListener) and
port.onDisconnect.removeListener(disconnectListener) to preserve manual-stop
behavior and session state.

In `@src/services/apis/provider-registry.mjs`:
- Around line 193-199: The fallback that sets id to `custom-provider-${index +
1}` in normalizeCustomProvider is unstable because reordering changes IDs and
breaks providerSecrets/persisted providerId references; replace the index-based
fallback with a deterministic stable ID generator: when provider.id is missing,
compute a stable id (e.g., a namespaced prefix plus a short hash derived from
immutable provider attributes like baseUrl and name or the JSON of the provider
object) inside normalizeCustomProvider so the same provider yields the same id
even if order changes, and ensure the generated id format matches existing
provider id expectations.
- Around line 255-263: getProviderSecret currently returns
session.apiMode.apiKey for groupName 'customApiModelKeys' even when a providerId
is present, letting mode-level keys override provider secrets; update
getProviderSecret so the branch that returns apiModeApiKey only runs when
providerId is not set (i.e., only use session.apiMode.apiKey for
customApiModelKeys if providerId is falsy), preserving provider-bound secret
precedence and preventing stale mode keys from being used after provider edits.

---

Nitpick comments:
In `@src/_locales/in/main.json`:
- Line 141: The translation for the key "Please enter a full Chat Completions
URL" is missing the politeness marker "Silakan"; update its Indonesian value to
include it (e.g., change "Masukkan URL Chat Completions lengkap" to "Silakan
masukkan URL Chat Completions lengkap") so it matches the tone used by other
prompt strings in the file.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ddd41e8 and ac3a38b.

📒 Files selected for processing (41)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/general-balance-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/general-balance-utils.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
🚧 Files skipped from review as they are similar to previous changes (13)
  • src/_locales/pt/main.json
  • src/_locales/en/main.json
  • src/_locales/ja/main.json
  • src/config/openai-provider-mappings.mjs
  • src/_locales/zh-hant/main.json
  • tests/unit/popup/general-balance-utils.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
  • src/services/init-session.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • src/_locales/tr/main.json
  • src/_locales/ru/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/fr/main.json

Comment on lines +17 to +20
function buildMessageAnswer(answer, data, allowLegacyResponseField) {
  if (allowLegacyResponseField && typeof data?.response === 'string' && data.response) {
    return data.response
  }
Contributor

⚠️ Potential issue | 🟠 Major

Accumulate legacy response chunks instead of replacing the answer buffer.

At Line 19, returning data.response overwrites prior chunks. For streamed legacy providers, this can persist only the tail fragment.

🔧 Proposed fix
 function buildMessageAnswer(answer, data, allowLegacyResponseField) {
-  if (allowLegacyResponseField && typeof data?.response === 'string' && data.response) {
-    return data.response
+  if (allowLegacyResponseField && typeof data?.response === 'string') {
+    return answer + data.response
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 17 - 20, In
buildMessageAnswer, the current early return of data.response overwrites prior
chunks for streamed legacy providers; instead, when allowLegacyResponseField is
true and data?.response is a string, append (concatenate) data.response to the
existing answer buffer (ensure answer is initialized to an empty string when
undefined) and then return the accumulated answer; adjust function logic in
buildMessageAnswer to mutate/extend answer rather than replacing it when
handling legacy response chunks.
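The accumulating variant proposed above behaves like this minimal sketch (the OpenAI-style `choices[0].delta.content` branch is an assumption about the non-legacy path):

```javascript
// Hypothetical sketch: legacy `response` chunks are appended to the running
// answer instead of replacing it, so streamed legacy providers keep every
// fragment rather than only the tail.
function buildMessageAnswer(answer, data, allowLegacyResponseField) {
  if (allowLegacyResponseField && typeof data?.response === 'string') {
    return (answer || '') + data.response
  }
  const delta = data?.choices?.[0]?.delta?.content // assumed OpenAI-style delta path
  return typeof delta === 'string' ? (answer || '') + delta : answer
}

let answer = ''
for (const chunk of [{ response: 'Hel' }, { response: 'lo' }]) {
  answer = buildMessageAnswer(answer, chunk, true)
}
console.log(answer) // Hello
```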



2 participants