
[GitHub] github:copilot is a routing alias — no model control or visibility#897

Draft
LoackyBit wants to merge 17 commits into Gitlawb:main from LoackyBit:main

Conversation

@LoackyBit

@LoackyBit LoackyBit commented Apr 25, 2026

AI-assisted: This PR was created with AI assistance. Human review and testing recommended.

Summary

  • What changed:
    • Replaced the hardcoded github:copilot alias with real model IDs in configs.ts (e.g. claude-sonnet-4.6, gpt-5-mini).
    • Switched GitHub model discovery from the public models.github.ai/catalog/models endpoint to the authenticated Copilot /models endpoint, so the model list reflects the user's actual plan.
    • Added dynamic routing: newer models (openai/*, gpt-5*, *codex*) are routed to the Azure inference endpoint with API version 2026-03-10; legacy models continue to use api.githubcopilot.com.
    • Implemented a session-level blocklist (markGithubModelUnsupported) to hide models that return 400 model_not_supported at runtime.
    • Added githubRateLimit.ts to capture x-ratelimit-* headers; startup screen and cost-tracker now display the resolved model ID and remaining quota.
    • Fixed auth bug where Azure-routed models failed with a malformed Authorization header.
  • Why it changed:
    • The old approach hardcoded or guessed the model list, making it permanently out of sync with the user's Copilot plan.
    • Users had no control over which model was actually called and no visibility into rate limits.
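The session-level blocklist mentioned above can be sketched roughly as follows (an illustrative sketch only — the Set-based store and the helper names besides markGithubModelUnsupported are assumptions, not the PR's exact code):

```typescript
// Illustrative sketch: session-level blocklist for models that return
// 400 model_not_supported at runtime. Blocked models are hidden from
// the picker for the rest of the session.
const unsupportedGithubModels = new Set<string>()

function markGithubModelUnsupported(modelId: string): void {
  unsupportedGithubModels.add(modelId.toLowerCase())
}

function isGithubModelBlocked(modelId: string): boolean {
  return unsupportedGithubModels.has(modelId.toLowerCase())
}

// Filter the picker list so blocked models stay hidden.
function filterPickerModels(models: string[]): string[] {
  return models.filter(m => !isGithubModelBlocked(m))
}
```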

Testing

  • bun run build
  • bun run smoke
  • providerConfig.github.test.ts (25 tests passed)
  • githubRateLimit.test.ts (7 tests passed)
  • Verified model.github.test.ts and modelStrings.github.test.ts

⚠️ Open question blocking merge (#964)

The current filter for the model picker is:

data.filter(m => m.policy?.state !== "disabled")

model_picker_enabled is parsed but not used as a filter condition. This needs a decision:

| Option | Pros | Cons |
| --- | --- | --- |
| Add model_picker_enabled === true | Safer; aligns with GitHub's schema; hides unintended models | May hide models that still work in practice |
| Keep current (ignore model_picker_enabled) | More permissive; more models visible now | Exposes models GitHub doesn't intend to surface; may break later |
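For concreteness, the two options differ only in the filter predicate (a sketch assuming a Copilot /models entry shape with model_picker_enabled and policy.state fields):

```typescript
// Assumed shape of one entry from the Copilot /models response.
interface CopilotModelEntry {
  id: string
  model_picker_enabled?: boolean
  policy?: { state?: string }
}

// Current behavior: only policy.state is checked.
const permissiveFilter = (m: CopilotModelEntry) =>
  m.policy?.state !== 'disabled'

// Proposed stricter behavior: also require model_picker_enabled.
const strictFilter = (m: CopilotModelEntry) =>
  m.model_picker_enabled === true && m.policy?.state !== 'disabled'
```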

Should we optimize for strict alignment with GitHub's intended picker behavior, or broader compatibility with models that currently work? Vote here: #964

Resolving this is required before merging. cc @Meetpatel006

Closes #822

LoackyBit and others added 3 commits April 24, 2026 23:54
…rate-limit (Gitlawb#822)

Add GITHUB_MODEL env var support with higher priority than OPENAI_MODEL
(backward-compatible fallback). Map default copilot alias to "auto" on
the Copilot API for the ~10% discount. Show the resolved model name in
the startup screen instead of the raw "github:copilot" alias.

Parse x-ratelimit-* headers from GitHub API responses and surface them
in the session cost summary at exit.

Closes Gitlawb#822
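The header parsing described in this commit typically looks like the following (a minimal sketch — the field names on the parsed object are illustrative; only the x-ratelimit-* header names follow GitHub's convention):

```typescript
// Illustrative shape for captured rate-limit state.
interface GithubRateLimit {
  limit?: number
  remaining?: number
  resetSeconds?: number
}

// Capture x-ratelimit-* headers from a fetch Response's headers.
// Missing headers are left undefined rather than defaulted.
function parseGithubRateLimit(headers: Headers): GithubRateLimit {
  const num = (name: string): number | undefined => {
    const v = headers.get(name)
    return v === null ? undefined : Number(v)
  }
  return {
    limit: num('x-ratelimit-limit'),
    remaining: num('x-ratelimit-remaining'),
    resetSeconds: num('x-ratelimit-reset'),
  }
}
```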
…able and migrate from copilot aliases to explicit models
[Feature] Add GITHUB_MODEL env var, resolved model display, and rate-limit surfacing (Gitlawb#822)
@LoackyBit
Author

After looking into it more, I realized this is more complex than expected and requires deeper expertise in the provider/model handling. I don’t have the technical background to fix it properly. I’ll leave everything as is in case it can be useful to someone else.

Author

@LoackyBit LoackyBit left a comment


⚠️ This commit is considered obsolete.

Commit 1c38084 was built on the first version of issue #822 (opened 3 days ago), which in its original formulation described the issue differently than it is described today.

The first version of the issue proposed the main direction of adding a GITHUB_MODEL environment variable, surfacing the resolved model, and optional parsing of rate-limit/quota headers—an approach more geared toward explicit configuration via env vars.

The current issue description has been updated and focuses on the root cause in src/utils/model/configs.ts, with a more precise technical analysis of the hardcoded github:copilot alias.

As a result, the changes introduced by this commit reflect the original, outdated vision of the issue, potentially making it misaligned with the current scope and direction of the expected fix. It is recommended that you reevaluate your changes against the current version of the issue before merging.

Edited: repository cleaned up - old files removed, only new issue-related files remain.

@Meetpatel006
Contributor

@LoackyBit , this feature was mostly built by me, so I'm ready to help here if you need it.

@LoackyBit
Author

@Meetpatel006 I created the PR using AI, as mentioned in the first comment. Initially, I thought I wasn't able to solve the problem, but in the latest commits, I should have successfully connected the GitHub API. That said, I think a review from you is necessary at this point.

So far, I'm having this problem:

  • the list of models is loaded, but some are inaccessible, returning:
API Error: 400 {"error":{"message":"The requested model is not supported.","code":"model_not_supported","param":"model","type":"invalid_request_error"}}

In my case, one that works is openai/gpt-5-mini

@Meetpatel006
Contributor

@Meetpatel006 I created the PR using AI, as mentioned in the first comment. Initially, I thought I wasn't able to solve the problem, but in the latest commits, I should have successfully connected the GitHub API. That said, I think a review from you is necessary at this point.

So far, I'm having this problem:

  • the list of models is loaded, but some are inaccessible, returning:
API Error: 400 {"error":{"message":"The requested model is not supported.","code":"model_not_supported","param":"model","type":"invalid_request_error"}}

In my case, one that works is openai/gpt-5-mini

Can you tell me which Copilot plan you are using, and for this 400 issue, which model you are trying?

@LoackyBit
Author

LoackyBit commented Apr 26, 2026

I also think the code doesn't use the correct list of models. The one it uses is actually the list from the public GitHub catalog, when it should instead use the list of models available under each user's plan.

I looked into how opencode correctly fetches the model list filtered by user plan — and the fix is simpler than expected.

The current problem is that fetchGithubModels calls https://models.github.ai/catalog/models, which returns all models from the public GitHub Marketplace, regardless of the user's Copilot plan.

How opencode handles this instead:
It calls the same /models endpoint, but:

  1. Uses the authenticated Copilot base URL (e.g. https://api.githubcopilot.com) with the user's Copilot token in the headers
  2. Filters the response with these two conditions:
data.data.filter(
  (m) => m.model_picker_enabled && m.policy?.state !== "disabled"
)

The key field is policy.state: when a model is not available for the user's plan, the Copilot API marks it with policy.state = "disabled". This field does not exist in the public catalog response, so filtering by plan is simply impossible when calling /catalog/models.

Proposed fix:

  • Replace the call to https://models.github.ai/catalog/models with a call to ${copilotBaseURL}/models using the user's Copilot token (already available in the existing Copilot auth flow)
  • Add a filter for model_picker_enabled && policy?.state !== "disabled" before populating the model picker
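The proposed fix can be sketched as follows (a minimal sketch, assuming the existing Copilot auth flow provides a base URL and token — the function signature and parameter names here are illustrative, not the project's exact code):

```typescript
// Assumed shape of one entry from the Copilot /models response.
interface CopilotModel {
  id: string
  model_picker_enabled?: boolean
  policy?: { state?: string }
}

// Fetch the plan-aware model list from the authenticated Copilot
// endpoint instead of the public catalog. copilotBaseURL and
// copilotToken are assumed to come from the existing auth flow.
async function fetchGithubModels(
  copilotBaseURL: string,
  copilotToken: string,
): Promise<CopilotModel[]> {
  const res = await fetch(`${copilotBaseURL}/models`, {
    headers: { Authorization: `Bearer ${copilotToken}` },
  })
  if (!res.ok) throw new Error(`GitHub models request failed: ${res.status}`)
  const body = (await res.json()) as { data: CopilotModel[] }
  // Only keep models the user's plan can actually select.
  return body.data.filter(
    m => m.model_picker_enabled && m.policy?.state !== 'disabled',
  )
}
```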

@LoackyBit
Author

Can you tell me which Copilot plan you are using, and for this 400 issue, which model you are trying?

I have the Education plan and was trying to use the GPT-5.5 mini model.

- Implement dynamic fetching of GitHub models with caching.
- Add tests for GitHub model options and fetching logic.
- Enhance model picker to support GitHub models.
- Introduce error handling for unsupported GitHub models.

Co-authored-by: Copilot <copilot@github.com>
@LoackyBit
Author

LoackyBit commented Apr 26, 2026

Commit d1c1829 summary:

  • Switches GitHub model discovery to the authenticated Copilot /models endpoint (adds required Copilot headers, removes the invalid X-GitHub-Api-Version), filters by model_picker_enabled and policy state, adds caching/prefetch/in-flight dedupe, makes the /model picker await a refresh, routes unqualified GitHub model IDs to the Copilot inference endpoint, and implements a runtime blocklist to hide models that return model_not_supported.

Remaining issue (models that still fail at inference):

  • some models appear in the /models list, but the inference endpoint returns 400 model_not_supported.
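The caching/prefetch/in-flight-dedupe pattern mentioned in the summary typically looks like this (a generic sketch, not the PR's exact code):

```typescript
// Generic cache with in-flight dedupe: concurrent callers share one
// pending request, and later callers reuse the cached result until a
// refresh is forced.
function createCachedFetcher<T>(load: () => Promise<T>) {
  let cached: T | undefined
  let inFlight: Promise<T> | undefined

  return async function get(forceRefresh = false): Promise<T> {
    if (!forceRefresh && cached !== undefined) return cached
    if (!inFlight) {
      inFlight = load()
        .then(value => {
          cached = value
          return value
        })
        .finally(() => {
          inFlight = undefined
        })
    }
    return inFlight
  }
}
```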

@jatmn
Collaborator

jatmn commented Apr 27, 2026

I believe this will be impacted by #910

@Meetpatel006
Contributor

Commit d1c1829 summary:

  • Switches GitHub model discovery to the authenticated Copilot /models endpoint (adds required Copilot headers, removes the invalid X-GitHub-Api-Version), filters by model_picker_enabled and policy state, adds caching/prefetch/in-flight dedupe, makes the /model picker await a refresh, routes unqualified GitHub model IDs to the Copilot inference endpoint, and implements a runtime blocklist to hide models that return model_not_supported.

Remaining issue (models that still fail at inference):

  • some models appear in the /models list, but the inference endpoint returns 400 model_not_supported.

Here I think the main issue for GitHub models is: for the newer models, like the GPT Codex models, the endpoint is this, and for older models like GPT-4o and other older models the endpoint is — you can also find this reference in the code. Can you check whether this is the issue?

And for more help checkout this official docs by github

https://docs.github.com/en/rest/models/inference?apiVersion=2026-03-10#run-an-inference-request

https://docs.github.com/en/rest/about-the-rest-api/api-versions?apiVersion=2026-03-10

@LoackyBit
Author

@Meetpatel006 I can't see the endpoints in the comment

@Meetpatel006
Contributor

@LoackyBit ,

I tried your PR locally; it has some issues you need to address across the whole test suite, and in my case it did not even build properly, so can you check it?

  • the exact fallback block highlighted exists in only one place: openaiShim.ts at lines 1925-1928.
  • that block retries from /chat/completions to /responses when GitHub returns a 400 with specific error text.
  • the similar logic elsewhere is not the same fallback block; it is transport-driven routing:
    • transport selection in providerConfig.ts
    • /responses call execution in codexShim.ts
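For reference, that kind of error-triggered fallback typically looks like the following (a generic sketch, not the exact openaiShim.ts code — the endpoint paths are from the discussion above, but the error-text check and function shape are assumptions):

```typescript
// Generic sketch: retry against /responses when /chat/completions
// returns a 400 whose error text suggests the other API is required.
// doFetch is injected so the transport stays pluggable.
async function requestWithResponsesFallback(
  baseUrl: string,
  body: unknown,
  doFetch: (url: string, body: unknown) => Promise<{ status: number; text: string }>,
): Promise<{ status: number; text: string }> {
  const first = await doFetch(`${baseUrl}/chat/completions`, body)
  const shouldFallback = first.status === 400 && /responses/i.test(first.text)
  return shouldFallback ? doFetch(`${baseUrl}/responses`, body) : first
}
```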

@LoackyBit
Author

Thanks for the detailed feedback @Meetpatel006

Here's a summary of the last commit:

  1. Build & Tests: Confirmed that the project builds correctly (bun run build) and added a comprehensive test suite in providerConfig.github.test.ts to verify the routing logic.
  2. Dynamic Routing & Versioning: Following your suggestion, I implemented a dynamic routing system:
    • Newer models (openai/*, gpt-5*, *codex*) are now automatically routed to the Azure inference endpoint (models.inference.ai.azure.com/v1) using the 2026-03-10 API version.
    • Legacy models (like Claude 3.5 Sonnet or older GPT-4o aliases) continue to use the standard Copilot endpoint (api.githubcopilot.com).
  3. Auth Fix for Azure Routing: Fixed a bug where models routed to the Azure endpoint would fail with an Authorization header is badly formatted error. The shim now correctly falls back to the GitHub token for these requests.
  4. Handling Unsupported Models (Plan-Awareness): I noticed that the GitHub /models endpoint sometimes returns models that are marked as "enabled" but actually result in a 400 model_not_supported error depending on the user's plan.
    • To handle this, I've implemented a session-level blocklist (markGithubModelUnsupported). If a model returns a model_not_supported error, it is now dynamically hidden from the /model picker for the rest of the session.
    • Note: This is currently a runtime workaround because the model list provided by the API doesn't seem to reflect plan-specific restrictions accurately. Ideally, this should be handled upstream or via a formal capability check, but this provides a much smoother UX in the meantime.
  5. Endpoint Resolution: Refactored resolveProviderRequest in providerConfig.ts to handle the base URL resolution more explicitly based on the model name, making the transport-driven routing more robust.

I've also ensured that the ModelPicker UI doesn't "re-offer" a failed model if it's currently selected. Let me know if this version works better for you.
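The dynamic routing rule described in point 2 can be expressed as a small predicate (a sketch — the endpoint URLs are taken from the list above, but the exact pattern matching is illustrative):

```typescript
// Endpoints from the routing description above.
const AZURE_INFERENCE_BASE = 'https://models.inference.ai.azure.com/v1'
const COPILOT_BASE = 'https://api.githubcopilot.com'

// Newer models (openai/*, gpt-5*, *codex*) go to the Azure inference
// endpoint; everything else stays on the legacy Copilot endpoint.
function resolveGithubBaseUrl(modelId: string): string {
  const id = modelId.toLowerCase()
  const isNewer =
    id.startsWith('openai/') || id.startsWith('gpt-5') || id.includes('codex')
  return isNewer ? AZURE_INFERENCE_BASE : COPILOT_BASE
}
```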

@LoackyBit
Author

LoackyBit commented Apr 27, 2026

I've identified the reason why the GitHub Copilot model list wasn't working in openclaude: instead of asking Copilot directly "What models does this user have access to?", previous attempts tried to guess or hardcode the answer using environment variables and static lists, a fragile method that was always out of sync with the user's actual plan.

The solution uses the opencode approach, which simply calls the official Copilot API at startup, gets the actual model list for that specific user, and uses it. Some models remain unsupported, despite the list being correct, but this isn't a major problem. At least the model list is now correct.

[Screenshot: CleanShot 2026-04-27 at 19 55 50@2x]

@LoackyBit LoackyBit marked this pull request as ready for review April 27, 2026 17:49
Copilot AI review requested due to automatic review settings April 27, 2026 17:49
@Meetpatel006
Contributor

I've identified the reason why the GitHub Copilot model list wasn't working in openclaude: instead of asking Copilot directly "What models does this user have access to?", previous attempts tried to guess or hardcode the answer using environment variables and static lists, a fragile method that was always out of sync with the user's actual plan.

The solution uses the opencode approach, which simply calls the official Copilot API at startup, gets the actual model list for that specific user, and uses it. Some models remain unsupported, despite the list being correct, but this isn't a major problem. At least the model list is now correct.

[Screenshot: CleanShot 2026-04-27 at 19 55 50@2x]

Looks fine to me; I can check locally and tell you any suggestions.

Contributor

Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@LoackyBit LoackyBit requested a review from Copilot April 27, 2026 18:57
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 24 out of 24 changed files in this pull request and generated 11 comments.

Comments suppressed due to low confidence (1)

src/utils/model/openaiContextWindows.ts:24

  • GitHub Copilot/GitHub provider is treated as an OpenAI-compatible provider in src/utils/context.ts, but OPENAI_CONTEXT_WINDOWS no longer contains entries for Copilot namespaced models (e.g., github:copilot / github:copilot:). When users pick Claude/Gemini/etc via GitHub’s catalog, getContextWindowForModel will fall back to the conservative 128k default and spam warnings, and may compact too aggressively. Consider restoring GitHub entries here or resolving context window/max output tokens from the cached GitHub model catalog limits when provider=github.
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5':               1_050_000,
  'gpt-5.5-mini':            400_000,
  'gpt-5.5-nano':            400_000,
  'gpt-5.4':               1_050_000,
  'gpt-5.4-mini':            400_000,
  'gpt-5.4-nano':            400_000,
  'gpt-4o':                   128_000,
  'gpt-4o-mini':              128_000,
  'gpt-4.1':                  1_047_576,


Comment thread src/main.tsx Outdated
Comment thread src/utils/model/githubModels.ts Outdated
Comment thread src/github-copilot/models.ts Outdated
Comment thread src/components/ProviderManager.tsx
Comment thread src/services/api/providerConfig.ts
Comment thread src/utils/model/configs.ts
Comment thread .env.example
Comment thread .env.example
Comment thread src/services/api/providerConfig.ts Outdated
Comment thread prompt.txt Outdated
Copilot AI review requested due to automatic review settings April 28, 2026 11:08
@LoackyBit LoackyBit force-pushed the main branch 2 times, most recently from d3b94b9 to 6d02223 Compare April 28, 2026 11:42
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 21 out of 21 changed files in this pull request and generated 6 comments.

Comments suppressed due to low confidence (2)

src/utils/model/modelStrings.ts:33

  • providerKey now uses the github column directly, but ALL_MODEL_CONFIGS[*].github is still set to 'github:copilot' for every tier (see src/utils/model/configs.ts, e.g. CLAUDE_SONNET_4_6_CONFIG.github). With this change, GitHub tier defaults collapse back to the same alias rather than resolving to tier-specific concrete model IDs. Either update configs.ts to contain the intended real GitHub model IDs per tier, or keep the previous fallback (provider === 'github' ? 'openai' : provider) until those mappings exist.
function getBuiltinModelStrings(provider: APIProvider): ModelStrings {
  // Codex piggybacks on the OpenAI provider transport for Anthropic tier aliases.
  // Reuse OpenAI mappings so model string lookups never return undefined.
  // GitHub has its own real Copilot model IDs in configs.ts — use the 'github' column directly.
  const providerKey = provider === 'codex' ? 'openai' : provider
  const out = {} as ModelStrings
  for (const key of MODEL_KEYS) {
    out[key] = ALL_MODEL_CONFIGS[key][providerKey]
  }

src/utils/model/openaiContextWindows.ts:24

  • OPENAI_CONTEXT_WINDOWS no longer includes an entry for github:copilot, but GitHub mode still defaults to that model string in several places. This will trigger the [context] Warning: model "github:copilot" not in context window table path and fall back to the generic 128k window on every run. Either keep a github:copilot context-window entry (to avoid warning spam / improve accuracy) or ensure GitHub requests are always resolved to a concrete model ID that exists in this table before context-window lookup.
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5':               1_050_000,
  'gpt-5.5-mini':            400_000,
  'gpt-5.5-nano':            400_000,
  'gpt-5.4':               1_050_000,
  'gpt-5.4-mini':            400_000,
  'gpt-5.4-nano':            400_000,
  'gpt-4o':                   128_000,
  'gpt-4o-mini':              128_000,
  'gpt-4.1':                  1_047_576,


Comment thread src/services/api/providerConfig.ts
Comment thread src/services/api/openaiShim.ts
Comment thread src/services/api/openaiShim.ts
Comment thread src/components/StartupScreen.ts
Comment thread src/services/api/client.ts
Comment thread src/utils/model/modelOptions.ts
Contributor

@Meetpatel006 Meetpatel006 left a comment


@LoackyBit

Bro, I checked the PR locally; it has one typo issue in file src\services\api\providerConfig.ts, line 553. Change it to options?.apiFormat instead of requestedApiFormat.

The second issue is in src\components\StartupScreen.ts. In the imports, you need to change getSettings_DEPRECATED to getSettingsForSource so the deprecated import from settings is removed, and after that, on the line with the model.js import, you need to add getPublicModelDisplayName. This will display the model name instead of the model ID.

For each model, you need to change the code like:

 const settings = getSettingsForSource('userSettings') || {}
 const model = modelOverride || settings.model || process.env.GEMINI_MODEL || 'gemini-2.0-flash'

and also modify lines 168 and 177

  • Follow-up work:
    • fix the test command
      • bun run test:provider
      • bun test --max-concurrency=1

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 22 out of 22 changed files in this pull request and generated 9 comments.

Comments suppressed due to low confidence (1)

src/utils/model/openaiContextWindows.ts:28

  • OPENAI_CONTEXT_WINDOWS no longer contains any GitHub Copilot / GitHub model IDs, but GitHub mode still routes through getOpenAIContextWindow() (see src/utils/context.ts:89-104). This will cause repeated warnings and a fallback to 128k for GitHub models (including github:copilot:* variants), potentially breaking compaction thresholds. Either restore the GitHub entries here, or introduce a dynamic context-window source from the fetched GitHub model catalog.
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5':               1_050_000,
  'gpt-5.5-mini':            400_000,
  'gpt-5.5-nano':            400_000,
  'gpt-5.4':               1_050_000,
  'gpt-5.4-mini':            400_000,
  'gpt-5.4-nano':            400_000,
  'gpt-4o':                   128_000,
  'gpt-4o-mini':              128_000,
  'gpt-4.1':                  1_047_576,
  'gpt-4.1-mini':             1_047_576,
  'gpt-4.1-nano':             1_047_576,
  'gpt-4-turbo':              128_000,
  'gpt-4':                     8_192,


Comment on lines 116 to +121
if (useGithub) {
const model = modelOverride || process.env.OPENAI_MODEL || 'github:copilot'
const baseUrl =
process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
const rawModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
const resolvedRequest = resolveProviderRequest({
model: rawModel,
baseUrl: process.env.OPENAI_BASE_URL,
})

Copilot AI Apr 29, 2026


In GitHub mode, the startup screen ignores modelOverride, settings.model, and the new GITHUB_MODEL env var, and instead reads OPENAI_MODEL directly. This can display the wrong model (and will always show the fallback when ProviderManager no longer sets OPENAI_MODEL). Prefer the same resolution order used elsewhere: modelOverride || settings.model || process.env.GITHUB_MODEL || process.env.OPENAI_MODEL || 'github:copilot', then pass that through resolveProviderRequest.

Copilot uses AI. Check for mistakes.
Comment on lines +561 to 562
const resolvedModel = descriptor.baseModel


Copilot AI Apr 29, 2026


resolveProviderRequest() now sets resolvedModel = descriptor.baseModel without any GitHub-specific normalization. This means inputs like github:gpt-4o / github:openai/gpt-4.1 will be sent to the API verbatim, even though earlier code (and tests) treated the github: prefix as a user-facing convenience that must be stripped. Reintroduce normalization for GitHub mode (at least stripping a leading github: namespace and handling the copilot alias), and add back targeted tests for these cases.

Suggested change
const resolvedModel = descriptor.baseModel
const normalizeGithubResolvedModel = (model: string): string => {
const withoutGithubNamespace = model.replace(/^github:/i, '')
return /^copilot$/i.test(withoutGithubNamespace) ? 'gpt-4o' : withoutGithubNamespace
}
const resolvedModel = isGithubMode
? normalizeGithubResolvedModel(descriptor.baseModel)
: descriptor.baseModel

Comment on lines 1351 to +1359
const request = resolveProviderRequest({ model: self.providerOverride?.model ?? params.model, baseUrl: self.providerOverride?.baseURL, reasoningEffortOverride: self.reasoningEffort })

const isGithub = isGithubModelsMode()
const response = await self._doRequest(request, params, options)
httpResponse = response

// Capture GitHub rate-limit headers from every response
if (isGithubModelsMode()) {
updateGithubRateLimit(response.headers as unknown as Headers)

Copilot AI Apr 29, 2026


isGithub is declared but never used; the rate-limit capture block re-calls isGithubModelsMode() instead. This will trip unused-variable checks in stricter TS/lint configs and is easy to fix by either removing the variable or using it in the conditional.

Comment on lines +1781 to +1806
const getGithubEndpointType = (baseUrl: string): 'copilot' | 'models' | 'other' => {
try {
const hostname = new URL(baseUrl).hostname.toLowerCase()
if (
hostname === 'models.github.ai' ||
hostname.endsWith('.github.ai') ||
hostname === 'models.inference.ai.azure.com'
) {
return 'models'
}
} catch {
const normalizedBaseUrl = baseUrl.toLowerCase()
if (
normalizedBaseUrl.includes('models.github.ai') ||
normalizedBaseUrl.includes('.github.ai') ||
normalizedBaseUrl.includes('models.inference.ai.azure.com')
) {
return 'models'
}
}

return isGithubCopilot ? 'copilot' : 'other'
}

const githubEndpointType = isGithub ? getGithubEndpointType(request.baseUrl) : 'other'
if (githubEndpointType === 'models') {

Copilot AI Apr 29, 2026


The inline getGithubEndpointType duplicates providerConfig.getGithubEndpointType, has broken indentation/trailing whitespace, and collapses all non-models GitHub URLs into 'copilot' (so a custom base URL will get Copilot headers even when it shouldn’t). Prefer reusing the shared helper and preserving a distinct 'custom'/fallback path to avoid sending Copilot-specific headers to arbitrary endpoints.

Comment on lines +429 to +434
const apiProvider = getAPIProvider()
const injectClientRequestId =
getAPIProvider() === 'firstParty' && isFirstPartyAnthropicBaseUrl()
return (input, init) => {
apiProvider === 'firstParty' && isFirstPartyAnthropicBaseUrl()
const isGithub = apiProvider === 'github'

return async (input, init) => {

Copilot AI Apr 29, 2026


isGithub is computed but unused, and the wrapper was changed to async + await without adding any post-processing. This adds overhead and can trip unused-variable linting. Either remove apiProvider/isGithub if not needed, or use this hook point to implement the intended GitHub-specific behavior (e.g. response header capture) and keep it async.

Comment on lines 259 to 266
function getGithubProviderModel(
processEnv: NodeJS.ProcessEnv = process.env,
): string {
if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
return processEnv.OPENAI_MODEL?.trim() || GITHUB_PROVIDER_DEFAULT_MODEL
return ''
}
return GITHUB_PROVIDER_DEFAULT_MODEL
return ''
}

Copilot AI Apr 29, 2026


getGithubProviderModel() now always returns an empty string, so GitHub provider summaries never show the configured model (and the function becomes dead code). If the intent is to hide the model, consider inlining/removing this helper; otherwise, return the resolved model from GITHUB_MODEL/OPENAI_MODEL/settings so the summary remains informative.

Comment on lines +28 to 32
// GitHub has its own real Copilot model IDs in configs.ts — use the 'github' column directly.
const providerKey = provider === 'codex' ? 'openai' : provider
const out = {} as ModelStrings
for (const key of MODEL_KEYS) {
out[key] = ALL_MODEL_CONFIGS[key][providerKey]

Copilot AI Apr 29, 2026


This change switches GitHub model-string lookup to use the github column in ALL_MODEL_CONFIGS, but src/utils/model/configs.ts still has github: 'github:copilot' for the built-in configs (e.g. configs.ts:31+). As a result, GitHub model string resolution will still collapse to the alias and won’t provide the promised model transparency/control. Either update the GitHub column entries to real model IDs, or keep the previous fallback behavior until configs are migrated.

Comment thread src/utils/model/model.ts
Comment on lines 118 to 122
const isOpenAIShimProvider =
provider === 'openai' ||
provider === 'codex' ||
provider === 'github' ||
provider === 'nvidia-nim' ||
provider === 'minimax' ||

Copilot AI Apr 29, 2026


getUserSpecifiedModelSetting() no longer treats the GitHub provider as an OpenAI-shim provider, so GitHub sessions ignore env-based model selection (including the newly introduced GITHUB_MODEL) and fall back to settings.model/default. This makes GITHUB_MODEL ineffective for the main loop model. Include provider === 'github' in this branch and prefer process.env.GITHUB_MODEL (with OPENAI_MODEL as legacy fallback) when GitHub mode is active.

Comment thread src/utils/model/model.ts
@@ -343,7 +326,6 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
const settings = getSettings_DEPRECATED() || {}
return (
normalizeModelSetting(settings.model) ||

Copilot AI Apr 29, 2026


The GitHub branch comment says "check settings.model first, then env", but the implementation only checks settings and then hardcodes 'github:copilot'. This prevents GITHUB_MODEL (or legacy OPENAI_MODEL) from influencing the default model selection when running with CLAUDE_CODE_USE_GITHUB=1. Consider adding process.env.GITHUB_MODEL (and possibly process.env.OPENAI_MODEL for backward compatibility) to this priority chain.

Suggested change
normalizeModelSetting(settings.model) ||
normalizeModelSetting(settings.model) ||
normalizeModelSetting(process.env.GITHUB_MODEL) ||
normalizeModelSetting(process.env.OPENAI_MODEL) ||

@LoackyBit
Author

model_picker_enabled indicates whether a GitHub model is intended to be selectable in the model picker, not just enabled by policy.
The current code filters only by policy.state !== 'disabled' and ignores model_picker_enabled, which can expose models that are not meant for end-user selection.

| Option | Pros | Cons |
| --- | --- | --- |
| Keep `model_picker_enabled === true` in the filter | Safer long-term; aligns with GitHub's schema; avoids exposing hidden models; cleaner UX because users only see models intended to be selectable. | Some working models might be hidden if GitHub marks them as non-pickable even though they still work today. |
| Leave `model_picker_enabled` out of the filter | More permissive; more models remain visible and usable right now. | May expose models GitHub does not intend users to select; those models could break or become restricted later; inconsistent with the schema because the field is parsed but ignored. |

What do we want to optimize for here: strict alignment with GitHub's intended picker behavior, or broader compatibility with models that currently work in practice?
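For reference, the stricter variant under discussion would look something like this. A sketch only: the `CopilotModel` shape is assumed from the two fields named in this thread, not taken from the actual `/models` response schema.

```typescript
// Assumed minimal shape of an entry from the Copilot /models endpoint.
interface CopilotModel {
  id: string;
  model_picker_enabled?: boolean;
  policy?: { state?: string };
}

// Stricter filter: a model must be both picker-enabled and not disabled
// by policy to appear in the /model picker.
function pickerModels(data: CopilotModel[]): CopilotModel[] {
  return data.filter(
    m => m.model_picker_enabled === true && m.policy?.state !== 'disabled',
  );
}
```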

@Meetpatel006
Contributor

I can get back to you after a short time.

Contributor

@Meetpatel006 Meetpatel006 left a comment


@LoackyBit,
Sorry about that, but you need to enforce it so the new code in models.ts looks like the snippet below:

.filter(model => model.model_picker_enabled && model.policy?.state !== 'disabled')

You also need to modify the test file accordingly.

I checked your PR locally; without enforcing this, the picker shows some deprecated and duplicate items, so enforcing it is the safer option.

@LoackyBit
Author

> @LoackyBit, sorry about that, but you need to enforce it so the new code in models.ts looks like the snippet below:
>
> `.filter(model => model.model_picker_enabled && model.policy?.state !== 'disabled')`
>
> You also need to modify the test file accordingly.
>
> I checked your PR locally; without enforcing this, the picker shows some deprecated and duplicate items, so enforcing it is the safer option.

Hey @Meetpatel006, I checked this locally too and I'd like to clarify a few things before we enforce that filter.

Regarding the duplicates: what you may be seeing as duplicates (e.g. search agent A, search agent B, search agent C) are actually distinct models; they are not the same model listed multiple times. They share a naming pattern, but each has a different model ID and serves a different purpose.

Regarding `policy?.state !== 'disabled'`: filtering out disabled models makes sense at first glance, but it would also hide embedding models and search/retrieval models that are not usable as primary chat models yet are perfectly valid for sub-agent use cases. OpenClaude (or a user) could configure these as sub-agents, for example a dedicated search sub-agent or an embedding sub-agent, and those roles require the model to be visible in the catalog even if model_picker_enabled is false or the policy state is disabled for direct use.

My suggestion is to remove the filter entirely and expose the full catalog as-is. OpenClaude's users are developers: they need to know every available model exists, including embedding models and search/retrieval models. Hiding them just because they are not meant for the standard chat picker takes that choice away from the developer entirely.

The GitHub catalog is the source of truth. We should expose it as-is and trust the developer to pick what they need.

@Meetpatel006
Contributor

@LoackyBit

> Regarding the duplicates: what you may be seeing as duplicates (e.g. search agent A, search agent B, search agent C) are actually distinct models; they are not the same model listed multiple times. They share a naming pattern, but each has a different model ID and serves a different purpose.

I have seen it like this:


These duplicate-looking models are just different model IDs with different versions.


This is what happens when I select search agent A and the other search agents (all three just search the codebase; they are not LLMs).


And regarding this:

> My suggestion is to remove the filter entirely and expose the full catalog as-is. OpenClaude's users are developers: they need to know every available model exists, including embedding models and search/retrieval models. Hiding them just because they are not meant for the standard chat picker takes that choice away from the developer entirely.
>
> The GitHub catalog is the source of truth. We should expose it as-is and trust the developer to pick what they need.

I tried it with an embedding model; it returned a 400 error. To actually use it, we would need to expose the endpoint from opencode.

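That 400 failure is exactly the case the session-level blocklist added in this PR (`markGithubModelUnsupported`) is meant to absorb. A minimal sketch of that idea, with internals assumed from the PR summary rather than the actual code:

```typescript
// Session-level blocklist: once a model fails at runtime with
// 400 model_not_supported, hide it for the rest of the session.
const unsupportedGithubModels = new Set<string>();

function markGithubModelUnsupported(modelId: string): void {
  unsupportedGithubModels.add(modelId);
}

function isGithubModelUsable(modelId: string): boolean {
  return !unsupportedGithubModels.has(modelId);
}
```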

@Meetpatel006
Contributor

> My suggestion is to remove the filter entirely and expose the full catalog as-is. OpenClaude's users are developers: they need to know every available model exists, including embedding models and search/retrieval models. Hiding them just because they are not meant for the standard chat picker takes that choice away from the developer entirely.

What's your opinion on this, @Vasanthdev2004 @gnanam1990?

We should not remove the filter entirely; it's the only safety gate we have. If we want full catalog visibility, let's add an opt-in path rather than changing the default behavior.

@jatmn
Collaborator

jatmn commented Apr 30, 2026

@Meetpatel006 I would try rebasing this onto #910 locally and see what happens.
#910 already changes quite a bit about how models are exposed via /model.

@Meetpatel006
Contributor

> @Meetpatel006 I would try rebasing this onto #910 locally and see what happens. #910 already changes quite a bit about how models are exposed via /model.

Okay, let's see what happens.

@LoackyBit
Author

> If we want full catalog visibility, let's add an explicit activation path rather than modifying the default behavior.

@Meetpatel006 That seems like a good compromise. However, we need to wait for #910 to be completed before proceeding. That said, on second thought, I agree with you. The only thing I would have liked was to use search and embedding models for the sub-agents; it would have been very useful, but that can be discussed in another PR. If you agree, I'll close the discussion.

@Vasanthdev2004
Collaborator

My take: keep the default /model picker safe and user-runnable.

So I agree with keeping the filter as the default path:

model.model_picker_enabled && model.policy?.state !== 'disabled'

Embedding/search/retrieval models are interesting for future sub-agent or advanced routing work, but exposing the full catalog in the normal picker means users can select models that immediately fail as primary chat models. That makes the UX noisier and harder to support.

If we want full catalog visibility later, I would treat that as a separate opt-in/advanced mode after #910 settles the provider/model exposure shape. For this PR, defaulting to the safe picker behavior is the right direction from my side.
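A possible shape for that opt-in path, purely as illustration: `GITHUB_FULL_CATALOG` is a hypothetical flag name, not something agreed in this thread, and the model shape is assumed from the fields discussed above.

```typescript
// Assumed minimal shape of a catalog entry.
interface CatalogModel {
  id: string;
  model_picker_enabled?: boolean;
  policy?: { state?: string };
}

// Default: the safe picker filter. Opt-in: expose the full catalog for
// advanced use cases such as sub-agent configuration.
function visibleModels(
  catalog: CatalogModel[],
  env: Record<string, string | undefined>,
): CatalogModel[] {
  if (env.GITHUB_FULL_CATALOG === '1') return catalog;
  return catalog.filter(
    m => m.model_picker_enabled === true && m.policy?.state !== 'disabled',
  );
}
```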

@Meetpatel006
Contributor

> If we want full catalog visibility, let's add an explicit activation path rather than modifying the default behavior.
>
> @Meetpatel006 That seems like a good compromise. However, we need to wait for #910 to be completed before proceeding. That said, on second thought, I agree with you. The only thing I would have liked was to use search and embedding models for the sub-agents; it would have been very useful, but that can be discussed in another PR. If you agree, I'll close the discussion.

You can close the discussion; I am fine with it.

@LoackyBit LoackyBit marked this pull request as draft May 2, 2026 15:20


Development

Successfully merging this pull request may close these issues.

[GitHub] github:copilot is a routing alias — no model control or visibility

5 participants