[GitHub] github:copilot is a routing alias — no model control or visibility #897
LoackyBit wants to merge 17 commits into Gitlawb:main from
Conversation
…rate-limit (Gitlawb#822)

Add GITHUB_MODEL env var support with higher priority than OPENAI_MODEL (backward-compatible fallback). Map the default copilot alias to "auto" on the Copilot API for the ~10% discount. Show the resolved model name in the startup screen instead of the raw "github:copilot" alias. Parse x-ratelimit-* headers from GitHub API responses and surface them in the session cost summary at exit. Closes Gitlawb#822
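The rate-limit surfacing described in this commit can be sketched roughly as below. This is a minimal, hedged sketch: the `GithubRateLimitInfo` shape and the `parseGithubRateLimit` helper name are illustrative assumptions, not the PR's actual code; only the `x-ratelimit-*` header names follow GitHub's REST conventions.

```typescript
interface GithubRateLimitInfo {
  limit?: number
  remaining?: number
  resetEpochSeconds?: number
}

// Parse the standard GitHub rate-limit headers from a fetch Response's headers.
function parseGithubRateLimit(headers: Headers): GithubRateLimitInfo {
  const num = (name: string): number | undefined => {
    const raw = headers.get(name)
    const parsed = raw === null ? NaN : Number(raw)
    return Number.isFinite(parsed) ? parsed : undefined
  }
  return {
    limit: num('x-ratelimit-limit'),
    remaining: num('x-ratelimit-remaining'),
    resetEpochSeconds: num('x-ratelimit-reset'),
  }
}

const info = parseGithubRateLimit(
  new Headers({ 'x-ratelimit-limit': '5000', 'x-ratelimit-remaining': '4999' }),
)
console.log(info.remaining) // 4999
```

The parsed values could then feed a session cost summary at exit, as the commit describes.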
…able and migrate from copilot aliases to explicit models
[Feature] Add GITHUB_MODEL env var, resolved model display, and rate-limit surfacing (Gitlawb#822)
After looking into it more, I realized this is more complex than expected and requires deeper expertise in the provider/model handling. I don't have the technical background to fix it properly. I'll leave everything as is in case it can be useful to someone else.
Commit 1c38084 was built on the first version of issue #822 (opened 3 days ago), which in its original formulation described the issue differently than it is today.
The first version of the issue proposed the main direction of adding a GITHUB_MODEL environment variable, surfacing the resolved model, and optional parsing of rate-limit/quota headers—an approach more geared toward explicit configuration via env vars.
The current issue description has been updated and focuses on the root cause in src/utils/model/configs.ts, with a more precise technical analysis of the hardcoded github:copilot alias.
As a result, the changes introduced by this commit reflect the original, outdated vision of the issue, potentially making it misaligned with the current scope and direction of the expected fix. It is recommended that you reevaluate your changes against the current version of the issue before merging.
Edited: repository cleaned up - old files removed, only new issue-related files remain.
…efetching and provider integration
feat: implement dynamic model fetching from GitHub Models API with pr…
@LoackyBit, this feature was mostly built by me, so I'm ready to help with this if you need it.
@Meetpatel006 I created the PR using AI, as mentioned in the first comment. Initially, I thought I wasn't able to solve the problem, but in the latest commits, I should have successfully connected the GitHub API. That said, I think a review from you is necessary at this point. So far, I'm having this problem:
In my case, one that works is
Can you tell me which Copilot plan you are using, and for this 400 issue, which model you are trying?
I also think the code doesn't use the correct list of models. The one it uses is actually the list from the GitHub catalog, when instead it should use the list of models available based on each user's plan. I looked into how opencode correctly fetches the model list filtered by user plan, and the fix is simpler than expected. The current problem is that

How opencode handles this instead:
```ts
data.data.filter(
  (m) => m.model_picker_enabled && m.policy?.state !== "disabled"
)
```

The key field is `model_picker_enabled`. Proposed fix:
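The opencode-style plan-aware filter quoted above can be sketched as a standalone function. The `CopilotModel` interface and the mock catalog entries below are illustrative assumptions about the response shape, not the actual Copilot API schema; only the `model_picker_enabled` and `policy.state` fields come from the discussion.

```typescript
interface CopilotModel {
  id: string
  model_picker_enabled: boolean
  policy?: { state: string }
}

// Keep only models GitHub marks as pickable and not disabled by org policy.
function filterPlanModels(models: CopilotModel[]): CopilotModel[] {
  return models.filter(
    (m) => m.model_picker_enabled && m.policy?.state !== 'disabled',
  )
}

// Mock catalog (illustrative IDs, not real catalog entries):
const mockCatalog: CopilotModel[] = [
  { id: 'gpt-4o', model_picker_enabled: true, policy: { state: 'enabled' } },
  { id: 'legacy-model', model_picker_enabled: false },
  { id: 'blocked-model', model_picker_enabled: true, policy: { state: 'disabled' } },
]
console.log(filterPlanModels(mockCatalog).map((m) => m.id)) // [ 'gpt-4o' ]
```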
I have the Education plan and was trying to use the
- Implement dynamic fetching of GitHub models with caching.
- Add tests for GitHub model options and fetching logic.
- Enhance model picker to support GitHub models.
- Introduce error handling for unsupported GitHub models.

Co-authored-by: Copilot <copilot@github.com>
Commit d1c1829 summary:
Remaining issue (models that still fail at inference):
I believe this will be impacted by #910
Here I think the main issue for GitHub Models is that, for the new models like the GPT Codex models, the endpoint is this. For more help, check out the official docs by GitHub:
https://docs.github.com/en/rest/models/inference?apiVersion=2026-03-10#run-an-inference-request
https://docs.github.com/en/rest/about-the-rest-api/api-versions?apiVersion=2026-03-10
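The split described above (newer model families on a different inference endpoint, legacy models on the Copilot endpoint) could be sketched like this. The function name, the exact match rules, and the choice of `models.inference.ai.azure.com` as the new-generation base URL are assumptions drawn from this thread and the PR summary, not verified routing logic.

```typescript
// Route newer model families to the GitHub Models / Azure inference endpoint;
// keep everything else on the legacy Copilot endpoint.
function pickGithubBaseUrl(model: string): string {
  const normalized = model.toLowerCase()
  const isNewGeneration =
    normalized.startsWith('openai/') ||
    normalized.startsWith('gpt-5') ||
    normalized.includes('codex')
  return isNewGeneration
    ? 'https://models.inference.ai.azure.com' // assumed new-generation endpoint
    : 'https://api.githubcopilot.com' // legacy Copilot endpoint
}

console.log(pickGithubBaseUrl('openai/gpt-5-codex'))
console.log(pickGithubBaseUrl('claude-sonnet-4.6'))
```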
@Meetpatel006 I can't see the endpoints in the comment
I tried your PR locally; it has some issues you need to address across the whole test suite, and in my case it did not even build properly, so can you check it?
…model-specific unsupported state management
Thanks for the detailed feedback, @Meetpatel006. Here's a summary of the last commit:
I've also ensured that the
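The "model-specific unsupported state management" named in the commit above could look something like the sketch below. This is an assumption about the mechanism, not the PR's actual code; only the `markGithubModelUnsupported` name and the `400 model_not_supported` trigger appear in this thread.

```typescript
// In-memory set of model IDs that failed at inference; a real implementation
// might persist this per session or invalidate it when the catalog refreshes.
const unsupportedGithubModels = new Set<string>()

function markGithubModelUnsupported(modelId: string): void {
  // Called after the API returns 400 model_not_supported for this model.
  unsupportedGithubModels.add(modelId)
}

function isGithubModelSupported(modelId: string): boolean {
  return !unsupportedGithubModels.has(modelId)
}

markGithubModelUnsupported('some-preview-model') // hypothetical model ID
console.log(isGithubModelSupported('some-preview-model')) // false
console.log(isGithubModelSupported('gpt-4o')) // true
```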
… GitHub API via Zod-validated schema
I've identified the reason why the GitHub Copilot model list wasn't working in openclaude: instead of asking Copilot directly "What models does this user have access to?", previous attempts tried to guess or hardcode the answer using environment variables and static lists, a fragile method that was always out of sync with the user's actual plan. The solution uses the opencode approach, which simply calls the official Copilot API at startup, gets the actual model list for that specific user, and uses it. Some models remain unsupported, despite the list being correct, but this isn't a major problem. At least the model list is now correct.
Looks fine to me; I can check locally and tell you any suggestions.
Pull request overview
Copilot reviewed 24 out of 24 changed files in this pull request and generated 11 comments.
Comments suppressed due to low confidence (1)
src/utils/model/openaiContextWindows.ts:24
- GitHub Copilot/GitHub provider is treated as an OpenAI-compatible provider in src/utils/context.ts, but OPENAI_CONTEXT_WINDOWS no longer contains entries for Copilot namespaced models (e.g., github:copilot / github:copilot:). When users pick Claude/Gemini/etc via GitHub’s catalog, getContextWindowForModel will fall back to the conservative 128k default and spam warnings, and may compact too aggressively. Consider restoring GitHub entries here or resolving context window/max output tokens from the cached GitHub model catalog limits when provider=github.
```ts
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5': 1_050_000,
  'gpt-5.5-mini': 400_000,
  'gpt-5.5-nano': 400_000,
  'gpt-5.4': 1_050_000,
  'gpt-5.4-mini': 400_000,
  'gpt-5.4-nano': 400_000,
  'gpt-4o': 128_000,
  'gpt-4o-mini': 128_000,
  'gpt-4.1': 1_047_576,
```
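The fallback behavior the reviewer warns about (unknown model IDs dropping to a conservative 128k window with a warning) can be sketched as follows. The lookup function and constant names are assumptions mirroring the snippet above, not the repository's exact code.

```typescript
const CONTEXT_WINDOWS: Record<string, number> = {
  'gpt-4o': 128_000,
  'gpt-4.1': 1_047_576,
}
const DEFAULT_CONTEXT_WINDOW = 128_000 // conservative fallback

function getContextWindowForModel(model: string): number {
  const known = CONTEXT_WINDOWS[model]
  if (known === undefined) {
    // Unknown model IDs warn and fall back to the conservative default,
    // which is what triggers the warning spam described in the review.
    console.warn(`[context] Warning: model "${model}" not in context window table`)
    return DEFAULT_CONTEXT_WINDOW
  }
  return known
}

console.log(getContextWindowForModel('gpt-4.1')) // 1047576
console.log(getContextWindowForModel('github:copilot')) // 128000 (with warning)
```

This is why removing the `github:copilot` entries without resolving to concrete model IDs first degrades compaction accuracy.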
Force-pushed d3b94b9 to 6d02223
Pull request overview
Copilot reviewed 21 out of 21 changed files in this pull request and generated 6 comments.
Comments suppressed due to low confidence (2)
src/utils/model/modelStrings.ts:33
`providerKey` now uses the `github` column directly, but `ALL_MODEL_CONFIGS[*].github` is still set to `'github:copilot'` for every tier (see `src/utils/model/configs.ts`, e.g. `CLAUDE_SONNET_4_6_CONFIG.github`). With this change, GitHub tier defaults collapse back to the same alias rather than resolving to tier-specific concrete model IDs. Either update `configs.ts` to contain the intended real GitHub model IDs per tier, or keep the previous fallback (`provider === 'github' ? 'openai' : provider`) until those mappings exist.
```ts
function getBuiltinModelStrings(provider: APIProvider): ModelStrings {
  // Codex piggybacks on the OpenAI provider transport for Anthropic tier aliases.
  // Reuse OpenAI mappings so model string lookups never return undefined.
  // GitHub has its own real Copilot model IDs in configs.ts — use the 'github' column directly.
  const providerKey = provider === 'codex' ? 'openai' : provider
  const out = {} as ModelStrings
  for (const key of MODEL_KEYS) {
    out[key] = ALL_MODEL_CONFIGS[key][providerKey]
  }
```
src/utils/model/openaiContextWindows.ts:24
`OPENAI_CONTEXT_WINDOWS` no longer includes an entry for `github:copilot`, but GitHub mode still defaults to that model string in several places. This will trigger the `[context] Warning: model "github:copilot" not in context window table` path and fall back to the generic 128k window on every run. Either keep a `github:copilot` context-window entry (to avoid warning spam / improve accuracy) or ensure GitHub requests are always resolved to a concrete model ID that exists in this table before context-window lookup.
```ts
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5': 1_050_000,
  'gpt-5.5-mini': 400_000,
  'gpt-5.5-nano': 400_000,
  'gpt-5.4': 1_050_000,
  'gpt-5.4-mini': 400_000,
  'gpt-5.4-nano': 400_000,
  'gpt-4o': 128_000,
  'gpt-4o-mini': 128_000,
  'gpt-4.1': 1_047_576,
```
Meetpatel006 left a comment
Bro, I checked the PR locally; it has one issue, a typo in `src\services\api\providerConfig.ts` line 553. Change it to `options?.apiFormat` instead of `requestedApiFormat`.
The second issue is in `src\components\StartupScreen.ts`. In the imports, you need to change `getSettings_DEPRECATED` to `getSettingsForSource` so the deprecated settings import is removed, and after this line, in the `model.js` import, you need to add `getPublicModelDisplayName`. This will display the model name instead of the model ID.
For each model you need to change the code like:

```ts
const settings = getSettingsForSource('userSettings') || {}
const model = modelOverride || settings.model || process.env.GEMINI_MODEL || 'gemini-2.0-flash'
```

Also modify line numbers 168 and 177.
- Follow-up work:
  - fix the test commands:
    - `bun run test:provider`
    - `bun test --max-concurrency=1`
Pull request overview
Copilot reviewed 22 out of 22 changed files in this pull request and generated 9 comments.
Comments suppressed due to low confidence (1)
src/utils/model/openaiContextWindows.ts:28
`OPENAI_CONTEXT_WINDOWS` no longer contains any GitHub Copilot / GitHub model IDs, but GitHub mode still routes through `getOpenAIContextWindow()` (see `src/utils/context.ts:89-104`). This will cause repeated warnings and a fallback to 128k for GitHub models (including `github:copilot:*` variants), potentially breaking compaction thresholds. Either restore the GitHub entries here, or introduce a dynamic context-window source from the fetched GitHub model catalog.
```ts
const OPENAI_CONTEXT_WINDOWS: Record<string, number> = {
  // OpenAI
  'gpt-5.5': 1_050_000,
  'gpt-5.5-mini': 400_000,
  'gpt-5.5-nano': 400_000,
  'gpt-5.4': 1_050_000,
  'gpt-5.4-mini': 400_000,
  'gpt-5.4-nano': 400_000,
  'gpt-4o': 128_000,
  'gpt-4o-mini': 128_000,
  'gpt-4.1': 1_047_576,
  'gpt-4.1-mini': 1_047_576,
  'gpt-4.1-nano': 1_047_576,
  'gpt-4-turbo': 128_000,
  'gpt-4': 8_192,
```
```diff
 if (useGithub) {
-  const model = modelOverride || process.env.OPENAI_MODEL || 'github:copilot'
-  const baseUrl =
-    process.env.OPENAI_BASE_URL || 'https://api.githubcopilot.com'
-  return { name: 'GitHub Copilot', model, baseUrl, isLocal: false }
+  const rawModel = process.env.OPENAI_MODEL?.trim() || 'github:copilot'
+  const resolvedRequest = resolveProviderRequest({
+    model: rawModel,
+    baseUrl: process.env.OPENAI_BASE_URL,
+  })
```
In GitHub mode, the startup screen ignores modelOverride, settings.model, and the new GITHUB_MODEL env var, and instead reads OPENAI_MODEL directly. This can display the wrong model (and will always show the fallback when ProviderManager no longer sets OPENAI_MODEL). Prefer the same resolution order used elsewhere: modelOverride || settings.model || process.env.GITHUB_MODEL || process.env.OPENAI_MODEL || 'github:copilot', then pass that through resolveProviderRequest.
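The resolution order recommended in the comment above can be sketched as a small helper. The function name and option shape are illustrative assumptions; the precedence chain itself (`modelOverride` → `settings.model` → `GITHUB_MODEL` → `OPENAI_MODEL` → `'github:copilot'`) is taken verbatim from the review comment.

```typescript
type Env = Record<string, string | undefined>

// Resolve the model to display on the startup screen in GitHub mode.
function resolveGithubStartupModel(opts: {
  modelOverride?: string
  settingsModel?: string
  env: Env
}): string {
  return (
    opts.modelOverride ||
    opts.settingsModel ||
    opts.env.GITHUB_MODEL ||
    opts.env.OPENAI_MODEL || // legacy fallback
    'github:copilot'
  )
}

console.log(resolveGithubStartupModel({ env: { GITHUB_MODEL: 'gpt-4o' } })) // gpt-4o
console.log(resolveGithubStartupModel({ env: {} })) // github:copilot
```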
```ts
const resolvedModel = descriptor.baseModel
```
resolveProviderRequest() now sets resolvedModel = descriptor.baseModel without any GitHub-specific normalization. This means inputs like github:gpt-4o / github:openai/gpt-4.1 will be sent to the API verbatim, even though earlier code (and tests) treated the github: prefix as a user-facing convenience that must be stripped. Reintroduce normalization for GitHub mode (at least stripping a leading github: namespace and handling the copilot alias), and add back targeted tests for these cases.
```diff
-const resolvedModel = descriptor.baseModel
+const normalizeGithubResolvedModel = (model: string): string => {
+  const withoutGithubNamespace = model.replace(/^github:/i, '')
+  return /^copilot$/i.test(withoutGithubNamespace) ? 'gpt-4o' : withoutGithubNamespace
+}
+const resolvedModel = isGithubMode
+  ? normalizeGithubResolvedModel(descriptor.baseModel)
+  : descriptor.baseModel
```
```ts
const request = resolveProviderRequest({ model: self.providerOverride?.model ?? params.model, baseUrl: self.providerOverride?.baseURL, reasoningEffortOverride: self.reasoningEffort })
// …
const isGithub = isGithubModelsMode()
const response = await self._doRequest(request, params, options)
httpResponse = response
// …
// Capture GitHub rate-limit headers from every response
if (isGithubModelsMode()) {
  updateGithubRateLimit(response.headers as unknown as Headers)
```
isGithub is declared but never used; the rate-limit capture block re-calls isGithubModelsMode() instead. This will trip unused-variable checks in stricter TS/lint configs and is easy to fix by either removing the variable or using it in the conditional.
```ts
const getGithubEndpointType = (baseUrl: string): 'copilot' | 'models' | 'other' => {
  try {
    const hostname = new URL(baseUrl).hostname.toLowerCase()
    if (
      hostname === 'models.github.ai' ||
      hostname.endsWith('.github.ai') ||
      hostname === 'models.inference.ai.azure.com'
    ) {
      return 'models'
    }
  } catch {
    const normalizedBaseUrl = baseUrl.toLowerCase()
    if (
      normalizedBaseUrl.includes('models.github.ai') ||
      normalizedBaseUrl.includes('.github.ai') ||
      normalizedBaseUrl.includes('models.inference.ai.azure.com')
    ) {
      return 'models'
    }
  }

  return isGithubCopilot ? 'copilot' : 'other'
}

const githubEndpointType = isGithub ? getGithubEndpointType(request.baseUrl) : 'other'
if (githubEndpointType === 'models') {
```
The inline getGithubEndpointType duplicates providerConfig.getGithubEndpointType, has broken indentation/trailing whitespace, and collapses all non-models GitHub URLs into 'copilot' (so a custom base URL will get Copilot headers even when it shouldn’t). Prefer reusing the shared helper and preserving a distinct 'custom'/fallback path to avoid sending Copilot-specific headers to arbitrary endpoints.
```diff
+const apiProvider = getAPIProvider()
 const injectClientRequestId =
-  getAPIProvider() === 'firstParty' && isFirstPartyAnthropicBaseUrl()
-return (input, init) => {
+  apiProvider === 'firstParty' && isFirstPartyAnthropicBaseUrl()
+const isGithub = apiProvider === 'github'
+
+return async (input, init) => {
```
isGithub is computed but unused, and the wrapper was changed to async + await without adding any post-processing. This adds overhead and can trip unused-variable linting. Either remove apiProvider/isGithub if not needed, or use this hook point to implement the intended GitHub-specific behavior (e.g. response header capture) and keep it async.
```diff
 function getGithubProviderModel(
   processEnv: NodeJS.ProcessEnv = process.env,
 ): string {
   if (isEnvTruthy(processEnv.CLAUDE_CODE_USE_GITHUB)) {
-    return processEnv.OPENAI_MODEL?.trim() || GITHUB_PROVIDER_DEFAULT_MODEL
+    return ''
   }
-  return GITHUB_PROVIDER_DEFAULT_MODEL
+  return ''
 }
```
getGithubProviderModel() now always returns an empty string, so GitHub provider summaries never show the configured model (and the function becomes dead code). If the intent is to hide the model, consider inlining/removing this helper; otherwise, return the resolved model from GITHUB_MODEL/OPENAI_MODEL/settings so the summary remains informative.
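One way to keep the summary informative, along the lines the reviewer suggests, is sketched below. The `GITHUB_PROVIDER_DEFAULT_MODEL` value and the exact env precedence are assumptions based on this thread, not the repository's final implementation.

```typescript
const GITHUB_PROVIDER_DEFAULT_MODEL = 'github:copilot'

// Return the resolved model for the GitHub provider summary:
// GITHUB_MODEL first, legacy OPENAI_MODEL second, then the default alias.
function getGithubProviderModel(env: Record<string, string | undefined>): string {
  return (
    env.GITHUB_MODEL?.trim() ||
    env.OPENAI_MODEL?.trim() || // legacy fallback
    GITHUB_PROVIDER_DEFAULT_MODEL
  )
}

console.log(getGithubProviderModel({ GITHUB_MODEL: 'claude-sonnet-4.6' }))
console.log(getGithubProviderModel({})) // github:copilot
```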
```ts
// GitHub has its own real Copilot model IDs in configs.ts — use the 'github' column directly.
const providerKey = provider === 'codex' ? 'openai' : provider
const out = {} as ModelStrings
for (const key of MODEL_KEYS) {
  out[key] = ALL_MODEL_CONFIGS[key][providerKey]
```
This change switches GitHub model-string lookup to use the github column in ALL_MODEL_CONFIGS, but src/utils/model/configs.ts still has github: 'github:copilot' for the built-in configs (e.g. configs.ts:31+). As a result, GitHub model string resolution will still collapse to the alias and won’t provide the promised model transparency/control. Either update the GitHub column entries to real model IDs, or keep the previous fallback behavior until configs are migrated.
```ts
const isOpenAIShimProvider =
  provider === 'openai' ||
  provider === 'codex' ||
  provider === 'github' ||
  provider === 'nvidia-nim' ||
  provider === 'minimax' ||
```
getUserSpecifiedModelSetting() no longer treats the GitHub provider as an OpenAI-shim provider, so GitHub sessions ignore env-based model selection (including the newly introduced GITHUB_MODEL) and fall back to settings.model/default. This makes GITHUB_MODEL ineffective for the main loop model. Include provider === 'github' in this branch and prefer process.env.GITHUB_MODEL (with OPENAI_MODEL as legacy fallback) when GitHub mode is active.
```diff
@@ -343,7 +326,6 @@ export function getDefaultMainLoopModelSetting(): ModelName | ModelAlias {
   const settings = getSettings_DEPRECATED() || {}
   return (
     normalizeModelSetting(settings.model) ||
```
The GitHub branch comment says "check settings.model first, then env", but the implementation only checks settings and then hardcodes 'github:copilot'. This prevents GITHUB_MODEL (or legacy OPENAI_MODEL) from influencing the default model selection when running with CLAUDE_CODE_USE_GITHUB=1. Consider adding process.env.GITHUB_MODEL (and possibly process.env.OPENAI_MODEL for backward compatibility) to this priority chain.
```diff
-    normalizeModelSetting(settings.model) ||
+    normalizeModelSetting(settings.model) ||
+    normalizeModelSetting(process.env.GITHUB_MODEL) ||
+    normalizeModelSetting(process.env.OPENAI_MODEL) ||
```
model_picker_enabled indicates whether a GitHub model is intended to be selectable in the model picker, not just enabled by policy.
What do we want to optimize for here: strict alignment with GitHub's intended picker behavior, or broader compatibility with models that currently work in practice?
I can tell you after a short time.
Meetpatel006 left a comment
@LoackyBit ,
Sorry for that, but you need to enforce it so the new code looks like the snippet below in models.ts:
```ts
.filter(model => model.model_picker_enabled && model.policy?.state !== 'disabled')
```

And also, you need to modify the test file accordingly.
I checked your PR locally; if we don't enforce it, it shows some deprecated and also duplicate items, so enforcing is the safe option.
Hey @Meetpatel006, I checked this locally too and I'd like to clarify a few things before we enforce that filter.

Regarding the duplicates: what you may be seeing as duplicates (e.g. search agent A, search agent B, search agent C) are actually distinct models; they are not the same model listed multiple times. They share a naming pattern, but each has a different model ID and serves a different purpose.

Regarding `model_picker_enabled`: my suggestion is to remove the filter entirely and expose the full catalog as-is. OpenClaude's users are developers: they need to know every available model exists, including embedding models and search/retrieval models. Hiding them just because they are not meant for the standard chat picker takes that choice away from the developer entirely. The GitHub catalog is the source of truth. We should expose it as-is and trust the developer to pick what they need.
What's your opinion on this, @Vasanthdev2004 @gnanam1990?

We should not remove the filter entirely; it's the only safety gate we have. If we want full catalog visibility, add an opt-in path rather than changing the default behavior.
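An opt-in path as suggested could be as simple as the sketch below. The flag and function names are hypothetical, not part of this PR; only the default filter expression comes from the thread.

```typescript
interface CatalogModel {
  id: string
  model_picker_enabled: boolean
  policy?: { state: string }
}

// Default: safe picker filter. Opt-in (e.g. via a hypothetical
// GITHUB_SHOW_ALL_MODELS env flag): expose the full catalog as-is.
function selectVisibleModels(models: CatalogModel[], showAll: boolean): CatalogModel[] {
  if (showAll) return models // full catalog, including embedding/search models
  return models.filter(
    (m) => m.model_picker_enabled && m.policy?.state !== 'disabled',
  )
}

const sampleCatalog: CatalogModel[] = [
  { id: 'chat-model', model_picker_enabled: true },
  { id: 'embedding-model', model_picker_enabled: false },
]
console.log(selectVisibleModels(sampleCatalog, false).length) // 1
console.log(selectVisibleModels(sampleCatalog, true).length) // 2
```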
@Meetpatel006 I would try locally to rebase this onto #910 and see what happens.
Okay, let's see what happens.
@Meetpatel006 seems like a good compromise. However, we need to wait for #910 to be completed before proceeding. That said, on second thought, I agree with you. The only thing I would have liked was to use search and embedding models for the subagents; it would have been very useful, but that could be discussed in another PR. If you agree, I'll close the discussion.
My take: keep the default filter. So I agree with keeping the filter as the default path:

```ts
model.model_picker_enabled && model.policy?.state !== 'disabled'
```

Embedding/search/retrieval models are interesting for future sub-agent or advanced routing work, but exposing the full catalog in the normal picker means users can select models that immediately fail as primary chat models. That makes the UX noisier and harder to support. If we want full catalog visibility later, I would treat that as a separate opt-in/advanced mode after #910 settles the provider/model exposure shape. For this PR, defaulting to the safe picker behavior is the right direction from my side.
You can close the discussion; I'm fine with it.






Summary

- Replaces the hardcoded `github:copilot` alias with real model IDs in `configs.ts` (e.g. `claude-sonnet-4.6`, `gpt-5-mini`).
- Switches model fetching from the `models.github.ai/catalog/models` endpoint to the authenticated Copilot `/models` endpoint, so the model list reflects the user's actual plan.
- New models (`openai/*`, `gpt-5*`, `*codex*`) are routed to the Azure inference endpoint with API version `2026-03-10`; legacy models continue to use `api.githubcopilot.com`.
- Adds runtime state management (`markGithubModelUnsupported`) to hide models that return `400 model_not_supported` at runtime.
- Adds `githubRateLimit.ts` to capture `x-ratelimit-*` headers; the startup screen and cost-tracker now display the resolved model ID and remaining quota.
- `Authorization` header.

Testing
- `bun run build`
- `bun run smoke`
- `providerConfig.github.test.ts` (25 tests passed)
- `githubRateLimit.test.ts` (7 tests passed)
- `model.github.test.ts` and `modelStrings.github.test.ts`

The current filter for the model picker is:
`model_picker_enabled` is parsed but not used as a filter condition. This needs a decision:

- enforce `model_picker_enabled === true`
- keep the broader list (ignore `model_picker_enabled`)

Should we optimize for strict alignment with GitHub's intended picker behavior, or broader compatibility with models that currently work? Vote here: #964
Resolving this is required before merging. cc @Meetpatel006
Closes #822