19 changes: 19 additions & 0 deletions .env.example
@@ -30,6 +30,12 @@
# OPENAI_MODEL=gpt-4o
# OPENAI_BASE_URL=https://api.openai.com/v1 (optional)
#
# Option 2a — AI/ML API:
# CLAUDE_CODE_USE_OPENAI=1
# AIMLAPI_API_KEY=your-aimlapi-key-here
# OPENAI_BASE_URL=https://api.aimlapi.com/v1
# OPENAI_MODEL=gpt-4o
#
# Option 3 — Google Gemini:
# CLAUDE_CODE_USE_GEMINI=1
# GEMINI_API_KEY=your-gemini-key-here
@@ -150,6 +156,19 @@ ANTHROPIC_API_KEY=sk-ant-your-key-here
# OPENAI_BASE_URL=https://api.openai.com/v1


# -----------------------------------------------------------------------------
# Option 2a: AI/ML API
# -----------------------------------------------------------------------------
# AI/ML API provides an OpenAI-compatible endpoint for 400+ models across
# chat, code, image, video, voice, embeddings, and more. OpenClaude uses
# chat/tool-capable models through the chat completions API.
#
# CLAUDE_CODE_USE_OPENAI=1
# AIMLAPI_API_KEY=your-aimlapi-key-here
# OPENAI_BASE_URL=https://api.aimlapi.com/v1
# OPENAI_MODEL=gpt-4o


# -----------------------------------------------------------------------------
# Option 3: Google Gemini
# -----------------------------------------------------------------------------
30 changes: 30 additions & 0 deletions README.md
@@ -70,6 +70,34 @@ $env:OPENAI_MODEL="gpt-4o"
openclaude
```

### Fastest AI/ML API setup

AI/ML API provides 400+ models behind an OpenAI-compatible API. OpenClaude uses chat/tool-capable models through `https://api.aimlapi.com/v1/chat/completions`.

macOS / Linux:

```bash
export CLAUDE_CODE_USE_OPENAI=1
export AIMLAPI_API_KEY=your-aimlapi-key-here
export OPENAI_BASE_URL=https://api.aimlapi.com/v1
export OPENAI_MODEL=gpt-4o

openclaude
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:AIMLAPI_API_KEY="your-aimlapi-key-here"
$env:OPENAI_BASE_URL="https://api.aimlapi.com/v1"
$env:OPENAI_MODEL="gpt-4o"

openclaude
```

You can also run `/provider`, choose `AI/ML API`, and paste your AI/ML API key. Browse available models at [aimlapi.com/models](https://aimlapi.com/models) and read the provider docs at [docs.aimlapi.com](https://docs.aimlapi.com/).

### Fastest local Ollama setup

macOS / Linux:
@@ -112,13 +140,15 @@ Beginner-friendly guides:

Advanced and source-build guides:

- [AI/ML API Setup](docs/aimlapi-setup.md)
- [Advanced Setup](docs/advanced-setup.md)
- [Android Install](ANDROID_INSTALL.md)

## Supported Providers

| Provider | Setup Path | Notes |
| --- | --- | --- |
| AI/ML API | `/provider` or env vars | First-class OpenAI-compatible setup for AI/ML API's 400+ model catalog; OpenClaude uses chat/tool-capable models |
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible `/v1` servers |
| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current `main` |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
35 changes: 35 additions & 0 deletions docs/aimlapi-setup.md
@@ -0,0 +1,35 @@
# AI/ML API Setup

OpenClaude can run through AI/ML API as a first-class OpenAI-compatible provider. AI/ML API exposes `https://api.aimlapi.com/v1`, and OpenClaude sends chat requests to `https://api.aimlapi.com/v1/chat/completions`.

## Setup with `/provider`

1. Start OpenClaude.
2. Run `/provider`.
3. Choose `AI/ML API`.
4. Paste your AI/ML API key.
5. Keep the default model `gpt-4o` or enter another chat-capable model ID.

## Setup with environment variables

macOS / Linux:

```bash
export CLAUDE_CODE_USE_OPENAI=1
export AIMLAPI_API_KEY=your-aimlapi-key-here
export OPENAI_BASE_URL=https://api.aimlapi.com/v1
export OPENAI_MODEL=gpt-4o
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:AIMLAPI_API_KEY="your-aimlapi-key-here"
$env:OPENAI_BASE_URL="https://api.aimlapi.com/v1"
$env:OPENAI_MODEL="gpt-4o"
```

OpenClaude's coding-agent workflows require chat/tool-capable models. AI/ML API also offers other modalities, such as image, video, voice, music, embeddings, OCR, and 3D generation, but those models fall outside OpenClaude's core chat/tool loop.

Browse models at [aimlapi.com/models](https://aimlapi.com/models) and read provider docs at [docs.aimlapi.com](https://docs.aimlapi.com/).
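To make the effect of these variables concrete, here is a hypothetical sketch (`buildChatRequest` is not part of OpenClaude; it only illustrates the standard OpenAI-compatible request shape the configuration above implies):

```typescript
// Hypothetical helper, not part of OpenClaude: shows the chat-completions
// request the environment variables above translate into.
function buildChatRequest(env: Record<string, string | undefined>) {
  // Strip any trailing slash so we never produce "…/v1//chat/completions".
  const baseUrl = (env.OPENAI_BASE_URL ?? 'https://api.aimlapi.com/v1').replace(/\/+$/, '')
  return {
    url: `${baseUrl}/chat/completions`,
    headers: {
      // AIMLAPI_API_KEY takes precedence; OPENAI_API_KEY is the fallback.
      Authorization: `Bearer ${env.AIMLAPI_API_KEY ?? env.OPENAI_API_KEY ?? ''}`,
      'Content-Type': 'application/json',
    },
    body: {
      model: env.OPENAI_MODEL ?? 'gpt-4o',
      messages: [{ role: 'user', content: 'Reply with OK.' }],
    },
  }
}
```

A quick check of the assembled request, assuming only `AIMLAPI_API_KEY` is set, confirms it targets `https://api.aimlapi.com/v1/chat/completions` with a `Bearer` token.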
10 changes: 8 additions & 2 deletions src/commands/provider/provider.tsx
@@ -23,6 +23,10 @@ import {
resolveCodexApiCredentials,
resolveProviderRequest,
} from '../../services/api/providerConfig.js'
import {
AIMLAPI_LABEL,
isAimlapiBaseUrl,
} from '../../providers/aimlapi/index.js'
import {
applySavedProfileToCurrentSession as applySharedProfileToCurrentSession,
buildCodexOAuthProfileEnv as buildSharedCodexOAuthProfileEnv,
@@ -207,10 +211,10 @@ export function getProviderWizardDefaults(
sanitizeProviderConfigValue(processEnv.GEMINI_MODEL, secretSource) ||
DEFAULT_GEMINI_MODEL
const safeMistralModel =
sanitizeProviderConfigValue(processEnv.MISTRAL_MODEL, processEnv) ||
sanitizeProviderConfigValue(processEnv.MISTRAL_MODEL, secretSource) ||
DEFAULT_MISTRAL_MODEL
const safeMistralBaseUrl =
sanitizeProviderConfigValue(processEnv.MISTRAL_BASE_URL, processEnv) ||
sanitizeProviderConfigValue(processEnv.MISTRAL_BASE_URL, secretSource) ||
DEFAULT_MISTRAL_BASE_URL

return {
Expand Down Expand Up @@ -287,6 +291,8 @@ export function buildCurrentProviderSummary(options?: {
let providerLabel = 'OpenAI-compatible'
if (request.transport === 'codex_responses') {
providerLabel = 'Codex'
} else if (isAimlapiBaseUrl(request.baseUrl)) {
providerLabel = AIMLAPI_LABEL
} else if (isLocalProviderUrl(request.baseUrl)) {
providerLabel = getLocalOpenAICompatibleProviderLabel(request.baseUrl)
}
18 changes: 18 additions & 0 deletions src/components/ProviderManager.test.tsx
@@ -105,6 +105,7 @@ async function waitForCondition(
// Order matches ProviderManager.renderPresetSelection() when
// canUseCodexOAuth === true (default in mocked tests).
const PRESET_ORDER = [
'AI/ML API',
'Alibaba Coding Plan',
'Alibaba Coding Plan (China)',
'Anthropic',
Expand Down Expand Up @@ -437,6 +438,23 @@ test('ProviderManager resolves GitHub virtual provider from async storage withou
expect(asyncRead).toHaveBeenCalled()
})

test('ProviderManager preset picker includes AI/ML API', async () => {
mockProviderManagerDependencies(
() => undefined,
async () => undefined,
)

const nonce = `${Date.now()}-${Math.random()}`
const { ProviderManager } = await import(`./ProviderManager.js?ts=${nonce}`)
const output = await renderProviderManagerFrame(ProviderManager, {
mode: 'first-run',
waitForOutput: frame =>
frame.includes('Set up provider') && frame.includes('AI/ML API'),
})

expect(output).toContain('AI/ML API')
})

test('ProviderManager avoids first-frame false negative while stored-token lookup is pending', async () => {
delete process.env.CLAUDE_CODE_USE_GITHUB
delete process.env.GITHUB_TOKEN
4 changes: 4 additions & 0 deletions src/components/ProviderManager.tsx
@@ -42,6 +42,9 @@ import {
type AtomicChatReadiness,
type OllamaGenerationReadiness,
} from '../utils/providerDiscovery.js'
import {
AIMLAPI_PROVIDER_PRESET_OPTION,
} from '../providers/aimlapi/index.js'
import {
rankOllamaModels,
recommendOllamaModel,
@@ -1254,6 +1257,7 @@ export function ProviderManager({ mode, onDone }: Props): React.ReactNode {
// should always find known providers first. `Skip for now` (first-run
// only) comes last, after Custom.
const options = [
AIMLAPI_PROVIDER_PRESET_OPTION,
{
value: 'dashscope-intl',
label: 'Alibaba Coding Plan',
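The `AIMLAPI_PROVIDER_PRESET_OPTION` placed at the head of this list is imported from `src/providers/aimlapi/index.ts`, which the diff does not show. Based on the metadata asserted in this PR's tests, a minimal sketch of the constant could look like the following (the `AIMLAPI_PROVIDER_ID` value `'aimlapi'` is an assumption, not confirmed by the diff):

```typescript
// Sketch of the preset constants; only label and description are confirmed
// by the PR's tests, and the provider id value is assumed.
const AIMLAPI_PROVIDER_ID = 'aimlapi' // assumed value
const AIMLAPI_LABEL = 'AI/ML API'

const AIMLAPI_PROVIDER_PRESET_OPTION = {
  value: AIMLAPI_PROVIDER_ID,
  label: AIMLAPI_LABEL,
  description: 'AI/ML API OpenAI-compatible endpoint',
}
```

Listing it first keeps the preset picker alphabetical, matching the `PRESET_ORDER` expectation in `ProviderManager.test.tsx`.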
141 changes: 141 additions & 0 deletions src/providers/aimlapi/index.test.ts
@@ -0,0 +1,141 @@
import { describe, expect, test } from 'bun:test'
import {
AIMLAPI_ATTRIBUTION_HEADERS,
AIMLAPI_DEFAULT_BASE_URL,
AIMLAPI_DEFAULT_MODEL,
AIMLAPI_LABEL,
AIMLAPI_PROVIDER_ID,
AIMLAPI_PROVIDER_PRESET_OPTION,
getAimlapiApiKey,
getAimlapiAttributionHeaders,
getAimlapiOpenAICompatibleApiKey,
getAimlapiPresetDefaults,
hasAimlapiApiKey,
isAimlapiBaseUrl,
mapAimlapiModelCatalog,
syncAimlapiOpenAIEnv,
} from './index.js'

describe('AI/ML API provider module', () => {
test('detects only api.aimlapi.com base URLs', () => {
expect(isAimlapiBaseUrl('https://api.aimlapi.com/v1')).toBe(true)
expect(isAimlapiBaseUrl('https://api.aimlapi.com/v1/')).toBe(true)
expect(isAimlapiBaseUrl('https://example.com/api.aimlapi.com/v1')).toBe(false)
expect(isAimlapiBaseUrl('not a url')).toBe(false)
expect(isAimlapiBaseUrl(undefined)).toBe(false)
})

test('exposes preset defaults and picker option metadata', () => {
expect(getAimlapiPresetDefaults({
AIMLAPI_API_KEY: 'aiml-key',
OPENAI_API_KEY: 'openai-key',
})).toEqual({
provider: 'openai',
name: AIMLAPI_LABEL,
baseUrl: AIMLAPI_DEFAULT_BASE_URL,
model: AIMLAPI_DEFAULT_MODEL,
apiKey: 'aiml-key',
requiresApiKey: true,
})

expect(AIMLAPI_PROVIDER_PRESET_OPTION).toEqual({
value: AIMLAPI_PROVIDER_ID,
label: AIMLAPI_LABEL,
description: 'AI/ML API OpenAI-compatible endpoint',
})
})

test('resolves provider-specific auth without affecting other base URLs', () => {
const env = {
AIMLAPI_API_KEY: 'aiml-key',
OPENAI_API_KEY: 'openai-key',
}

expect(getAimlapiApiKey(env)).toBe('aiml-key')
expect(getAimlapiApiKey({ OPENAI_API_KEY: 'openai-key' })).toBe('openai-key')
expect(getAimlapiOpenAICompatibleApiKey(
'https://api.aimlapi.com/v1',
env,
)).toBe('aiml-key')
expect(getAimlapiOpenAICompatibleApiKey(
'https://openrouter.ai/api/v1',
env,
)).toBeUndefined()
expect(hasAimlapiApiKey('https://api.aimlapi.com/v1', env)).toBe(true)
})

test('adds attribution headers only for AI/ML API', () => {
expect(getAimlapiAttributionHeaders('https://api.aimlapi.com/v1')).toEqual(
AIMLAPI_ATTRIBUTION_HEADERS,
)
expect(getAimlapiAttributionHeaders('https://api.openai.com/v1')).toEqual({})
})

test('syncs AIMLAPI_API_KEY into OPENAI_API_KEY only for AI/ML API env', () => {
const aimlEnv = {
OPENAI_BASE_URL: 'https://api.aimlapi.com/v1',
AIMLAPI_API_KEY: 'aiml-key',
}
syncAimlapiOpenAIEnv(aimlEnv)
expect(aimlEnv.OPENAI_API_KEY).toBe('aiml-key')

const existingOpenAIEnv = {
OPENAI_BASE_URL: 'https://api.aimlapi.com/v1',
AIMLAPI_API_KEY: 'aiml-key',
OPENAI_API_KEY: 'openai-key',
}
syncAimlapiOpenAIEnv(existingOpenAIEnv)
expect(existingOpenAIEnv.OPENAI_API_KEY).toBe('openai-key')

const otherEnv = {
OPENAI_BASE_URL: 'https://api.openai.com/v1',
AIMLAPI_API_KEY: 'aiml-key',
}
syncAimlapiOpenAIEnv(otherEnv)
expect(otherEnv.OPENAI_API_KEY).toBeUndefined()
})

test('maps chat-completions models with metadata and deduplication', () => {
expect(mapAimlapiModelCatalog({
data: [
{
id: 'gpt-4o',
type: 'openai/chat-completions',
info: {
name: 'GPT 4o',
developer: 'OpenAI',
contextLength: 128000,
},
},
{
id: 'gpt-4o',
type: 'openai/chat-completions',
info: { name: 'Duplicate GPT 4o' },
},
{
id: 'image-model',
type: 'openai/images',
},
{
id: 'deepseek-chat',
type: 'openai/chat-completions',
info: {
name: 'DeepSeek Chat',
developer: 'DeepSeek',
},
},
],
})).toEqual([
{
value: 'gpt-4o',
label: 'GPT 4o',
description: 'OpenAI - 128000 context',
},
{
value: 'deepseek-chat',
label: 'DeepSeek Chat',
description: 'DeepSeek',
},
])
})
})
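The implementation file these tests import is not included in the diff. A minimal sketch of two of the helpers, written only to satisfy the behavior the tests above assert (host-exact base-URL detection and one-way key sync), might look like:

```typescript
// Hypothetical sketch, not the PR's actual implementation, of two helpers
// exercised by the tests above.

// True only when the URL parses and its host is exactly api.aimlapi.com,
// so "https://example.com/api.aimlapi.com/v1" and non-URLs are rejected.
function isAimlapiBaseUrl(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false
  try {
    return new URL(baseUrl).hostname === 'api.aimlapi.com'
  } catch {
    return false
  }
}

// Copies AIMLAPI_API_KEY into OPENAI_API_KEY, but only when the base URL
// points at AI/ML API and no OPENAI_API_KEY is already set.
function syncAimlapiOpenAIEnv(env: Record<string, string | undefined>): void {
  if (!isAimlapiBaseUrl(env.OPENAI_BASE_URL)) return
  if (env.OPENAI_API_KEY || !env.AIMLAPI_API_KEY) return
  env.OPENAI_API_KEY = env.AIMLAPI_API_KEY
}
```

Parsing with `URL` rather than substring matching is one way to get the exact-host behavior the first test demands; the real module may differ.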