# How to Integrate CCAPI LLM into OpenClaw
This guide walks you through connecting OpenClaw to CCAPI step-by-step. CCAPI is a fully OpenAI-compatible API gateway — OpenClaw treats it as a standard OpenAI endpoint, so no code changes are needed.
## Prerequisites

- OpenClaw installed — any version that supports custom LLM providers
## Step 1: Get Your CCAPI API Key

1. Log in to the CCAPI dashboard.
2. Navigate to the API Keys page (left sidebar > API Keys).
3. Click Create New Key and give it a name like openclaw.
4. Copy the key immediately — it is only shown once. The key starts with sk-.

New accounts automatically receive $0.50 of free credit, enough for extensive testing.
## Step 2: Choose a Model

CCAPI uses the provider/model-id format for all model IDs. Pick a model based on your needs.

### Budget Models (Best Cost-to-Quality)
| Model ID | Context | Input / Output (per 1M tokens) | Highlights |
|---|---|---|---|
| deepseek/deepseek-chat | 128K | $0.27 / $1.10 | DeepSeek V3. Best overall budget choice for OpenClaw. Strong coding, reasoning, and general tasks. |
| minimax/MiniMax-M2.5 | 1M | $0.21 / $0.84 | Cheapest coding model. SWE-bench 80.2%. 1M context window. |
| openai/gpt-5-nano | 1M | $0.035 / $0.28 | Ultra-low cost. Good for simple tasks, summaries, and classification. |
| google/gemini-2.5-flash | 1M | $0.105 / $0.42 | Fast, cheap, 1M context. Good for large codebase analysis. |
### Mid-Tier Models (Quality + Affordability)

| Model ID | Context | Input / Output (per 1M tokens) | Highlights |
|---|---|---|---|
| openai/gpt-5-mini | 1M | $0.175 / $1.40 | GPT-5 family, fast and affordable. Good all-rounder. |
| deepseek/deepseek-reasoner | 128K | $0.55 / $2.19 | DeepSeek R1. Strong chain-of-thought reasoning. |
| z-ai/glm-5 | 128K | $0.40 / $1.80 | Zhipu GLM-5. Excellent for Chinese-language tasks. |
| google/gemini-2.5-pro | 1M | $0.875 / $7.00 | Google's best. 1M context, strong multimodal. |
### Premium Models (Maximum Quality)

| Model ID | Context | Input / Output (per 1M tokens) | Highlights |
|---|---|---|---|
| openai/gpt-5.2 | 256K | $1.225 / $9.80 | Latest GPT-5.2. Strongest OpenAI model. |
| openai/gpt-4o | 128K | $1.75 / $7.00 | GPT-4o multimodal. Proven quality for complex tasks. |
| openai/gpt-4.1 | 1M | $1.40 / $5.60 | GPT-4.1 with 1M context. |
### Vision-Capable Models (for Screenshot Analysis)

If you use OpenClaw's screenshot/vision features, you need a vision-capable model:

| Model ID | Input / 1M | Notes |
|---|---|---|
| google/gemini-2.5-flash | $0.105 | Cheapest vision option |
| openai/gpt-5-mini | $0.175 | Best vision value |
| google/gemini-2.5-pro | $0.875 | Best Google vision |
| openai/gpt-4o | $1.75 | Best overall vision quality |
| openai/gpt-5.2 | $1.225 | Latest GPT-5 vision |
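For image input, a vision request can be sketched with the standard OpenAI-style content-parts format — this is an assumption based on CCAPI being OpenAI-compatible; the key and the base64 payload are placeholders:

```shell
# Hedged sketch: send a screenshot to a vision-capable model.
# sk-your-ccapi-api-key and <BASE64_PNG> are placeholders.
curl https://api.ccapi.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-your-ccapi-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.5-flash",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this screenshot."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<BASE64_PNG>"}}
      ]
    }]
  }'
```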
## Step 3: Connect OpenClaw to CCAPI

There are three ways to connect OpenClaw to CCAPI. Choose the one that fits your workflow.

### Method A: Environment Variables (Simplest)
The fastest way. Add these to your shell profile (~/.bashrc, ~/.zshrc) or OpenClaw's .env file:

| Variable | Required | Description |
|---|---|---|
| OPENCLAW_BASE_URL | Yes | Must be https://api.ccapi.ai/api/v1. This tells OpenClaw to send requests to CCAPI instead of OpenAI. |
| OPENCLAW_API_KEY | Yes | Your CCAPI API key (starts with sk-). Get it from Dashboard > API Keys. |
| OPENCLAW_MODEL | Yes | Default model ID in provider/model-id format. Example: deepseek/deepseek-chat. |
| OPENCLAW_FALLBACK_MODEL | No | Backup model used when the primary model fails or is overloaded. Example: openai/gpt-5-mini. |
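Put together, the variables look like this in a shell profile (the key value is a placeholder):

```shell
# CCAPI connection for OpenClaw — add to ~/.bashrc or ~/.zshrc.
# The API key below is a placeholder; use your real sk- key.
export OPENCLAW_BASE_URL="https://api.ccapi.ai/api/v1"
export OPENCLAW_API_KEY="sk-your-ccapi-api-key"
export OPENCLAW_MODEL="deepseek/deepseek-chat"
export OPENCLAW_FALLBACK_MODEL="openai/gpt-5-mini"   # optional backup model
```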
After setting the variables, restart your terminal and OpenClaw.

### Method B: openclaw.json Config File (Most Flexible)
For multi-model setups and advanced configuration, edit ~/.openclaw/openclaw.json:

```json
{
  "models": {
    "providers": {
      "ccapi": {
        "baseUrl": "https://api.ccapi.ai/api/v1",
        "apiKey": "$CCAPI_API_KEY",
        "api": "openai-completions",
        "models": [
          "deepseek/deepseek-chat",
          "deepseek/deepseek-reasoner",
          "openai/gpt-5-mini",
          "openai/gpt-5-nano",
          "openai/gpt-4o",
          "google/gemini-2.5-flash",
          "google/gemini-2.5-pro",
          "z-ai/glm-5",
          "minimax/MiniMax-M2.5"
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ccapi/deepseek/deepseek-chat"
      }
    }
  }
}
```
| Field | Description |
|---|---|
| models.providers.ccapi | Registers CCAPI as a custom provider named "ccapi". You can use any name. |
| baseUrl | CCAPI endpoint. Always https://api.ccapi.ai/api/v1. |
| apiKey | Your CCAPI API key. Use $CCAPI_API_KEY to reference an environment variable, or paste the key directly (less secure). |
| api | Must be "openai-completions". Tells OpenClaw to use the OpenAI chat completions protocol. |
| models | Array of model IDs available through this provider. Only models listed here will appear in OpenClaw's model selector. |
| agents.defaults.model.primary | Default model for new agent sessions. Format: provider-name/model-id, where provider-name matches the key under models.providers (in this case, ccapi). |
The apiKey field supports env var syntax: "$CCAPI_API_KEY" reads from the CCAPI_API_KEY environment variable. This is more secure than hardcoding the key.
The models array determines which models OpenClaw offers you. You can list as many or as few as you want.
The primary model uses the format ccapi/deepseek/deepseek-chat — the first segment (ccapi) is the provider name you defined under models.providers, followed by the CCAPI model ID.
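If you keep the $CCAPI_API_KEY reference, export that variable in the shell that launches OpenClaw; a minimal sketch (the key value is a placeholder):

```shell
# Export the key that "apiKey": "$CCAPI_API_KEY" in openclaw.json refers to.
# Placeholder value — substitute your real sk- key.
export CCAPI_API_KEY="sk-your-ccapi-api-key"
```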
### Method C: CLI Commands (Quick Setup)

Use OpenClaw's built-in CLI to configure the provider without editing files, then verify that the config was applied.
## Step 4: Verify the Integration

### 4.1 Test the API Key Directly

Before testing in OpenClaw, verify that your CCAPI key works by calling the chat completions endpoint directly. A successful call returns a JSON response with choices[0].message.content. If you see an error instead, check that:

- The API key is correct (starts with sk-)
- The model ID uses the provider/model-id format
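The direct check can be run as a single curl call against the endpoint described in this guide (sk-your-ccapi-api-key is a placeholder):

```shell
# Minimal non-streaming chat completion against CCAPI.
# Replace the placeholder key with your real sk- key.
curl https://api.ccapi.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-your-ccapi-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-chat",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'
```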
### 4.2 Test Streaming
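The streaming variant is the same request with "stream": true; curl's -N flag disables output buffering so events print as they arrive (placeholder key again):

```shell
# Streaming chat completion — SSE events print line by line.
# Replace the placeholder key with your real sk- key.
curl -N https://api.ccapi.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-your-ccapi-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-chat",
    "messages": [{"role": "user", "content": "Count from 1 to 5."}],
    "stream": true
  }'
```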
You should see SSE events like data: {"choices":[{"delta":{"content":"1"}}]} streaming in.

### 4.3 Test in OpenClaw
Start OpenClaw and give it a simple task. If OpenClaw responds normally, the integration is working. You can check which model was used in OpenClaw's logs or in the CCAPI Usage Dashboard.

### 4.4 Test Tool Calling

OpenClaw relies heavily on tool/function calling (shell commands, file operations, web browsing). Verify it with a prompt like:

List the files in the current directory.

OpenClaw should call a shell tool to run ls and return the output. All CCAPI text models support function calling.

### 4.5 Check Your Usage
The CCAPI Usage Dashboard shows:

- Per-request token usage and cost
- Daily/monthly spending breakdown
- Model-by-model cost analysis
## Step 5: Optimize Your Setup

### Recommended Configurations by Use Case
- Daily automation (~$3/month): file management, web browsing, shell scripts, everyday tasks.
- Coding assistant (~$8/month): code review, debugging, refactoring. MiniMax M2.5 scores 80.2% on SWE-bench.
- Quality-first (~$15/month): complex reasoning, multi-step tasks, screenshot analysis.
- Maximum quality: large codebase analysis, advanced reasoning.
- Chinese-language work: Chinese documentation, code with Chinese comments, Chinese web browsing.

### Switching Models On-the-Fly
With the openclaw.json method (Method B), you can list multiple models and switch between them during a session using OpenClaw's model selector — no restart needed.
## Cost Comparison

Estimated monthly cost at 50M tokens/day (typical heavy OpenClaw usage):

| Setup | Model | Est. Monthly |
|---|---|---|
| Direct Anthropic | Claude Opus | ~$2,250 |
| Direct OpenAI | GPT-4o | ~$750 |
| CCAPI | DeepSeek V3 | ~$20 |
| CCAPI | MiniMax M2.5 | ~$16 |
| CCAPI | GPT-5 Nano | ~$5 |
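Per-request cost follows directly from the per-1M-token prices. For example, at deepseek/deepseek-chat's listed rates ($0.27 input, $1.10 output), a quick awk sketch with illustrative token counts:

```shell
# Estimate request cost from per-1M-token prices.
# Example: 1M input + 1M output tokens on deepseek/deepseek-chat.
in_tokens=1000000
out_tokens=1000000
awk -v i="$in_tokens" -v o="$out_tokens" -v ip=0.27 -v op=1.10 \
  'BEGIN { printf "$%.2f\n", (i / 1e6) * ip + (o / 1e6) * op }'
# prints $1.37
```

Scale the same arithmetic by your own daily token volume to sanity-check a monthly estimate.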
All CCAPI prices are final — no surcharge, no markup. Pay-as-you-go, no subscription.
## Troubleshooting

| Problem | Cause | Solution |
|---|---|---|
| Invalid API key | API key is incorrect or expired | Verify the key at Dashboard > API Keys. Regenerate if needed. |
| Insufficient balance | Account balance is $0 | Top up at Dashboard > Billing. |
| Model not found | Wrong model ID format | Use the provider/model-id format. Run curl https://api.ccapi.ai/api/v1/models -H 'Authorization: Bearer sk-...' to list all available models. |
| OpenClaw still uses OpenAI | OPENCLAW_BASE_URL not set or overridden | Verify with echo $OPENCLAW_BASE_URL. Must be https://api.ccapi.ai/api/v1. Check for conflicting .env files. |
| Tool calls fail | Model doesn't support function calling | All CCAPI text models support function calling. Check that the model ID is correct. |
| Streaming doesn't work | Network proxy or firewall blocking SSE | Test with curl -N (streaming). Check proxy settings. |
| Slow responses | Model overloaded or network latency | Try a different model (e.g., gemini-2.5-flash is very fast). Check status.ccapi.ai. |
| Chinese models not available | Using the wrong provider prefix | Chinese models: z-ai/glm-5, minimax/MiniMax-M2.5, moonshot/moonshot-v1-128k, qwen/qwen-plus-latest. No special setup needed. |
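The model-listing check mentioned above, written out in full (the key is a placeholder):

```shell
# List every model ID available to your key.
# Replace the placeholder key with your real sk- key.
curl https://api.ccapi.ai/api/v1/models \
  -H "Authorization: Bearer sk-your-ccapi-api-key"
```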
## API Quick Reference

- Base URL: https://api.ccapi.ai/api/v1
- Auth: Authorization: Bearer sk-your-ccapi-api-key
- Endpoint: POST /api/v1/chat/completions
- Model ID format: provider/model-id (e.g., deepseek/deepseek-chat, openai/gpt-4o)
- Supported features: streaming, tool/function calling, vision (image input), JSON mode, system messages