Using OpenClaw with abliteration.ai
Use abliteration.ai as a custom Anthropic-compatible provider in OpenClaw: add a `models.providers` entry, set `baseUrl` and `apiKey`, and select `abliterated-model`.
This guide shows how to use OpenClaw with abliteration.ai as a custom Anthropic-compatible model provider. OpenClaw is an open-source personal AI assistant that runs on macOS, Linux, and Windows (via WSL2). Its model layer reads custom provider definitions from `~/.openclaw/openclaw.json`, so you can point its agent at any Anthropic- or OpenAI-compatible gateway.
abliteration.ai exposes the Anthropic Messages surface at `/v1/messages` and `/v1/messages/count_tokens`, which matches OpenClaw's `anthropic-messages` adapter one-for-one. Setup is entirely declarative: add one provider block under `models.providers`, register `abliterated-model` in its catalog, and set `agents.defaults.model.primary`.
Use `openclaw onboard` for guided setup, or edit `~/.openclaw/openclaw.json` (JSON5) directly using the provider block below.
Quick start
```json5
// ~/.openclaw/openclaw.json
{
  models: {
    mode: "merge",
    providers: {
      abliteration: {
        baseUrl: "https://api.abliteration.ai/v1",
        apiKey: "${ABLITERATION_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "abliterated-model",
            name: "Abliterated Model",
            reasoning: false,
            input: ["text"],
            contextWindow: 128000,
            maxTokens: 8192,
            cost: {
              input: 0.005,
              output: 0.005,
            },
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: {
        primary: "abliteration/abliterated-model",
      },
    },
  },
}
```

Service notes
- Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style `/v1/chat/completions` request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any `Retry-After` header.
Prerequisites
Before you start, make sure the basics are in place.

- An abliteration.ai account with credits. Purchase credits here.
- An API key (`ak_...`) from your abliteration.ai dashboard.
- OpenClaw installed and the `openclaw` CLI available in your shell. See the OpenClaw repo for platform-specific install steps.
- A working config directory at `~/.openclaw/`. Running `openclaw onboard` once creates it if it does not exist.
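It can help to fail fast when the key is missing or malformed before launching anything. A minimal sketch; the `check_api_key` helper is hypothetical, not an OpenClaw or abliteration.ai command:

```python
import os

def check_api_key(env=None) -> str:
    """Return the abliteration.ai key, or raise with an actionable message."""
    env = os.environ if env is None else env
    key = env.get("ABLITERATION_API_KEY", "")
    if not key:
        raise RuntimeError("ABLITERATION_API_KEY is not exported in this shell")
    if not key.startswith("ak_"):
        raise RuntimeError("ABLITERATION_API_KEY should start with ak_")
    return key
```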
Provider settings
OpenClaw's model layer reads custom providers from `models.providers.<name>` inside `~/.openclaw/openclaw.json`. For abliteration.ai, use the `anthropic-messages` adapter so OpenClaw talks to `/v1/messages` and `/v1/messages/count_tokens`.
| Setting | Value | Purpose |
|---|---|---|
| `models.providers.abliteration.baseUrl` | `https://api.abliteration.ai/v1` | Targets the public abliteration.ai API under `/v1`. OpenClaw appends `/messages` and `/messages/count_tokens`. |
| `models.providers.abliteration.apiKey` | `${ABLITERATION_API_KEY}` | Sends your `ak_` API key as the bearer token. The `${VAR}` syntax reads the value from your shell at runtime. |
| `models.providers.abliteration.api` | `anthropic-messages` | Tells OpenClaw to speak the Anthropic Messages protocol to this provider. |
| `models.providers.abliteration.models[].id` | `abliterated-model` | The model id published by abliteration.ai. Must match exactly. |
| `models.mode` | `merge` | Keeps OpenClaw's built-in providers available alongside your custom provider. Use `replace` if you want to expose only abliteration.ai. |
| `agents.defaults.model.primary` | `abliteration/abliterated-model` | Selects abliteration.ai as the default model for new agent runs. |
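The `${VAR}` syntax behaves like ordinary environment substitution. The sketch below only illustrates that behavior; it is not OpenClaw's actual implementation, and OpenClaw's real expansion rules may differ:

```python
import os
import re

_VAR = re.compile(r"\$\{(\w+)\}")

def expand(value: str, env=None) -> str:
    """Replace ${VAR} placeholders with values from the environment."""
    env = os.environ if env is None else env

    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in env:
            raise KeyError(f"{name} is not set; export it before launching")
        return env[name]

    return _VAR.sub(sub, value)
```

With `ABLITERATION_API_KEY` exported, `expand("${ABLITERATION_API_KEY}")` yields the raw `ak_...` key at runtime, so the key never has to be written into the config file.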
Configure openclaw.json
OpenClaw reads its primary config from `~/.openclaw/openclaw.json` (legacy path `~/.clawdbot/clawdbot.json` is still symlinked on upgraded installs). The file is JSON5, so comments, unquoted keys, and trailing commas are allowed.
Export your API key first so the `${ABLITERATION_API_KEY}` interpolation resolves at runtime.
```shell
# Put this in ~/.bashrc, ~/.zshrc, or ~/.profile
export ABLITERATION_API_KEY="ak_YOUR_API_KEY"

# Reload your shell
source ~/.zshrc  # or: source ~/.bashrc

# Open the config in your editor
${EDITOR:-vi} ~/.openclaw/openclaw.json
```

Provider block
Paste this provider block into `~/.openclaw/openclaw.json`. The `merge` mode keeps OpenClaw's built-in providers intact so you can still switch to other models from `openclaw onboard`.
```json5
{
  models: {
    mode: "merge",
    providers: {
      abliteration: {
        baseUrl: "https://api.abliteration.ai/v1",
        apiKey: "${ABLITERATION_API_KEY}",
        api: "anthropic-messages",
        models: [
          {
            id: "abliterated-model",
            name: "Abliterated Model",
            reasoning: false,
            input: ["text"],
            contextWindow: 128000,
            maxTokens: 8192,
            cost: {
              input: 0.005,
              output: 0.005,
            },
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: {
        primary: "abliteration/abliterated-model",
        fallbacks: [],
      },
    },
  },
}
```

Validate the config
OpenClaw can dump its live JSON schema and validate the current config before you launch an agent.
```shell
# Print the schema OpenClaw uses to validate the config
openclaw config schema | less

# Show the resolved (merged) config that OpenClaw will use
openclaw config show

# Quick connectivity test against /v1/messages
curl -s https://api.abliteration.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ABLITERATION_API_KEY" \
  -d '{
    "model": "abliterated-model",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "ping"}]
  }' | python3 -m json.tool
```

Run the agent
Once the provider is in place and `agents.defaults.model.primary` is set, OpenClaw's agent uses abliteration.ai automatically. The `agent` command does not accept a positional prompt or a per-run `--model` flag; pass the prompt with `--message` and select an agent profile with `--agent`. The reserved default agent id is `main`.
To switch models, edit `agents.defaults.model.primary` in `~/.openclaw/openclaw.json`, or create a dedicated agent profile with `openclaw agents add` that pins its own model.
```shell
# List available agents (main is the reserved default id)
openclaw agents list

# Send a one-off message to the default agent using the primary model
openclaw agent --agent main --message "summarize this directory"

# Raise the reasoning effort for a single run
openclaw agent --agent main --thinking high --message "refactor foo.py for readability"

# Emit structured JSON instead of the default formatted output
openclaw agent --agent main --json --message "list the files in this repo"

# Bypass the gateway and run the embedded agent directly
openclaw agent --agent main --local --message "ping"
```
How it works
OpenClaw's `anthropic-messages` adapter is a drop-in client for Anthropic-compatible gateways. The custom provider you defined has the same contract as OpenClaw's built-in Anthropic support.
- Inference path: `POST /v1/messages`.
- Token counting: `POST /v1/messages/count_tokens`.
- Auth: `Authorization: Bearer ak_...` sourced from `${ABLITERATION_API_KEY}`.
- Streaming: Anthropic-style SSE, which is the default for agent loops and interactive turns.
- Tool calling: native Messages API tool use, including multi-turn agentic tool call chains.
- Rate limits: the normal abliteration.ai limits apply (120 req/min for API key callers).
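A minimal sketch of consuming the Anthropic-style SSE stream mentioned above, assuming the standard Messages streaming event shapes (`content_block_delta` events carrying `text_delta` payloads); other event types and error handling are omitted:

```python
import json

def collect_text(sse_lines) -> str:
    """Accumulate assistant text from Anthropic-style SSE data lines."""
    out = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip "event: ..." lines and keep-alives
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                out.append(delta.get("text", ""))
    return "".join(out)
```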
Alternative: OpenAI-compatible wire format
If you prefer to route OpenClaw through the OpenAI Chat Completions surface, abliteration.ai also exposes `/v1/chat/completions`. Use `api: "openai-completions"` instead. This path is useful if another part of your stack already standardizes on OpenAI-style payloads.
Avoid `api: "openai-responses"` for custom providers on current OpenClaw releases; it has known issues for user-defined providers (see openclaw/openclaw#43018). Prefer the `anthropic-messages` config at the top of this page.
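For comparison, here is the same request and response handled on both surfaces. The payload and response shapes follow the public Anthropic Messages and OpenAI Chat Completions schemas; this is a sketch, not abliteration.ai client code:

```python
def extract_text(response: dict) -> str:
    """Pull assistant text out of either response shape."""
    if "content" in response:
        # Anthropic Messages: "content" is a list of typed blocks.
        return "".join(
            block.get("text", "")
            for block in response["content"]
            if block.get("type") == "text"
        )
    # OpenAI Chat Completions: one message per choice.
    return response["choices"][0]["message"]["content"]

# Request bodies look nearly identical, but the Messages API requires
# max_tokens, while Chat Completions treats it as optional.
anthropic_body = {  # POST /v1/messages
    "model": "abliterated-model",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "ping"}],
}
openai_body = {  # POST /v1/chat/completions
    "model": "abliterated-model",
    "messages": [{"role": "user", "content": "ping"}],
}
```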
```json5
{
  models: {
    mode: "merge",
    providers: {
      "abliteration-openai": {
        baseUrl: "https://api.abliteration.ai/v1",
        apiKey: "${ABLITERATION_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "abliterated-model",
            name: "Abliterated Model (OpenAI wire)",
            input: ["text"],
            contextWindow: 128000,
            maxTokens: 8192,
            cost: { input: 0.005, output: 0.005 },
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: {
        primary: "abliteration-openai/abliterated-model",
      },
    },
  },
}
```

Troubleshooting
Most setup issues come down to config resolution, env var interpolation, or model id mismatches.
- `error: unknown option '--model'`: `openclaw agent` has no `--model` flag. Model selection is config-driven: change `agents.defaults.model.primary` in `~/.openclaw/openclaw.json`, or use `openclaw agents add` to pin a model on a specific agent profile. The supported flags are `--agent`, `--message`, `--to`, `--session-id`, `--thinking`, `--channel`, `--reply-to`, `--reply-channel`, `--reply-account`, `--local`, `--deliver`, `--timeout`, `--json`, and `--verbose`.
- 401 Unauthorized: verify `ABLITERATION_API_KEY` is exported in the same shell that launched OpenClaw, and that it starts with `ak_`. Run `echo $ABLITERATION_API_KEY` to confirm.
- Requests going to api.anthropic.com instead of abliteration.ai: the `api` field is probably missing, which causes OpenClaw to silently fall back to the built-in Anthropic provider. Make sure `api: "anthropic-messages"` is set on your provider (see openclaw/openclaw#23332).
- Model not found: the `id` inside `models[]` must be exactly `abliterated-model`, and `agents.defaults.model.primary` must use the `<provider>/<model-id>` form (e.g. `abliteration/abliterated-model`).
- Connection refused / wrong base URL: `baseUrl` must be exactly `https://api.abliteration.ai/v1`. Do not omit `/v1` and do not append `/messages` yourself.
- Built-in provider wins over your custom one: a provider named `anthropic` can be ignored on some OpenClaw releases because the built-in provider has priority (see openclaw/openclaw#56679). Use a different name such as `abliteration`.
- 402 Insufficient Credits: top up from the pricing page. OpenClaw agent loops can spend credits quickly.
- 429 Rate limited: back off using the `Retry-After` header, or reduce parallel sub-agents.
- Schema validation errors: run `openclaw config schema` to inspect the live schema, then `openclaw config show` to see how OpenClaw merged your file with its defaults.
Common errors & fixes
- `error: unknown option '--model'`: `openclaw agent` does not accept `--model`. Edit `agents.defaults.model.primary` in `~/.openclaw/openclaw.json`, or pin a model on an agent profile with `openclaw agents add`, then run `openclaw agent --agent main --message "..."`.
- 401 Unauthorized: Export `ABLITERATION_API_KEY` before launching OpenClaw and confirm the value is a real `ak_` API key from your dashboard.
- Requests silently hit api.anthropic.com: Add `api: "anthropic-messages"` to your provider block so OpenClaw does not fall back to the built-in Anthropic provider.
- Model not found: Register `abliterated-model` in `models[]` and set `agents.defaults.model.primary` to `abliteration/abliterated-model`.
- Connection refused / wrong base URL: Set `baseUrl` to `https://api.abliteration.ai/v1` exactly. Do not remove `/v1` and do not append `/messages` manually.
- Built-in provider overrides your custom block: Rename the provider (for example to `abliteration`) so the built-in Anthropic provider does not take precedence.
- 402 Insufficient Credits: Purchase credits from the pricing page. OpenClaw agent loops can consume credits quickly.
- 429 Rate Limit: OpenClaw sends many requests during agent runs. Respect the `Retry-After` period or reduce parallel sub-agents.