Model
abliterated-model
Model ID: abliterated-model. Use this exact value in the model field of your requests.
Benchmark scores and refusal rate for the model are listed below.
Quick start
Base URL
Example request
```json
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Summarize this report in one paragraph." }
  ],
  "temperature": 0.2
}
```
Service notes
- Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format; switching the base URL is typically the only change existing client code needs.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
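Because the API is OpenAI-compatible, a request can be assembled with nothing beyond the standard library. A minimal sketch, assuming the base URL below is a placeholder (substitute the one from your dashboard) and the key lives in an API_KEY environment variable:

```python
import json
import os
import urllib.request

# Assumption: placeholder base URL -- replace with the real one from your account.
BASE_URL = "https://api.example.com/v1"
API_KEY = os.environ.get("API_KEY", "sk-...")

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST to /chat/completions."""
    payload = {
        "model": "abliterated-model",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",  # Bearer auth, per the errors section
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize this report in one paragraph.")
# Send with urllib.request.urlopen(req) once BASE_URL and API_KEY are real.
```

Note the path: the base URL ends in /v1 and the endpoint path /chat/completions is appended to it, which is the most common source of 404s.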
Pricing at a glance
Effective price: ~$5 per 1M tokens (based on $10 for ~2M tokens).
Billing uses total tokens (input + output). Image inputs are metered as token equivalents and count as input tokens.
Subscription credits reset monthly and do not roll over. Prepaid credits are one-time and do not expire.
See the pricing page for current plans.
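The billing rule above (total tokens times a flat rate) makes cost estimation a one-liner. A sketch using the approximate ~$5 per 1M token figure; check the pricing page for the exact current rate:

```python
# Approximate rate from the pricing section (~$5 per 1M tokens); verify before budgeting.
PRICE_PER_TOKEN = 5.00 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate dollar cost for one request: billing uses input + output tokens."""
    return (input_tokens + output_tokens) * PRICE_PER_TOKEN

# e.g. a 120k-token prompt with a 30k-token response is 150k billed tokens:
cost = estimate_cost(120_000, 30_000)  # -> 0.75 dollars
```

Remember that image inputs are metered as token equivalents and land on the input side of this sum.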
Model specs
Latest published benchmark scores and refusal rate for abliterated-model.
| Metric | Score |
|---|---|
| mmlu_pro | 82.1 |
| gpqa | 73.1 |
| aime_2025 | 83.7 |
| mmmu_pro | 68.1 |
| refusals (mlabonne/harmful_behaviors) | 3/100 |
Common errors & fixes
- 401 Unauthorized: Confirm your API key is set and sent in the Authorization header as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.