Designed for control · No prompt/output retention · Telemetry for billing

Developer-controlled AI.
Zero Data Retention.

Chat directly with an unrestricted model or integrate via API. Full /v1/chat/completions parity. ~$5 per 1M tokens. No prompt/output retention by default.

Live Console

Try the model in real time

Stream responses, attach images, and inspect the request body.

Model ID: abliterated-model
Ephemeral Session
Welcome to abliteration.ai — ask me anything.
Free preview: 0/5 · Usage-based pricing

Key Features of the abliteration.ai Developer-Controlled LLM API (Uncensored Options)

Developer-controlled models (uncensored options)

Developer-controlled, less-censored models delivered via API. We do not apply provider-side refusal filters; you control outputs and policy enforcement. An uncensored model is available for teams that need it. You are responsible for keeping usage lawful in your jurisdiction; do not generate or distribute illegal content.

Model ID: abliterated-model
View model specs

OpenAI-Compatible /v1/chat/completions API

Full /v1/chat/completions parity. Works with most clients that speak this format—just point them at this base URL.
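
For example, most OpenAI SDKs only need a different base URL and key. The sketch below uses the official Python client with the https://api.abliteration.ai/v1 base URL implied by the curl example further down; treat it as an illustration rather than official sample code, and note the prompt is made up.

import os
from openai import OpenAI

# Point an existing OpenAI client at the abliteration.ai base URL.
client = OpenAI(
    base_url="https://api.abliteration.ai/v1",
    api_key=os.environ["ABLIT_KEY"],  # your abliteration.ai API key
)

resp = client.chat.completions.create(
    model="abliterated-model",
    messages=[{"role": "user", "content": "Summarize HTTP status code 429 in one sentence."}],
)
print(resp.choices[0].message.content)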

Image understanding (vision) for screenshots & documents

Send images alongside text prompts to extract information, summarize what’s on screen, and answer questions about photos, charts, and UI screenshots. Use the same /v1/chat/completions interface with multimodal message parts for a single, developer-friendly image understanding API.
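
As a rough sketch (Python client, same base URL and model ID as above; the screenshot URL is hypothetical), a vision request is an ordinary chat completion whose user message content is a list of text and image_url parts:

import os
from openai import OpenAI

client = OpenAI(base_url="https://api.abliteration.ai/v1", api_key=os.environ["ABLIT_KEY"])

# The user message content is a list of parts: text plus one or more image URLs.
resp = client.chat.completions.create(
    model="abliterated-model",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the error shown in this screenshot."},
            {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)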

Usage-based token pricing for LLMs

Effective price: ~$5 per 1M tokens. Billing uses total tokens (input + output), and image inputs count as input tokens. Subscription credits reset monthly; prepaid credits do not expire. Credits are just the billing unit; see API pricing.
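
As a back-of-the-envelope illustration (the token counts below are hypothetical; the rate is the ~$5 per 1M tokens quoted above):

# Hypothetical request: 1,200 input tokens and 800 output tokens.
input_tokens, output_tokens = 1200, 800
total_tokens = input_tokens + output_tokens        # billing uses input + output
cost_usd = total_tokens / 1_000_000 * 5            # ~$5 per 1M tokens
print(f"{total_tokens} tokens ≈ ${cost_usd:.3f}")  # 2000 tokens ≈ $0.010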

Zero Data Retention Policy

Default policy: no prompt/output retention. Payloads are processed transiently and never used for training. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.

Rate limits and retries

Requests are rate-limited per API key. If you exceed limits, you will receive a 429 response with a Retry-After header. Use backoff and retries, and upgrade for priority throughput when you need more capacity.
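
A minimal retry sketch in Python using the requests library, assuming only what is described above (a 429 status and an optional Retry-After header); the attempt count and fallback backoff schedule are illustrative:

import os
import time
import requests

URL = "https://api.abliteration.ai/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['ABLIT_KEY']}"}

def chat_with_retry(payload, max_attempts=5):
    for attempt in range(max_attempts):
        resp = requests.post(URL, headers=HEADERS, json=payload, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor Retry-After when present; otherwise back off exponentially.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else 2 ** attempt)
    raise RuntimeError("Still rate-limited after retries")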

Account Authentication and API Integration

Create Account or Sign In

Create an account to save credits and API keys. No phone verification required.

Email: used for sign up, receipts, and password resets.
Password: use at least 12 characters with upper/lowercase letters, a number, and a symbol.
Username: you can log in with either your email or this username.
Referral code: if you were referred, paste the code from your link before signing up.

API Integration & Code Examples

Generate keys for programmatic access.

curl https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLIT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "abliterated-model",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What’s in this image?"},
        {"type": "image_url", "image_url": {"url": "https://abliteration.ai/stonehenge.jpg"}}
      ]
    }],
    "stream": true
  }'
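
A Python equivalent of the curl call above, sketched with the OpenAI client to show one way of consuming the streamed chunks (the guard against empty choices is a defensive assumption, not a documented requirement):

import os
from openai import OpenAI

client = OpenAI(base_url="https://api.abliteration.ai/v1", api_key=os.environ["ABLIT_KEY"])

# stream=True yields incremental chunks; print each content delta as it arrives.
stream = client.chat.completions.create(
    model="abliterated-model",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://abliteration.ai/stonehenge.jpg"}},
        ],
    }],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()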
Browse API examples on GitHub

Support

Frequently Asked Questions

Quick answers to the most common billing and integration questions.

What is an uncensored LLM API?

An uncensored LLM API gives developers access to less-censored models without provider-side refusal filters. You control prompts, outputs, and policy enforcement; content must still comply with your local laws and policies.

Is abliteration.ai OpenAI-compatible?

Yes. We expose an OpenAI-compatible /v1/chat/completions endpoint with the same request/response shape, so most OpenAI clients work by changing the base URL and key.

Do you retain prompts or chat logs?

No. We do not retain prompts or outputs by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.

How does usage-based token pricing work?

You’re billed on total tokens processed (input + output). Image inputs are metered as token equivalents and count as input tokens. Effective pricing is ~$5 per 1M tokens; use monthly subscription credits or one-time prepaid credits (subscriptions reset monthly with no rollover).