Your AI.
Your rules.

Other AI providers decide what you can and can't ask. Abliteration gives you unrestricted LLM access with an enterprise Policy Gateway, so your organization controls the rules, not your vendor.

Live Console

Try the model in real time

Stream responses, attach images, and inspect the request body.

Model ID: abliterated-model
Ephemeral Session
Welcome to abliteration.ai — ask me anything.
Flagged categories
Block prompts that match selected categories.
Self-harm and sexual content involving minors are always blocked.

Competitive Advantage

How we compare

Comparisons reflect default public API behavior. abliteration.ai ships the control plane as part of the product.

Feature
abliteration.ai
OpenAI
Anthropic
Control over refusals
You decide
Provider-defined
Provider-defined
Governance layer
Built in
Build it yourself
Build it yourself
Enforcement actions
Built in
App logic
App logic
Audit exports
Ready to go
Custom build
Custom build
OpenAI client migration
Swap the base URL
Native
Compatibility layer

Get started in minutes

Step 1

Create your account

Sign up to manage credits, generate API keys, and access Policy Gateway controls.

Step 2

Generate an API key

Create a key in the Integration section to authenticate OpenAI-compatible /v1/chat/completions requests.

Step 3

Send your first request

Run the sample prompt or copy a snippet below. Free preview includes 5 requests.

Developer-controlled AI for synthetic data generation and enterprise governance

Policy Gateway: enterprise AI governance, your way

Replace provider refusals with your own rules

Policy Gateway is the control plane for your AI traffic. Define policy-as-code guardrails with rewrite, redact, escalate, or refuse outcomes. Manage per-project keys and quotas, test changes with shadow mode and canary rollouts, and export every decision to your SIEM.
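To make the four outcomes concrete, here is a minimal sketch of how policy-as-code rules might be evaluated. The rule fields, patterns, and matching logic are illustrative assumptions, not the actual Policy Gateway schema.

```python
# Hypothetical policy-as-code sketch: each rule maps a pattern to one of the
# gateway outcomes (rewrite, redact, escalate, refuse); unmatched traffic is allowed.
import re

POLICY = [
    {"match": r"\b\d{3}-\d{2}-\d{4}\b", "action": "redact"},    # SSN-like pattern
    {"match": r"(?i)wire transfer",     "action": "escalate"},  # route to a human
    {"match": r"(?i)forbidden topic",   "action": "refuse"},
]

def evaluate(prompt: str) -> dict:
    """Return the first matching rule's decision, else allow."""
    for rule in POLICY:
        if re.search(rule["match"], prompt):
            return {"decision": rule["action"], "rule": rule["match"]}
    return {"decision": "allow", "rule": None}
```

In the real gateway, shadow mode would log these decisions without enforcing them, so a rule set can be validated against live traffic before a canary rollout.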

Unrestricted LLM models for creative work

Developer-controlled, less-restricted models for creative writing and specialized workloads — free from provider-side refusal filters. Pair with Policy Gateway to enforce your own boundaries.

Model ID: abliterated-model

Synthetic data generation for training, fine-tuning, and evals

Generate synthetic data for training models with prompt-completion pairs, labeled examples, multi-turn conversations, and JSONL-ready outputs. Use the same API for dataset expansion, edge-case coverage, and policy-controlled generation workflows.
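A minimal sketch of producing JSONL-ready prompt-completion pairs, the format mentioned above. The records here are toy examples; in practice the completions would come from the model via the API.

```python
# Serialize prompt-completion pairs as JSONL: one JSON object per line,
# the common input format for fine-tuning and evaluation pipelines.
import json

pairs = [
    {"prompt": "Classify sentiment: 'great product'", "completion": "positive"},
    {"prompt": "Classify sentiment: 'broke on day one'", "completion": "negative"},
]

def to_jsonl(records: list) -> str:
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)
```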

OpenAI-compatible API and instant migration

Full /v1/chat/completions parity with a drop-in base URL swap. Patch existing OpenAI code in minutes with the migration tool.
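The base URL swap can be sketched with the standard library alone. This builds (but does not send) a request; the endpoint path and model ID are from this page, while the helper name and key handling are illustrative.

```python
# Point an OpenAI-style chat completion request at the abliteration.ai base URL.
import json
import urllib.request

BASE_URL = "https://api.abliteration.ai/v1"  # was: https://api.openai.com/v1

def build_chat_request(api_key: str, messages: list) -> urllib.request.Request:
    """Assemble an OpenAI-compatible /v1/chat/completions POST request."""
    body = json.dumps({"model": "abliterated-model", "messages": messages}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

With the official OpenAI client libraries, the equivalent change is passing the new base URL and key to the client constructor.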

LLM audit logging and compliance exports

Export policy decisions to Splunk, Datadog, Elastic, S3, and Azure Monitor with structured metadata for audits and investigations.

Image understanding (vision) for screenshots and documents

Send images alongside text prompts to extract information, summarize content, and answer questions with the same API format.

Usage-based token pricing

Effective price: ~$5 per 1M tokens on total input + output usage. Subscription credits reset monthly; prepaid credits do not expire.
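A quick cost estimator for the rate above. The ~$5/1M figure is the effective rate stated on this page; actual billing may differ by bundle.

```python
# Estimate cost from total tokens processed (input + output).
PRICE_PER_MILLION = 5.00  # effective USD rate per 1M tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    total = input_tokens + output_tokens
    return total / 1_000_000 * PRICE_PER_MILLION
```

For example, a workload of 600k input and 400k output tokens lands at about $5.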

Zero data retention by default

Prompts and outputs are processed transiently and never used for training. Operational telemetry is retained for billing and reliability.

Rate limits and retries

Requests are rate-limited per API key. Handle 429 responses with backoff + retries, and upgrade for higher throughput.
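The recommended backoff-and-retry handling can be sketched as follows. The exception type and delays are illustrative; in practice you would raise on an HTTP 429 response and honor any Retry-After header the API returns.

```python
# Exponential backoff with jitter for rate-limited (429) requests.
import random
import time

class RateLimitError(Exception):
    """Raised when the API returns HTTP 429."""

def with_retries(send, max_attempts: int = 5, base_delay: float = 0.5):
    """Call send(); on a rate-limit error, back off exponentially and retry."""
    for attempt in range(max_attempts):
        try:
            return send()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```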

Account Authentication and API Integration

Sign in or create account

Create an account to save credits and API keys. No phone verification required.

Used for sign up, receipts, and password resets.
Use at least 12 characters with upper/lowercase letters, a number, and a symbol.
Log in accepts email or this username.
If you were referred, paste the code from your link before signing up.
Already have an account?

API Integration & Code Examples

Generate keys for programmatic access.

curl https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLIT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "abliterated-model",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://abliteration.ai/stonehenge.jpg"}}
      ]
    }],
    "stream": true
  }'
Browse API examples on GitHub

Support

Frequently Asked Questions

Quick answers about synthetic data generation, Policy Gateway governance, unrestricted LLM models, billing, and integrations.

What is an unrestricted LLM API?

An unrestricted LLM API gives developers access to less-restricted models without provider-side refusal filters. You control prompts, outputs, and policy enforcement; content must still comply with applicable laws and your organization's policies.

Do you support synthetic data generation for training models?

Yes. You can use abliteration.ai to generate synthetic training data, fine-tuning pairs, evaluation sets, labeled datasets, and edge-case examples through the same OpenAI-compatible API used for live inference.

What is the Policy Gateway?

Policy Gateway is an enterprise AI governance layer for abliteration.ai. It applies policy-as-code rules, quotas, rollout controls, and audit logs across apps, models, and agents.

How does Policy Gateway enforce policy?

Send requests to /policy/chat/completions with your policy_id, policy_user, and optional project ID. The gateway enforces your rules and returns decision metadata for audits.

Do I need a separate subscription for Policy Gateway?

Yes. Policy Gateway is an enterprise add-on billed monthly. It layers on top of your base token bundles.

What data does Policy Gateway store?

It stores policy configuration and enforcement metadata (decision, reason code, policy ID, project/user tags) for audits. Prompt/output retention remains off by default.

Is abliteration.ai OpenAI-compatible?

Yes. We expose an OpenAI-compatible /v1/chat/completions endpoint with the same request/response shape, so most OpenAI clients work by changing the base URL and key.

Do you retain prompts or chat logs?

No. We do not retain prompts or outputs by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.

How does usage-based token pricing work?

You’re billed on total tokens processed (input + output). Image inputs are metered as token equivalents and count as input tokens. Effective pricing is ~$5 per 1M tokens; use monthly subscription credits or one-time prepaid credits (subscriptions reset monthly with no rollover).