Create your account
Sign up to manage credits, generate API keys, and access Policy Gateway controls.
Standard AI providers block legitimate work—like creative writing, security research, and medical analysis—with rigid, unpredictable filters. Abliteration.ai fixes this by offering an uncensored LLM API paired with a built-in Policy Gateway. Stop dealing with random refusals and start defining your own safety rules using policy-as-code, PII redaction, audit logging, and quota controls.
Live Console
Stream responses, attach images, and inspect the request body.
Create a key in Integration to authenticate OpenAI-compatible /v1/chat/completions requests.
Run the sample prompt or copy a snippet below. Free preview includes 5 requests.
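The steps above can be sketched in Python. The base URL and key placeholder below are illustrative, not official values; the request body follows the OpenAI-compatible /v1/chat/completions shape described here.

```python
API_BASE = "https://api.abliteration.ai/v1"  # hypothetical base URL, for illustration
API_KEY = "YOUR_API_KEY"  # create one under Integration

def build_chat_request(prompt: str, model: str = "abliterated-model",
                       stream: bool = False) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Sending it with the `requests` package (not run here):
# import requests
# resp = requests.post(f"{API_BASE}/chat/completions",
#                      headers={"Authorization": f"Bearer {API_KEY}"},
#                      json=build_chat_request("Hello"))
```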
Your AI governance control plane. Define safety rules with policy-as-code, enforce rewrite/redact/escalate outcomes, manage quotas, and stream audit logs.
Developer-controlled, less-restricted models without provider-side refusals. Pair with Policy Gateway to enforce your own business rules.
Export every policy decision to Splunk, Datadog, Elastic, S3, and Azure Monitor with structured metadata for SOC 2 and compliance.
Drop-in replacement for OpenAI API. Change the base URL and keep your existing code. Works with all major SDKs.
~$5 per 1M tokens with no hidden fees. Prepaid credits never expire. Simple, predictable billing for your AI workloads.
Migrate from OpenAI, Azure, Anthropic, or any provider in minutes. Our migration tool patches your code automatically.
Policy Gateway is the control plane for your AI traffic. Define policy-as-code guardrails with rewrite, redact, escalate, or refuse outcomes. Manage per-project keys and quotas, test changes with shadow mode and canary rollouts, and export every decision to your SIEM.
Developer-controlled, less-censored models for roleplay, creative writing, and specialized workloads — free from provider-side refusal filters. Pair with Policy Gateway to enforce your own boundaries.
abliterated-model: Full /v1/chat/completions parity with a drop-in base URL swap. Patch existing OpenAI code in minutes with the migration tool.
Export policy decisions to Splunk, Datadog, Elastic, S3, and Azure Monitor with structured metadata for audits and investigations.
Send images alongside text prompts to extract information, summarize content, and answer questions with the same API format.
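A minimal sketch of an image-plus-text message, assuming the OpenAI-style multimodal content format (the exact supported fields may differ):

```python
def build_vision_request(question: str, image_url: str,
                         model: str = "abliterated-model") -> dict:
    """Build a chat payload mixing a text part and an image part."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }
```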
Effective price: ~$5 per 1M tokens on total input + output usage. Subscription credits reset monthly; prepaid credits do not expire.
Prompts and outputs are processed transiently and never used for training. Operational telemetry is retained for billing and reliability.
Requests are rate-limited per API key. Handle 429 responses with backoff + retries, and upgrade for higher throughput.
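One way to handle those 429 responses is exponential backoff with jitter. This is a generic sketch: `send` stands in for any zero-argument callable that issues the request (for example, a `requests.post` wrapped in a lambda).

```python
import random
import time

def post_with_backoff(send, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `send()` on HTTP 429, doubling the wait each attempt.

    `send` must return an object with a `status_code` attribute.
    """
    resp = send()
    for attempt in range(max_retries):
        if resp.status_code != 429:
            return resp
        # Exponential backoff plus a little jitter to avoid retry bursts.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
        resp = send()
    return resp
```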
Create an account to save credits and API keys. No phone verification required.
Generate keys for programmatic access.
Support
Quick answers about Policy Gateway governance, uncensored LLM models, billing, and integrations.
An uncensored LLM API gives developers access to less-censored models without provider-side refusal filters. You control prompts, outputs, and policy enforcement; content must still comply with applicable laws and your own policies.
Policy Gateway is an enterprise AI governance layer for abliteration.ai. It applies policy-as-code rules, quotas, rollout controls, and audit logs across apps, models, and agents.
Send requests to /policy/chat/completions with your policy_id, policy_user, and optional project ID. The gateway enforces your rules and returns decision metadata for audits.
Yes. Policy Gateway is an enterprise add-on billed monthly. It layers on top of your base token bundles.
It stores policy configuration and enforcement metadata (decision, reason code, policy ID, project/user tags) for audits. Prompt/output retention remains off by default.
Yes. We expose an OpenAI-compatible /v1/chat/completions endpoint with the same request/response shape, so most OpenAI clients work by changing the base URL and key.
No. We do not retain prompts or outputs by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
You’re billed on total tokens processed (input + output). Image inputs are metered as token equivalents and count as input tokens. Effective pricing is ~$5 per 1M tokens; use monthly subscription credits or one-time prepaid credits (subscriptions reset monthly with no rollover).
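The arithmetic above reduces to one line; as a worked example, 600K input tokens plus 400K output tokens is 1M total tokens, or about $5:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      rate_per_million: float = 5.0) -> float:
    """Estimate cost: billing is on total tokens (input + output) at ~$5/1M."""
    return (input_tokens + output_tokens) / 1_000_000 * rate_per_million
```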