Postman collection and OpenAPI spec

Use the Run in Postman badge to open the collection.

Download the collection and environment JSON if you want to import locally or fork the requests.

The OpenAPI specification is hosted at https://api.abliteration.ai/openapi.json and https://api.abliteration.ai/.well-known/openapi.json.

Quick start

Base URL: https://api.abliteration.ai/v1

Example request:
curl https://api.abliteration.ai/openapi.json
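
Since the API is OpenAI-compatible (see Service notes below), a first chat request can reuse an OpenAI-style client by switching the base URL. A minimal sketch, assuming the openai Python package; the API key and model id are placeholders:

  from openai import OpenAI

  # Point an OpenAI-compatible client at the abliteration.ai base URL.
  client = OpenAI(
      base_url="https://api.abliteration.ai/v1",
      api_key="YOUR_API_KEY",  # sent as a Bearer token
  )

  response = client.chat.completions.create(
      model="MODEL_ID",  # placeholder: pick an id from the uncensored models page
      messages=[{"role": "user", "content": "Hello!"}],
  )
  print(response.choices[0].message.content)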

Service notes

  • Pricing: Usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
  • Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
  • Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
  • Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
  • Throughput: Team plans include priority throughput. Actual throughput varies with demand.
  • Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.

Postman public docs

The collection is published at https://documenter.getpostman.com/view/51220842/2sBXVcmCop.

Download the Postman assets

Collection JSON: Download the collection.

Environment template: Download the environment (variables: base_url, api_key).

Streaming note

Postman may buffer server-sent events. For real-time token streaming, use curl or an SDK that supports text/event-stream responses.
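
As a reference for SDK-based streaming, the sketch below reuses the quick-start setup with the OpenAI-compatible stream flag; the model id is still a placeholder:

  from openai import OpenAI

  client = OpenAI(base_url="https://api.abliteration.ai/v1", api_key="YOUR_API_KEY")

  # stream=True asks for server-sent events; chunks arrive as tokens are generated.
  stream = client.chat.completions.create(
      model="MODEL_ID",  # placeholder
      messages=[{"role": "user", "content": "Write one sentence about streaming."}],
      stream=True,
  )
  for chunk in stream:
      if chunk.choices and chunk.choices[0].delta.content:
          print(chunk.choices[0].delta.content, end="", flush=True)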

Syndication targets

One OpenAPI spec feeds multiple ecosystems:

  • RapidAPI import
  • Postman import
  • SDK generation (OpenAPI Generator or similar)
  • API client autocompletion in editors
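
As a quick sanity check before feeding the spec to any of these targets, it can be fetched and inspected directly. A minimal Python sketch using the requests library (the printed paths depend on the live spec):

  import requests

  # Fetch the published spec once and inspect what it declares.
  spec = requests.get("https://api.abliteration.ai/openapi.json", timeout=10).json()
  print(spec.get("openapi"), "-", spec.get("info", {}).get("title"))
  for path in sorted(spec.get("paths", {})):
      print(path)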

Common errors & fixes

  • 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
  • 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
  • 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
  • 429 Rate limit: Back off and retry, pacing with the Retry-After header when present; a retry sketch follows this list.
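
For 429s in particular, a simple retry loop that honors Retry-After could look like the following Python sketch (using the requests library; the backoff values are illustrative and the key and model id are placeholders):

  import time
  import requests

  URL = "https://api.abliteration.ai/v1/chat/completions"
  HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
  PAYLOAD = {
      "model": "MODEL_ID",  # placeholder
      "messages": [{"role": "user", "content": "Hello!"}],
  }

  def post_with_backoff(max_retries=5):
      delay = 1.0
      for _ in range(max_retries):
          resp = requests.post(URL, headers=HEADERS, json=PAYLOAD, timeout=60)
          if resp.status_code != 429:
              resp.raise_for_status()  # surfaces 400/401/404 with details
              return resp.json()
          # Prefer the server's Retry-After hint; otherwise back off exponentially.
          time.sleep(float(resp.headers.get("Retry-After", delay)))
          delay *= 2
      raise RuntimeError("still rate limited after retries")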

Related links

  • OpenAI compatibility guide
  • Streaming chat completions
  • Rate limits and retries
  • See API Pricing
  • View Uncensored Models
  • Rate limits
  • Privacy policy