OpenAI-compatible example payloads
These payloads follow the OpenAI-compatible request shape.
Confirm advanced features in the OpenAPI spec before shipping.
Quick start
Base URL
Chat payload
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Reply with: payload verified." }
  ],
  "temperature": 0.2
}
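A minimal sketch of building this request with Python's standard library. The base URL and API key below are placeholders, not real endpoints; substitute your own values, and uncomment the final line to actually send.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build an OpenAI-style chat completions POST request (not sent here)."""
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

payload = {
    "model": "abliterated-model",
    "messages": [{"role": "user", "content": "Reply with: payload verified."}],
    "temperature": 0.2,
}
# "https://api.example.com/v1" is a placeholder base URL.
req = build_chat_request("https://api.example.com/v1", "YOUR_API_KEY", payload)
# response = urllib.request.urlopen(req)  # uncomment to send for real
```

Note that the path is /chat/completions appended to a base URL that already ends in /v1, which matters for the 404 case described under common errors.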
Service notes
- Pricing: usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
Streaming payload
Use stream: true to receive tokens as they are generated.
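On the consuming side, the response arrives as server-sent events. A sketch of accumulating the streamed text, assuming OpenAI-style "data:" chunk lines terminated by "data: [DONE]" (the sample lines are illustrative):

```python
import json

def collect_stream_text(sse_lines) -> str:
    """Accumulate assistant text from OpenAI-style SSE data lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blanks / keep-alive comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first delta may carry only the role
    return "".join(parts)

sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]
```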
Streaming payload
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Stream a short reply." }
  ],
  "temperature": 0.2,
  "stream": true
}
Tool calling payload (schema example)
Validate tool definitions against the OpenAI-compatible schema before production use.
Tool calling payload
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Look up the status of order 123." }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_order_status",
        "description": "Look up an order status by id.",
        "parameters": {
          "type": "object",
          "properties": {
            "order_id": { "type": "string" }
          },
          "required": ["order_id"]
        }
      }
    }
  ]
}
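When the model answers with a tool call, the client runs the named function and sends the result back as a tool-role message. A sketch of that round trip, assuming the OpenAI-style tool_calls shape; the local get_order_status stub is hypothetical:

```python
import json

def get_order_status(order_id: str) -> str:
    """Local stand-in for the tool; a real lookup would go here."""
    return f"Order {order_id}: shipped"

TOOLS = {"get_order_status": get_order_status}

def run_tool_calls(message: dict) -> list[dict]:
    """Execute each tool call in an assistant message and build
    the 'tool' role messages to append to the conversation."""
    results = []
    for call in message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": fn(**args),
        })
    return results

assistant_msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_order_status",
                     "arguments": "{\"order_id\": \"123\"}"},
    }],
}
```

The returned tool messages go into the next request's messages array so the model can compose its final answer.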
Test vector
Expected output should include payload verified.
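That expectation can be checked programmatically against a parsed response body; the sample response below is illustrative, not a recorded API reply:

```python
def check_test_vector(response: dict) -> bool:
    """True if the first choice's message content contains the expected marker."""
    content = response["choices"][0]["message"]["content"]
    return "payload verified" in content

sample_response = {
    "choices": [{"message": {"role": "assistant",
                             "content": "payload verified."}}]
}
```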
Request payload
{
  "model": "abliterated-model",
  "messages": [
    {
      "role": "user",
      "content": "Reply with: payload verified."
    }
  ],
  "temperature": 0.2,
  "stream": false
}
Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Too Many Requests: Back off and retry. Use the Retry-After header, when present, for pacing.
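Many 400s come from a malformed messages array, so a client-side pre-check can catch them before the request is sent. A sketch assuming string-only content (OpenAI-style content can also be an array of parts, which this deliberately ignores):

```python
def validate_messages(messages) -> list[str]:
    """Return a list of problems with a messages array; empty means OK."""
    problems = []
    if not isinstance(messages, list) or not messages:
        return ["messages must be a non-empty array"]
    for i, m in enumerate(messages):
        if not isinstance(m, dict):
            problems.append(f"messages[{i}] is not an object")
            continue
        if m.get("role") not in {"system", "user", "assistant", "tool"}:
            problems.append(f"messages[{i}].role is missing or invalid")
        if not isinstance(m.get("content"), str):
            problems.append(f"messages[{i}].content must be a string")
    return problems
```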