abliteration.ai - Uncensored LLM API Platform

Troubleshooting

Streaming chunks not parsing

Use this checklist to diagnose the issue quickly.

Run the test vector to confirm the fix.

Quick start

Base URL: https://api.abliteration.ai/v1
Sanity check request
curl https://api.abliteration.ai/v1/chat/completions \
  -H "Authorization: Bearer $ABLIT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Reply with: base url fixed." }
  ],
  "temperature": 0.2
}'


Service notes

  • Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
  • Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
  • Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
  • Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
  • Throughput: Team plans include priority throughput. Actual throughput varies with demand.
  • Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
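
The backoff advice above can be sketched as a small helper. This is a minimal sketch, not part of the platform's SDK; the function name and defaults are illustrative. It prefers a server-supplied Retry-After value (in seconds) and otherwise falls back to capped exponential backoff with jitter.

```python
import random

def retry_delay(attempt, retry_after=None, base=1.0, cap=30.0):
    """Seconds to wait before retrying a 429 (or transient 5xx) response.

    Prefer the server's Retry-After header when present; otherwise use
    capped exponential backoff with a little jitter to avoid thundering herds.
    """
    if retry_after is not None:
        return float(retry_after)
    # attempt is 0-based: 1s, 2s, 4s, ... capped at `cap`, plus jitter.
    return min(cap, base * (2 ** attempt)) + random.uniform(0, 0.5)
```

Call this in your request loop whenever a 429 arrives, sleeping for the returned duration before the next attempt.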

Symptoms

  • Raw data: lines appear in the client output instead of parsed deltas.
  • Client hangs or never reaches [DONE].
  • JSON parse errors mid-stream.

Fix checklist

  • Ensure stream: true is set in the request payload.
  • Parse data: lines and ignore empty keep-alives.
  • Do not buffer the response body before parsing.
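
The checklist above can be sketched as a small parser. This is a minimal sketch assuming the OpenAI-style SSE wire format the compatibility note describes (one JSON chunk per `data:` line, empty keep-alive lines between events, and a `data: [DONE]` sentinel); the function name is illustrative. Feed it decoded lines as they arrive (e.g. from `requests.post(..., stream=True).iter_lines(decode_unicode=True)`) rather than a buffered body.

```python
import json

def parse_sse_stream(lines):
    """Yield content deltas from an OpenAI-style streaming response.

    `lines` is an iterable of decoded lines, consumed incrementally --
    never buffer the whole response body before parsing.
    """
    for line in lines:
        if not line:                     # empty keep-alive line between events
            continue
        if not line.startswith("data: "):
            continue                     # ignore comments / other SSE fields
        payload = line[len("data: "):]
        if payload == "[DONE]":          # end-of-stream sentinel
            return
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Simulated wire format (what the raw `data:` lines from the API look like):
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    '',
    'data: {"choices":[{"delta":{"content":"base url"}}]}',
    'data: {"choices":[{"delta":{"content":" fixed."}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_stream(sample)))  # base url fixed.
```

If you see the raw `data:` lines from the sample above in your own output, your client is printing the wire format instead of running a parser like this one.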

Validate the fix

Run a short prompt and confirm the response includes the expected text.

Validation payload
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Reply with: base url fixed." }
  ],
  "temperature": 0.2
}
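
The confirmation step can be automated with a small check. This is a minimal sketch assuming the OpenAI-style non-streaming response shape noted under Compatibility; the helper name and the example body are illustrative, not an actual API response.

```python
import json

def contains_expected(body: str, expected: str = "base url fixed") -> bool:
    """Return True if the completion's message content includes `expected`."""
    data = json.loads(body)
    content = data["choices"][0]["message"]["content"]
    return expected in content.lower()

# Illustrative response body (shape only; values are made up):
body = '{"choices":[{"message":{"role":"assistant","content":"Base url fixed."}}]}'
print(contains_expected(body))  # True
```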


Test vector

Expected output should include base url fixed.

Request payload
{
  "model": "abliterated-model",
  "messages": [
    {
      "role": "user",
      "content": "Reply with: base url fixed."
    }
  ],
  "temperature": 0.2,
  "stream": true
}

Common errors & fixes

  • Streaming chunks not parsing: Ensure stream: true is set in the request payload.

Related links

  • OpenAI compatibility guide
  • Instant migration tool
  • Compatibility matrix
  • Streaming chat completions
  • See API Pricing
  • View Uncensored Models
  • Rate limits
  • Privacy policy

© 2025 Social Keyboard, Inc. All rights reserved.