Vision and multimodal inputs
Vision-capable models accept images alongside text in the same request.
Use the OpenAI-compatible content array with type: "text" and type: "image_url" parts.
Choose a vision-capable model id from your models list before sending images.
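If you are unsure which ids are vision-capable, enumerate the available models first. Below is a minimal sketch, assuming the service exposes the OpenAI-compatible GET /v1/models endpoint; check the returned ids against the models list documentation.

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.ABLIT_KEY,
  baseURL: "https://api.abliteration.ai/v1",
});

// Enumerate model ids; pick a vision-capable one for image requests.
const models = await client.models.list();
for (const model of models.data) {
  console.log(model.id);
}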
Quick start
Example request
import OpenAI from "openai";

// The OpenAI SDK works unchanged; only the base URL and API key differ.
const client = new OpenAI({
  apiKey: process.env.ABLIT_KEY,
  baseURL: "https://api.abliteration.ai/v1",
});

const response = await client.chat.completions.create({
  model: "vision-model-id", // replace with a vision-capable id from your models list
  messages: [
    {
      role: "user",
      // A single content array mixes text parts and image parts.
      content: [
        { type: "text", text: "Describe the image in one sentence." },
        { type: "image_url", image_url: { url: "https://example.com/image.jpg" } },
      ],
    },
  ],
});

console.log(response.choices[0]?.message?.content);

Service notes
- Pricing: Usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with exponential backoff and respect any Retry-After header; a retry sketch follows this list.
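A minimal backoff sketch, assuming direct fetch access to the /v1/chat/completions endpoint; the retry count and delays are illustrative defaults, not service guarantees.

async function postWithBackoff(body: unknown, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch("https://api.abliteration.ai/v1/chat/completions", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.ABLIT_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });
    if (res.status !== 429) return res;
    // Prefer the server's Retry-After hint; otherwise back off exponentially.
    const retryAfter = Number(res.headers.get("retry-after"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Rate limited: retries exhausted");
}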
Message content format
For vision inputs, set message.content to an array of parts that mixes text and image URLs.
{
  "role": "user",
  "content": [
    { "type": "text", "text": "What is in this image?" },
    { "type": "image_url", "image_url": { "url": "https://example.com/cat.jpg" } }
  ]
}

Image sources and formats
Use stable HTTPS URLs or inline base64 data URLs (a data URL sketch follows the list below). Smaller images upload faster and reduce latency.
- Prefer HTTPS URLs that remain valid during the request.
- Compress large images before sending to reduce payload size.
- Limit the number of images per request to keep latency predictable.
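A minimal data URL sketch, assuming the service accepts OpenAI-style base64 data URLs in image_url.url; the file name and MIME type are placeholders, and client is the configured client from the quick start.

import { readFileSync } from "node:fs";

// Encode a local image as a base64 data URL instead of hosting it over HTTPS.
const base64 = readFileSync("photo.jpg").toString("base64");
const dataUrl = `data:image/jpeg;base64,${base64}`;

const response = await client.chat.completions.create({
  model: "vision-model-id",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this photo." },
        { type: "image_url", image_url: { url: dataUrl } },
      ],
    },
  ],
});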
Latency and quality tips
Guide the model with precise instructions about the level of detail you need.
- Ask for structured output if you need reliable parsing (see the sketch after this list).
- Crop or annotate images to focus the model on relevant regions.
- Combine vision with text context for better grounding.
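A minimal structured-output sketch; it requests JSON in the instruction text rather than relying on a response_format parameter, since parameter support may vary by model. The image URL and schema are placeholders.

const response = await client.chat.completions.create({
  model: "vision-model-id",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: 'List the objects in this image as JSON in the form {"objects": ["..."]}. Reply with JSON only.',
        },
        { type: "image_url", image_url: { url: "https://example.com/shelf.jpg" } },
      ],
    },
  ],
});

// Parse defensively: the model may still deviate from the requested schema.
const parsed = JSON.parse(response.choices[0]?.message?.content ?? "{}");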
Streaming vision responses
Vision outputs can be streamed the same way as text outputs. See the streaming guide for UI patterns.
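A minimal streaming sketch with the same client; it assumes vision requests stream token deltas exactly like text-only requests, per the streaming guide.

const stream = await client.chat.completions.create({
  model: "vision-model-id",
  stream: true,
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe the image as you go." },
        { type: "image_url", image_url: { url: "https://example.com/image.jpg" } },
      ],
    },
  ],
});

// Print deltas as they arrive; the final chunk may carry an empty delta.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}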
Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.
- Unsupported model: Use a vision-capable model id from your models list.
- Image fetch failed: Verify the image URL is publicly accessible and uses HTTPS.
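A minimal error-handling sketch mapping the codes above to fixes, assuming the openai Node SDK's APIError type; the console messages are illustrative.

import OpenAI from "openai";

try {
  await client.chat.completions.create({
    model: "vision-model-id",
    messages: [{ role: "user", content: "ping" }],
  });
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // err.status corresponds to the HTTP codes listed above.
    switch (err.status) {
      case 401: console.error("Check that your API key is set and sent as a Bearer token."); break;
      case 404: console.error("Check that the base URL ends with /v1."); break;
      case 429: console.error("Rate limited: back off and retry."); break;
      default: console.error(`API error ${err.status}: ${err.message}`);
    }
  } else {
    throw err;
  }
}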