LangChain integration
LangChain can call any OpenAI-compatible endpoint. Configure ChatOpenAI with the abliteration.ai base URL and your API key.
Your chains, tools, and prompts stay the same. Only the provider configuration changes.
Quick start
Example request
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="abliterated-model",
    base_url="https://api.abliteration.ai/v1",
    api_key="YOUR_ABLIT_KEY",
)

response = llm.invoke("Give me a one-sentence summary of Stonehenge.")
print(response.content)
Service notes
- Pricing: usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
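Because billing counts input plus output tokens, a small helper makes the pricing note above concrete. This is a sketch assuming a single flat rate; the ~$5 per 1M tokens figure is taken from the note above and may differ by plan, so treat `rate_per_million` as a placeholder and check the pricing page.

```python
def estimated_cost_usd(input_tokens: int, output_tokens: int,
                       rate_per_million: float = 5.0) -> float:
    """Rough cost estimate for one request.

    rate_per_million mirrors the "~$5 per 1M tokens" note above; it is
    an assumption, not a quoted price -- check the pricing page.
    Billing is on total tokens (input + output).
    """
    return (input_tokens + output_tokens) / 1_000_000 * rate_per_million

# 600k input + 400k output tokens at the assumed rate:
print(estimated_cost_usd(600_000, 400_000))  # → 5.0
```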
Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.