Integration guide
How to use Azure Functions with an OpenAI-compatible endpoint (C#)
Azure Functions code can call any OpenAI-compatible API by switching only the base URL and API key.
This guide shows a C# example plus a test vector you can run to validate responses.
Quick start
Base URL: https://api.abliteration.ai/v1
C# request example
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

// Build an OpenAI-compatible chat completions payload.
var payload = new
{
    model = "abliterated-model",
    messages = new[] { new { role = "user", content = "Respond with: Azure Functions C# ready." } },
    temperature = 0.2
};
var json = JsonSerializer.Serialize(payload);

// Read the API key from the environment; never hard-code it in source.
var request = new HttpRequestMessage(HttpMethod.Post, "https://api.abliteration.ai/v1/chat/completions");
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", Environment.GetEnvironmentVariable("ABLIT_KEY"));
request.Content = new StringContent(json, Encoding.UTF8, "application/json");

using var client = new HttpClient();
var response = await client.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync());
Service notes
- Pricing: usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
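The rate-limit guidance above can be sketched as a retry loop with exponential backoff. This is an illustrative helper, not part of the provider's SDK; the `ComputeDelay` schedule and the attempt cap are assumptions you should tune for your workload.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class RetryHelper
{
    // Exponential backoff: 1s, 2s, 4s, ... (illustrative schedule).
    public static TimeSpan ComputeDelay(int attempt) =>
        TimeSpan.FromSeconds(Math.Pow(2, attempt));

    // Send a request, retrying on 429 and honoring Retry-After when present.
    public static async Task<HttpResponseMessage> SendWithRetryAsync(
        HttpClient client, Func<HttpRequestMessage> makeRequest, int maxAttempts = 4)
    {
        for (var attempt = 0; ; attempt++)
        {
            var response = await client.SendAsync(makeRequest());
            if (response.StatusCode != (HttpStatusCode)429 || attempt + 1 >= maxAttempts)
                return response;

            // Prefer the server's Retry-After hint; otherwise back off exponentially.
            var delay = response.Headers.RetryAfter?.Delta ?? ComputeDelay(attempt);
            await Task.Delay(delay);
        }
    }
}
```

A request factory (`makeRequest`) is used because an `HttpRequestMessage` cannot be sent twice; each retry needs a fresh instance.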
Configure Azure Functions
Follow this checklist to point your integration at the OpenAI-compatible endpoint.
- Add ABLIT_KEY to your function environment and keep requests on the server.
- Set the base URL to https://api.abliteration.ai/v1.
- Provide your abliteration.ai API key as a Bearer token (ABLIT_KEY).
- Use model "abliterated-model" to match the provider naming.
- Keep the messages schema unchanged (role/content).
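The checklist above amounts to three settings: the base URL, the key, and the model id. A minimal sketch of wiring them together follows; the `ChatClient` class name is illustrative, not a library type.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Minimal client: the only provider-specific pieces are the base URL,
// the API key, and the model id.
sealed class ChatClient
{
    public string BaseUrl { get; }
    private readonly string _apiKey;
    private static readonly HttpClient Http = new();

    public ChatClient(string baseUrl, string apiKey)
        => (BaseUrl, _apiKey) = (baseUrl.TrimEnd('/'), apiKey);

    // Endpoint path follows the OpenAI-style /chat/completions convention.
    public Uri CompletionsUri => new($"{BaseUrl}/chat/completions");

    public async Task<string> CompleteAsync(string userContent)
    {
        var payload = new
        {
            model = "abliterated-model",
            messages = new[] { new { role = "user", content = userContent } },
        };
        var request = new HttpRequestMessage(HttpMethod.Post, CompletionsUri)
        {
            Content = new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json"),
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _apiKey);
        var response = await Http.SendAsync(request);
        return await response.Content.ReadAsStringAsync();
    }
}
```

Instantiate it once per function app with `new ChatClient("https://api.abliteration.ai/v1", Environment.GetEnvironmentVariable("ABLIT_KEY"))` so the key stays server-side.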
OpenAI-compatible payload
Use this request body as a known-good payload before customizing parameters.
Chat completions payload
{
"model": "abliterated-model",
"messages": [
{ "role": "user", "content": "Respond with: Azure Functions C# ready." }
],
"temperature": 0.2
}

Streaming and tool calling readiness
If you stream responses or send tool definitions, keep the OpenAI-compatible schema and validate against the OpenAPI spec.
- Set stream: true to receive chunks as they arrive.
- Parse SSE data lines and ignore keep-alives.
- Validate tool schemas before sending production traffic.
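The SSE parsing step above can be sketched as a small line filter. `TryExtractData` is an illustrative helper, and the `[DONE]` sentinel is assumed to follow the OpenAI streaming convention.

```csharp
using System;

static class SseParser
{
    // SSE chunks arrive as lines like "data: {...}". Blank lines and
    // comment lines (starting with ":") are keep-alives.
    // Returns the JSON payload, or null for keep-alives and the
    // terminal "[DONE]" sentinel.
    public static string? TryExtractData(string line)
    {
        if (string.IsNullOrWhiteSpace(line) || line.StartsWith(":"))
            return null; // keep-alive / comment
        if (!line.StartsWith("data:"))
            return null; // not a data field
        var payload = line.Substring("data:".Length).Trim();
        return payload == "[DONE]" ? null : payload;
    }
}
```

Feed each line of the response stream through this filter and deserialize only the non-null results.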
Test vector
Expected output should include "Azure Functions C# ready."
Request payload
{
"model": "abliterated-model",
"messages": [
{
"role": "user",
"content": "Respond with: Azure Functions C# ready."
}
],
"temperature": 0.2,
"stream": false
}

Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.
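The fixes above can be surfaced in logs with a small status-code guard; the hint strings are illustrative, not provider error messages.

```csharp
using System.Net;

static class ErrorHints
{
    // Map common failure codes to the likely fix; unknown codes return null.
    public static string? HintFor(HttpStatusCode code) => code switch
    {
        HttpStatusCode.Unauthorized => "Check that ABLIT_KEY is set and sent as a Bearer token.",
        HttpStatusCode.NotFound => "Ensure the base URL ends with /v1 and the path is /chat/completions.",
        HttpStatusCode.BadRequest => "Verify the model id and the { role, content } message shape.",
        (HttpStatusCode)429 => "Back off and retry; honor Retry-After if present.",
        _ => null,
    };
}
```

Call `ErrorHints.HintFor(response.StatusCode)` before logging a failed request so the remediation appears next to the error.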