Integrations
Uncensored LLM for Go
Use the standard Go net/http client to call the /v1/chat/completions endpoint; no SDK is required.
Quick start
Example request
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// OpenAI-style chat completion request body.
	payload := map[string]any{
		"model": "abliterated-model",
		"messages": []map[string]string{
			{"role": "user", "content": "Hello from Go."},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest("POST", "https://api.abliteration.ai/v1/chat/completions", bytes.NewBuffer(body))
	if err != nil {
		panic(err)
	}
	// Read the API key from the environment; never hard-code it.
	req.Header.Set("Authorization", "Bearer "+os.Getenv("ABLIT_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}

Service notes
- Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format; only the base URL changes.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.