
Integration guide

How to use Go net/http with an OpenAI-compatible endpoint

Go's standard net/http client works with OpenAI-compatible APIs; you only need to switch the base URL and API key.

This guide shows a Go example plus a test vector you can run to validate responses.

Quick start

Base URL: https://api.abliteration.ai/v1
Go request example
package main

import (
  "bytes"
  "fmt"
  "io"
  "net/http"
  "os"
)

func main() {
  payload := []byte(`{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Respond with: Go net/http Go ready." }
  ],
  "temperature": 0.2
}`)
  req, err := http.NewRequest("POST", "https://api.abliteration.ai/v1/chat/completions", bytes.NewBuffer(payload))
  if err != nil {
    panic(err)
  }
  // Authenticate with the abliteration.ai API key from the ABLIT_KEY env var.
  req.Header.Set("Authorization", "Bearer "+os.Getenv("ABLIT_KEY"))
  req.Header.Set("Content-Type", "application/json")

  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    panic(err)
  }
  defer resp.Body.Close()
  body, err := io.ReadAll(resp.Body)
  if err != nil {
    panic(err)
  }
  fmt.Println(string(body))
}

Service notes

  • Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
  • Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
  • Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
  • Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
  • Throughput: Team plans include priority throughput. Actual throughput varies with demand.
  • Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.

Configure Go net/http

Follow this checklist to point your integration at the OpenAI-compatible endpoint; a small client sketch follows the list.

  • Store ABLIT_KEY server-side and call the OpenAI-compatible endpoint from your server routes.
  • Set the base URL to https://api.abliteration.ai/v1.
  • Provide your abliteration.ai API key as a Bearer token (ABLIT_KEY).
  • Use model "abliterated-model" to match the provider naming.
  • Keep the messages schema unchanged (role/content).
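
If you call the endpoint from several server routes, a thin wrapper can keep these settings in one place. The sketch below follows the checklist; the Client and Chat names are illustrative, not part of any SDK.

Go client configuration sketch
package main

import (
  "bytes"
  "fmt"
  "io"
  "net/http"
  "os"
)

// Client holds the OpenAI-compatible endpoint settings from the checklist.
type Client struct {
  BaseURL string
  APIKey  string
  HTTP    *http.Client
}

func NewClient() *Client {
  return &Client{
    BaseURL: "https://api.abliteration.ai/v1",
    APIKey:  os.Getenv("ABLIT_KEY"), // keep the key server-side
    HTTP:    http.DefaultClient,
  }
}

// Chat posts an OpenAI-compatible chat completions payload and returns the raw response body.
func (c *Client) Chat(payload []byte) ([]byte, error) {
  req, err := http.NewRequest("POST", c.BaseURL+"/chat/completions", bytes.NewBuffer(payload))
  if err != nil {
    return nil, err
  }
  req.Header.Set("Authorization", "Bearer "+c.APIKey)
  req.Header.Set("Content-Type", "application/json")

  resp, err := c.HTTP.Do(req)
  if err != nil {
    return nil, err
  }
  defer resp.Body.Close()
  return io.ReadAll(resp.Body)
}

func main() {
  body, err := NewClient().Chat([]byte(`{
  "model": "abliterated-model",
  "messages": [{ "role": "user", "content": "Respond with: Go net/http Go ready." }],
  "temperature": 0.2
}`))
  if err != nil {
    panic(err)
  }
  fmt.Println(string(body))
}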

OpenAI-compatible payload

Use this request body as a known-good payload before customizing parameters.

Chat completions payload
{
  "model": "abliterated-model",
  "messages": [
    { "role": "user", "content": "Respond with: Go net/http Go ready." }
  ],
  "temperature": 0.2
}
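
If you prefer building the payload in Go rather than as a raw string, encoding/json with local structs produces the same body. The ChatRequest and Message types below are local to this sketch, not from an SDK.

Go payload construction sketch
package main

import (
  "encoding/json"
  "fmt"
)

// Message mirrors the role/content schema used above.
type Message struct {
  Role    string `json:"role"`
  Content string `json:"content"`
}

// ChatRequest mirrors the chat completions payload fields used above.
type ChatRequest struct {
  Model       string    `json:"model"`
  Messages    []Message `json:"messages"`
  Temperature float64   `json:"temperature"`
}

func main() {
  payload, err := json.Marshal(ChatRequest{
    Model: "abliterated-model",
    Messages: []Message{
      {Role: "user", Content: "Respond with: Go net/http Go ready."},
    },
    Temperature: 0.2,
  })
  if err != nil {
    panic(err)
  }
  fmt.Println(string(payload)) // same fields as the known-good payload above
}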

Streaming and tool calling readiness

If you stream responses or send tool definitions, keep the OpenAI-compatible schema and validate against the OpenAPI spec.

  • Set stream: true to receive chunks as they arrive.
  • Parse SSE data lines and ignore keep-alives (see the streaming sketch after this list).
  • Validate tool schemas before sending production traffic.
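
Below is a minimal streaming sketch. It assumes the endpoint emits OpenAI-style SSE, where each event is a "data: {json}" line and the stream ends with "data: [DONE]"; the chunk struct covers only the delta content field.

Go streaming example (sketch)
package main

import (
  "bufio"
  "bytes"
  "encoding/json"
  "fmt"
  "net/http"
  "os"
  "strings"
)

// chunk covers only the delta content field of a streamed chat completion chunk.
type chunk struct {
  Choices []struct {
    Delta struct {
      Content string `json:"content"`
    } `json:"delta"`
  } `json:"choices"`
}

func main() {
  payload := []byte(`{
  "model": "abliterated-model",
  "messages": [{ "role": "user", "content": "Respond with: Go net/http Go ready." }],
  "stream": true
}`)
  req, err := http.NewRequest("POST", "https://api.abliteration.ai/v1/chat/completions", bytes.NewBuffer(payload))
  if err != nil {
    panic(err)
  }
  req.Header.Set("Authorization", "Bearer "+os.Getenv("ABLIT_KEY"))
  req.Header.Set("Content-Type", "application/json")

  resp, err := http.DefaultClient.Do(req)
  if err != nil {
    panic(err)
  }
  defer resp.Body.Close()

  scanner := bufio.NewScanner(resp.Body)
  for scanner.Scan() {
    line := scanner.Text()
    if !strings.HasPrefix(line, "data: ") {
      continue // skip keep-alives and blank lines
    }
    data := strings.TrimPrefix(line, "data: ")
    if data == "[DONE]" {
      break
    }
    var c chunk
    if err := json.Unmarshal([]byte(data), &c); err != nil {
      continue // skip anything that is not a JSON chunk in this sketch
    }
    if len(c.Choices) > 0 {
      fmt.Print(c.Choices[0].Delta.Content)
    }
  }
  fmt.Println()
}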

Test vector

Expected output should include Go net/http Go ready.

Request payload
{
  "model": "abliterated-model",
  "messages": [
    {
      "role": "user",
      "content": "Respond with: Go net/http Go ready."
    }
  ],
  "temperature": 0.2,
  "stream": false
}
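
To check the test vector programmatically, decode the non-streaming response and look for the expected phrase. This sketch assumes the OpenAI-style response shape, with the text at choices[0].message.content.

Go test vector check (sketch)
package main

import (
  "encoding/json"
  "fmt"
  "strings"
)

// completion covers only the fields needed to read the reply text.
type completion struct {
  Choices []struct {
    Message struct {
      Content string `json:"content"`
    } `json:"message"`
  } `json:"choices"`
}

// validateTestVector returns an error if the raw response body does not
// contain the expected phrase.
func validateTestVector(body []byte) error {
  var c completion
  if err := json.Unmarshal(body, &c); err != nil {
    return fmt.Errorf("decode response: %w", err)
  }
  if len(c.Choices) == 0 {
    return fmt.Errorf("no choices in response")
  }
  if !strings.Contains(c.Choices[0].Message.Content, "Go net/http Go ready.") {
    return fmt.Errorf("unexpected content: %q", c.Choices[0].Message.Content)
  }
  return nil
}

func main() {
  // Pass the raw body returned by the quick start request; the sample below
  // stands in for a real response.
  sample := []byte(`{"choices":[{"message":{"content":"Go net/http Go ready."}}]}`)
  fmt.Println(validateTestVector(sample)) // <nil> means the test vector passed
}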

Common errors & fixes

  • 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
  • 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
  • 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
  • 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.
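
A backoff helper for 429 responses might look like the sketch below. It honors Retry-After (in seconds) when present and otherwise doubles a base delay; the attempt count and delays are illustrative, not documented limits. Requests are rebuilt per attempt because a request body cannot be reused once sent.

Go retry helper (sketch)
package ablit // illustrative package name

import (
  "fmt"
  "net/http"
  "strconv"
  "time"
)

// doWithRetry sends a request built by build(), retrying on 429 responses.
// It honors Retry-After (seconds) when present, else backs off exponentially.
func doWithRetry(client *http.Client, build func() (*http.Request, error)) (*http.Response, error) {
  backoff := time.Second
  for attempt := 0; attempt < 5; attempt++ {
    req, err := build()
    if err != nil {
      return nil, err
    }
    resp, err := client.Do(req)
    if err != nil {
      return nil, err
    }
    if resp.StatusCode != http.StatusTooManyRequests {
      return resp, nil
    }
    // Prefer the server's Retry-After pacing over our own backoff.
    wait := backoff
    if ra := resp.Header.Get("Retry-After"); ra != "" {
      if secs, err := strconv.Atoi(ra); err == nil {
        wait = time.Duration(secs) * time.Second
      }
    }
    resp.Body.Close()
    time.Sleep(wait)
    backoff *= 2
  }
  return nil, fmt.Errorf("still rate limited after retries")
}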

Related links

  • OpenAI compatibility guide
  • Instant migration tool
  • Compatibility matrix
  • Streaming chat completions
  • See API Pricing
  • View Uncensored Models
  • Rate limits
  • Privacy policy