Roleplay backend guide for interactive entertainment
Build roleplay and interactive entertainment apps on a developer-controlled, OpenAI-compatible LLM backend.
This guide covers prompt structure, character memory, refusal mitigation, and integration with frontends like SillyTavern.
Quick start
Send an OpenAI-style chat completion request to POST /v1/chat/completions:
{
"model": "abliterated-model",
"messages": [
{ "role": "system", "content": "You are a roleplay engine. Stay in character and avoid breaking the fourth wall." },
{ "role": "user", "content": "Scene: A medieval shopkeeper greets a traveler entering the shop." }
],
"temperature": 0.9,
"stream": true
}
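The same request can be sent with the official OpenAI Python SDK by pointing it at the compatible base URL. A minimal streaming sketch, assuming the openai package is installed; the base URL and model id follow the examples in this guide, and the key is a placeholder for yours:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.abliteration.ai/v1",  # OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

stream = client.chat.completions.create(
    model="abliterated-model",
    messages=[
        {"role": "system", "content": "You are a roleplay engine. Stay in character and avoid breaking the fourth wall."},
        {"role": "user", "content": "Scene: A medieval shopkeeper greets a traveler entering the shop."},
    ],
    temperature=0.9,
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; print tokens as they arrive.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)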
Service notes
- Pricing: usage-based (~$5 per 1M tokens), billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
What roleplay backends need
Roleplay and interactive entertainment platforms depend on consistent character voice, low latency, and long-context memory.
- Stable, less-censored behavior that avoids surprise refusals mid-scene.
- Streaming for fast time-to-first-token and responsive dialogue.
- Long context windows to keep character cards, lore, and conversation history intact.
- Sampling controls (temperature, top_p) to tune creativity vs. consistency (see the sketch after this list).
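As a rough illustration, you might keep two sampling presets and switch between them per scene. The parameter names are standard OpenAI-style fields; the values are illustrative starting points to tune for your characters, not recommendations:

# Sampling presets for tuning creativity vs. consistency.
CONSISTENT = {"temperature": 0.4, "top_p": 0.9}   # tighter, more repeatable voice
CREATIVE = {"temperature": 1.0, "top_p": 0.95}    # looser, more varied prose

body = {
    "model": "abliterated-model",
    "messages": [{"role": "user", "content": "Scene: A stormy skyport."}],
    "stream": True,
    **CREATIVE,  # or **CONSISTENT for long-running characters
}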
Prompt architecture for character cards
Keep the system message short and durable, then load character cards and scene context as user content.
- System: define roleplay rules and output format (stay in character; avoid out-of-character (OOC) commentary unless the player requests it).
- User: provide character card data, scene setting, and player action.
- Assistant: return only in-character dialogue and actions.
System: You are a roleplay engine. Stay in character and avoid breaking the fourth wall.

User:
Character: Lyra, a witty airship mechanic with a secret.
Setting: A stormy skyport, engines roaring.
Player action: "I show you a broken compass and ask for help."
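In code, that layout might be assembled from a character card like this. This is a hypothetical helper; the card field names (name, description) are assumptions for illustration, not a required schema:

SYSTEM_PROMPT = (
    "You are a roleplay engine. Stay in character and avoid breaking the fourth wall."
)

def build_messages(card: dict, setting: str, player_action: str) -> list[dict]:
    # Short, durable system prompt; character card and scene as user content.
    card_text = (
        f"Character: {card['name']}, {card['description']}\n"
        f"Setting: {setting}\n"
        f'Player action: "{player_action}"'
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": card_text},
    ]

messages = build_messages(
    {"name": "Lyra", "description": "a witty airship mechanic with a secret"},
    "A stormy skyport, engines roaring.",
    "I show you a broken compass and ask for help.",
)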
Refusal mitigation without breaking immersion
Roleplay apps often hit refusals from default provider safety layers. A developer-controlled backend keeps the behavior predictable while you enforce your own policies.
- Use less-censored or uncensored models to avoid hard refusals in lawful, consensual roleplay.
- Apply your own policy rules or allowlists at the application layer (a minimal filter is sketched after this list).
- Provide gentle in-world redirections instead of hard refusals when needed.
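A deliberately naive sketch of the application-layer idea: check player input against your own policy list and, on a match, return an in-world redirection instead of letting a hard refusal break the scene. Real policy enforcement needs far more than keyword matching; the topic set and the redirect line below are placeholders.

BLOCKED_TOPICS = {"example_blocked_topic"}  # placeholder for your policy list

IN_WORLD_REDIRECT = (
    '*Lyra shakes her head.* "Storm\'s too loud for that talk. '
    'Hand me that compass instead."'
)

def apply_policy(player_input: str) -> str | None:
    # Return an in-world redirection if the input violates policy, else None.
    lowered = player_input.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return IN_WORLD_REDIRECT
    return None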
Integration with roleplay platforms
Most roleplay frontends support OpenAI-compatible settings.
- SillyTavern: set API source to OpenAI and override the base URL to https://api.abliteration.ai/v1.
- Other frontends (RisuAI, TavernAI, custom clients): use the same /v1/chat/completions schema and your abliteration.ai key (a raw-streaming sketch for custom clients follows this list).
- Enable streaming in the client for a smoother typing effect.
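For custom clients without an SDK, the stream can be consumed directly, assuming the standard OpenAI-style server-sent-events framing ("data: {...}" lines terminated by "data: [DONE]"). This sketch uses the requests library:

import json
import requests

resp = requests.post(
    "https://api.abliteration.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "abliterated-model",
        "messages": [{"role": "user", "content": "Scene: A stormy skyport."}],
        "stream": True,
    },
    stream=True,
)

for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    delta = json.loads(payload)["choices"][0]["delta"].get("content")
    if delta:
        print(delta, end="", flush=True)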
Performance & reliability checklist
Treat roleplay traffic like real-time chat: minimize latency spikes and plan for concurrency.
- Use streaming and display partial tokens to keep immersion.
- Queue long-running generations and cap max tokens per turn.
- Back off on 429s and respect Retry-After to avoid rate-limit thrash (sketched below).
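A minimal retry sketch for the last point: honor Retry-After when the server sends it, otherwise fall back to exponential backoff with jitter. The helper name and defaults are illustrative, and the sketch assumes Retry-After arrives as a seconds value.

import random
import time
import requests

def post_with_backoff(url: str, headers: dict, body: dict, max_retries: int = 5) -> requests.Response:
    # Retry only on 429; return the response for any other status.
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=body)
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    raise RuntimeError("still rate-limited after retries")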
Privacy & telemetry expectations
Keep player data private while still meeting billing requirements.
- No prompt/output retention by default; payloads are processed transiently.
- Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Apply your own logging policies if you need conversation history.
Common errors & fixes
- 401 Unauthorized: Check that your API key is set and sent as a Bearer token.
- 404 Not Found: Make sure the base URL ends with /v1 and you call /chat/completions.
- 400 Bad Request: Verify the model id and that messages are an array of { role, content } objects.
- 429 Rate limit: Back off and retry. Use the Retry-After header for pacing.
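If you want these checks in code, a small triage helper can map status codes to the fixes above. This is purely illustrative; the messages paraphrase this list.

def explain_error(status: int) -> str:
    # Paraphrases the troubleshooting list above.
    return {
        401: "Unauthorized: ensure the API key is set and sent as a Bearer token.",
        404: "Not found: base URL must end with /v1 and the path with /chat/completions.",
        400: "Bad request: check the model id and that messages is an array of {role, content} objects.",
        429: "Rate limited: back off and respect the Retry-After header.",
    }.get(status, f"Unexpected status {status}.")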