Defense Production Act for AI models explained
The Defense Production Act (DPA) is a 1950 federal law that gives the president broad authority to direct private industry in support of national defense. Originally passed during the Korean War, it has been invoked for everything from semiconductor manufacturing to pandemic ventilator production.
In February 2026, reporting indicated that the administration was exploring whether DPA authority could be used to compel AI companies — specifically Anthropic — to provide models for defense applications. This page explains what the DPA is, how it might apply to AI, and what you should be doing operationally regardless of how this plays out.
Quick start
curl https://api.abliteration.ai/policy/chat/completions \
  -H "Authorization: Bearer $POLICY_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Policy-User: procurement-reviewer" \
  -H "X-Policy-Project: gov-workloads" \
  -d '{
    "policy_id": "lawful-use-contract-alignment",
    "model": "abliterated-model",
    "messages": [
      {
        "role": "user",
        "content": "Provide a neutral summary of potential DPA impacts and list required internal controls."
      }
    ]
  }'
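For application code, the same request can be built with Python's standard library. A minimal sketch that mirrors the curl quick start above (endpoint, headers, and payload are copied from it; sending is left to the caller):

```python
import json
import urllib.request

API_URL = "https://api.abliteration.ai/policy/chat/completions"

def build_request(api_key: str) -> urllib.request.Request:
    """Build the same request as the curl quick start, ready to send."""
    payload = {
        "policy_id": "lawful-use-contract-alignment",
        "model": "abliterated-model",
        "messages": [
            {
                "role": "user",
                "content": "Provide a neutral summary of potential DPA "
                           "impacts and list required internal controls.",
            }
        ],
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-Policy-User": "procurement-reviewer",
        "X-Policy-Project": "gov-workloads",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )

# Send with: urllib.request.urlopen(build_request(os.environ["POLICY_KEY"]))
```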
Service notes
- Pricing model: Usage-based pricing (~$5 per 1M tokens) billed on total tokens (input + output). See the API pricing page for current plans.
- Data retention: No prompt/output retention by default. Operational telemetry (token counts, timestamps, error codes) is retained for billing and reliability.
- Compatibility: OpenAI-style /v1/chat/completions request and response format with a base URL switch.
- Latency: Depends on model size, prompt length, and load. Streaming reduces time-to-first-token.
- Throughput: Team plans include priority throughput. Actual throughput varies with demand.
- Rate limits: Limits vary by plan and load. Handle 429s with backoff and respect any Retry-After header.
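The rate-limit guidance above reduces to a small retry wrapper. A sketch assuming `send` is whatever client function you use, returning an object with `.status` and `.headers` (e.g. an `http.client` response); the retry counts and delays are illustrative:

```python
import random
import time

def with_backoff(send, max_retries=5, base_delay=1.0):
    """Call `send()` and retry on 429, honoring Retry-After when present."""
    for attempt in range(max_retries + 1):
        resp = send()
        if resp.status != 429:
            return resp
        # Prefer the server's Retry-After value (assumes the delta-seconds
        # form); fall back to exponential backoff (1s, 2s, 4s, ...).
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2 ** attempt
        # Add jitter so retries from many clients don't synchronize.
        time.sleep(delay + random.uniform(0, base_delay))
    raise RuntimeError("rate limited after %d retries" % max_retries)
```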
How the DPA works and why AI is in the conversation
The DPA has three main titles. Title I lets the government require companies to prioritize government contracts over commercial ones. Title III authorizes loans, subsidies, and purchase commitments to expand production capacity. Title VII authorizes voluntary industry agreements and provides the statutory basis for the Committee on Foreign Investment in the United States (CFIUS).
The AI angle is Title I. If the administration determines that frontier AI models are essential to national defense, DPA Title I authority could theoretically be used to compel an AI company to accept and prioritize government contracts — even if the company's acceptable use policy would otherwise prohibit the use case. This has never been tested with software or AI, and legal scholars disagree on whether it would hold up.
- Title I is the relevant provision — it's about contract prioritization, not nationalization or seizure.
- The legal question of whether "production" under the DPA covers AI model inference is genuinely unsettled.
- Even if DPA authority is never formally invoked, the threat of it changes negotiating dynamics in procurement discussions.
What engineering teams should do regardless of outcome
Whether the DPA is formally invoked, used as leverage, or shelved entirely, the engineering preparation is the same. You want to be in a position where a sudden shift in provider policy or government requirements doesn't force you into a fire drill.
- Scope your policies to customer segments — create separate policy IDs for government, enterprise, and commercial workloads so you can adjust behavior for one segment without affecting others.
- Tag every request — include user and project identifiers in each API call so you can trace decisions back to specific workflows during an audit.
- Default to canary rollouts — any policy change with significant behavioral impact should go to a small percentage of traffic first. No exceptions.
- Monitor decision distributions, not just uptime — if the ratio of refused, escalated, or redacted requests suddenly shifts, something changed and you need to know about it immediately.
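The first three practices above combine into a small routing helper: segment-scoped policy IDs plus deterministic canary bucketing, so a policy change reaches a fixed slice of users first. A sketch with illustrative segment names and policy IDs (only `lawful-use-contract-alignment` appears in this page; the rest are assumptions):

```python
import hashlib

# Illustrative per-segment policy IDs; keep these separate so one
# segment can change without affecting the others.
SEGMENT_POLICIES = {
    "government": "lawful-use-contract-alignment",
    "enterprise": "enterprise-default",
    "commercial": "commercial-default",
}

def choose_policy(segment, user_id, canary_policy=None, canary_pct=5):
    """Return the policy ID for a request, routing a deterministic
    slice of users to the canary version when one is active."""
    baseline = SEGMENT_POLICIES[segment]
    if canary_policy is None:
        return baseline
    # Hash the user ID so the same user always lands in the same bucket,
    # keeping behavior consistent across a session.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return canary_policy if bucket < canary_pct else baseline
```

The same `user_id` you hash here should go into the `X-Policy-User` header, so canary membership is traceable in the decision logs.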
Preparing a briefing for leadership
If DPA discussions reach the point where your board, leadership, or major customers are asking questions, you need a single briefing document that legal, security, and engineering can all sign off on. Keep it concrete — executives want to know your exposure and your controls, not a summary of the news cycle.
- Current contract obligations — list each government or regulated customer, the relevant contract terms, and their effective/renewal dates.
- Policy-to-contract mapping — show which policy IDs and enforcement rules satisfy each contractual obligation.
- Fallback readiness — name your alternative providers, describe the traffic-shift criteria, and state whether failover has been tested.
- Audit trail — document where decision logs are exported, how long they're retained, and who has access.
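The policy-to-contract mapping above can live in version control as a simple traceability table. A sketch with made-up contract clauses, rule names, and test paths (all identifiers here are hypothetical):

```python
# Hypothetical traceability matrix: each (contract, clause) pair maps to
# the policy that enforces it and the automated test that proves it.
TRACEABILITY = {
    ("Contract-X", "4.2"): {
        "policy_id": "lawful-use-contract-alignment",
        "rules": ["refuse-export-controlled", "escalate-dual-use"],
        "test": "tests/contract_x_4_2.py",
    },
    ("Contract-Y", "7.1"): {
        "policy_id": "gov-workloads-v3",
        "rules": ["redact-pii"],
        "test": "tests/contract_y_7_1.py",
    },
}

def unmapped_clauses(required):
    """Return required (contract, clause) pairs with no enforcement
    mapping, so the briefing can state coverage as a single number."""
    return [clause for clause in required if clause not in TRACEABILITY]
```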
Common errors & fixes
- You can't trace contract requirements to actual enforcement behavior: Build a traceability matrix: each contract clause maps to specific policy rules, reason codes, and automated test cases. When someone asks "does our system comply with Section 4.2 of Contract X?" you should be able to point to a policy ID and a test result.
- Policy changes go live without legal or security sign-off: For any rule change that affects government or regulated workloads, require explicit approval from legal and security before expanding beyond canary traffic. Document the approval in your policy version history.
- You're monitoring uptime but not enforcement behavior: Set up alerts on the distribution of policy decisions (allow, rewrite, escalate, refuse) by project and customer segment. A sudden spike in refusals for a government workload is a more urgent signal than a latency increase.
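The decision-distribution alert described above amounts to comparing the current mix of policy outcomes against a baseline. A sketch using total-variation distance over decision counts (the decision labels come from this page; the 0.1 threshold is an assumption to tune per segment):

```python
from collections import Counter

DECISIONS = ("allow", "rewrite", "escalate", "refuse")

def distribution_shift(baseline, current):
    """Total-variation distance between two decision-count distributions
    (0.0 = identical mix, 1.0 = completely disjoint)."""
    def normalize(counts):
        total = sum(counts.values()) or 1
        return {d: counts.get(d, 0) / total for d in DECISIONS}
    p, q = normalize(baseline), normalize(current)
    return sum(abs(p[d] - q[d]) for d in DECISIONS) / 2

def should_alert(baseline, current, threshold=0.1):
    # Alert when the outcome mix moves more than `threshold` from the
    # baseline, e.g. a refusal spike on a government project.
    return distribution_shift(baseline, current) > threshold
```

Compute one baseline per (project, customer segment) pair; a shift that is invisible in aggregate traffic can be dramatic within a single government workload.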
Related links
- Anthropic Pentagon case explainer
- Anthropic supply chain risk explainer
- Policy-as-code for LLM behavior
- Policy Gateway integration contract
- Splunk HEC export
- Datadog Logs export
- Elastic audit log export
- Amazon S3 export
- Azure Monitor / Log Analytics export
- Rate limits and retries
- API pricing
- Privacy policy