Why Your SOC 2 Won’t Protect You From AI Risk
The dangerous assumption: "We have SOC 2 Type II and HITRUST certification. Our AI systems are compliant." I hear this constantly from healthcare vendors. It reflects a fundamental misunderstanding of what these frameworks actually cover—and what they don't.
What SOC 2 and HITRUST Actually Cover
SOC 2 and HITRUST are excellent frameworks for IT security. They address:
- Access controls and identity management
- Network security and encryption
- Change management and deployment processes
- Incident response and business continuity
- Physical security and data center controls
These are critical. Every healthcare organization should require them from vendors. But they were designed for traditional IT systems—databases, web applications, infrastructure. They weren't designed for AI.
The AI-Specific Risks They Miss
AI systems introduce risks that traditional IT compliance frameworks weren't built to address:
| Risk Category | Covered by SOC 2 / HITRUST? | AI-Specific Evidence Needed |
|---|---|---|
| Model Hallucinations | No | Guardrail execution evidence |
| Prompt Injection | No | Input validation attestation |
| Training Data Bias | No | Bias testing documentation |
| Model Drift | No | Performance monitoring |
| Decision Explainability | No | Inference-level logging |
| Data Encryption | Yes | None beyond existing controls |
| Access Controls | Yes | None beyond existing controls |
| Incident Response | Yes | AI-specific extension |
The Specific Gaps
1. No Inference-Level Accountability
SOC 2 requires logging within the scoped control environment, such as system access and administrative actions. By itself, it is not a per-inference AI traceability framework. When an AI system makes a clinical recommendation, a standard SOC 2 report does not automatically require a record of every prompt, output, and model-specific decision path.
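To make this concrete, here is a minimal sketch of what a per-inference audit record could look like. The schema and the `log_inference` helper are hypothetical illustrations, not part of SOC 2 or any other standard; hashing the prompt and output keeps PHI out of the audit trail while still tying each record to the exact request.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(model_id: str, model_version: str, prompt: str,
                  output: str, guardrail_results: dict) -> dict:
    """Build one audit record per AI inference (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Store hashes, not raw text, so the trail itself holds no PHI
        # but can still be matched against a retained prompt/output.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "guardrails": guardrail_results,  # e.g. {"content_filter": "pass"}
    }
    # Fingerprint the whole record for tamper-evidence downstream.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["record_sha256"] = hashlib.sha256(canonical).hexdigest()
    return record
```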
2. No Guardrail Verification
You might have guardrails. A SOC 2 report may cover whether related controls exist and are operated within scope. But it is not designed to prove that a specific AI guardrail executed for a specific inference. "We have a content filter" is different from "Here is proof the content filter ran for this request."
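One way to turn "we have a content filter" into "here is proof it ran for this request" is to have every guardrail emit its own execution record, keyed to the request. A minimal sketch, with hypothetical names and a toy check:

```python
import hashlib
import time

def run_guardrail(name: str, check_fn, request_id: str, text: str) -> dict:
    """Run one guardrail and return an execution record tied to the request.
    `check_fn` is any callable returning True when the text passes."""
    started = time.time()
    passed = check_fn(text)
    return {
        "request_id": request_id,
        "guardrail": name,
        "result": "pass" if passed else "block",
        "latency_ms": round((time.time() - started) * 1000, 2),
        # Bind the record to the exact input that was checked.
        "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }

# Evidence that the content filter executed for request "req-123":
trace = run_guardrail("content_filter",
                      lambda t: "ssn" not in t.lower(),
                      "req-123",
                      "Summarize this discharge note for the patient.")
```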
3. No Model Behavior Documentation
SOC 2 audits your change-management and control environment. Unless the engagement is scoped unusually broadly, it is not an assessment of model behavior, drift, or reproducibility for a specific input.
4. No Third-Party Verifiability
SOC 2 produces an attestation that auditors verified your controls. That attestation is useful, but it normally does not let a third party verify a specific AI decision. It is a statement about the control environment, not evidence of a specific model execution.
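A sketch of what third-party verifiability could look like in practice: the vendor signs each audit record at write time and shares only the public key, so an auditor can check integrity without access to internal systems. This assumes the `cryptography` package; note that signatures prove a record was not altered, not that no records were omitted (completeness usually needs hash-chaining or an external anchor on top).

```python
# pip install cryptography  (third-party library, assumed available)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor holds the private key; customers and auditors get the public key.
signing_key = Ed25519PrivateKey.generate()
record_bytes = b'{"request_id": "req-123", "model_version": "2.1.0"}'
signature = signing_key.sign(record_bytes)

# A third party holding only the public key can confirm this exact record
# came from the vendor's signer, without trusting the vendor's internal logs.
public_key = signing_key.public_key()
public_key.verify(signature, record_bytes)  # raises InvalidSignature if tampered
```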
The bottom line: SOC 2 tells you something meaningful about a vendor's IT control environment. It is not, by itself, a guarantee about model safety, hallucination rates, prompt-handling behavior, or clinical suitability.
What Healthcare Organizations Actually Need
For AI specifically, healthcare organizations need evidence that addresses the unique risks of machine learning systems:
- Guardrail execution traces — proof that safety controls ran for specific inferences
- Model version attestation — cryptographic proof of which model version processed a request (see the sketch after this list)
- Decision reconstruction capability — ability to recreate the context for any AI output
- Bias and fairness documentation — evidence of testing across demographic groups
- Third-party verifiable evidence — not just attestations, but proof that can be independently validated
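As promised above, here is a minimal sketch of model version attestation: fingerprint the served model artifact and stamp that digest onto every inference record. The function below is illustrative, not a standard; anyone who later holds the same artifact can recompute the digest and confirm which weights handled a request.

```python
import hashlib

def model_fingerprint(weights_path: str) -> str:
    """SHA-256 digest of the model artifact: a stable, recomputable
    answer to 'which model version processed this request?'"""
    h = hashlib.sha256()
    with open(weights_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Stamp every inference record with the fingerprint of the weights that
# actually served it, alongside the human-readable version string.
```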
The Framework Convergence
The good news: new frameworks are emerging to address AI-specific risks:
- NIST AI RMF — the voluntary U.S. framework that is becoming the de facto baseline for AI risk management
- ISO/IEC 42001 — the certifiable standard for AI management systems
- EU AI Act — binding regulatory requirements for high-risk AI, a category that includes many healthcare uses
The challenge: these frameworks require evidence that many organizations still struggle to produce. Inference-level capture, retention, and verification remain immature in a large share of deployments.
What to Ask Your Vendors
When evaluating AI vendors, don't stop at "Are you SOC 2 certified?" Ask:
- Can you show me which guardrails executed for a specific inference?
- How do you prove model version for historical requests?
- Can a third party verify your AI's behavior without trusting your internal logs?
- What's your mapping to NIST AI RMF controls?
- How are you preparing for EU AI Act Article 12 requirements?
The strongest vendors will be able to answer both sets of questions: the traditional security questions and the newer ones about how specific AI behavior is recorded and reviewed.
The Complementary Approach
To be clear: SOC 2 and HITRUST remain essential. You should absolutely require them. They're table stakes for any vendor handling sensitive data.
But for AI systems, they're the beginning, not the end. Healthcare organizations need both traditional IT compliance AND AI-specific evidence. The vendors who understand this distinction are the ones worth talking to.
For the complete framework on what to demand from AI vendors, read our white paper.
Beyond SOC 2
Our white paper "The Proof Gap in Healthcare AI" details exactly what AI-specific evidence looks like—and the four pillars every healthcare organization should demand.
Read the White Paper