Why Your SOC 2 Won't Protect You From AI Risk

SOC 2 and HITRUST are excellent for IT security. But they weren't designed for AI.

Joe Braidwood
Co-founder & CEO, GLACIS
7 min read

The dangerous assumption: "We have SOC 2 Type II and HITRUST certification. Our AI systems are compliant." I hear this constantly from healthcare vendors. It reflects a fundamental misunderstanding of what these frameworks actually cover—and what they don't.

What SOC 2 and HITRUST Actually Cover

SOC 2 and HITRUST are excellent frameworks for IT security. They address:

- Access controls
- Data encryption, at rest and in transit
- Incident response processes
- Change management
- Logging of system access and administrative actions

These are critical. Every healthcare organization should require them from vendors. But they were designed for traditional IT systems—databases, web applications, infrastructure. They weren't designed for AI.

The AI-Specific Risks They Miss

AI systems introduce risks that traditional IT compliance frameworks weren't built to address:

| Risk Category | SOC 2 / HITRUST | AI-Specific Need |
| --- | --- | --- |
| Model Hallucinations | Not addressed | Guardrail execution evidence |
| Prompt Injection | Not addressed | Input validation attestation |
| Training Data Bias | Not addressed | Bias testing documentation |
| Model Drift | Not addressed | Performance monitoring |
| Decision Explainability | Not addressed | Inference-level logging |
| Data Encryption | Covered | Covered |
| Access Controls | Covered | Covered |
| Incident Response | Covered | AI-specific extension needed |

The Specific Gaps

1. No Inference-Level Accountability

SOC 2 requires logging of system access and administrative actions. It doesn't require logging of individual AI inferences. When an AI makes a clinical decision, SOC 2 doesn't mandate that you capture what went in, what came out, and why.
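As a sketch of what that capture could look like, here's a minimal Python example of an append-only inference log that records what went in and what came out. All names and fields are illustrative, not a prescribed schema:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceRecord:
    """One log entry per AI inference: what went in, what came out, and context."""
    request_id: str
    model_id: str       # exact model and version that served the request
    prompt_sha256: str  # hash of the input (keeps raw PHI out of the log)
    output_sha256: str  # hash of the generated output
    timestamp: float

def log_inference(request_id: str, model_id: str, prompt: str, output: str,
                  log_path: str = "inference_log.jsonl") -> InferenceRecord:
    record = InferenceRecord(
        request_id=request_id,
        model_id=model_id,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        timestamp=time.time(),
    )
    # Append-only JSON Lines file; a production system would use WORM storage.
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

Hashing the prompt and output commits the log to exactly what the model saw and said, without storing the sensitive content itself.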

2. No Guardrail Verification

You might have guardrails. SOC 2 might even audit that they exist. But it doesn't require evidence that they executed for specific inferences. "We have a content filter" is very different from "Here's proof the content filter ran for this request."
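One way to move from "we have a content filter" to "here's proof it ran" is to emit an evidence record every time the guardrail executes. A minimal sketch, with a stand-in blocklist playing the role of a real filter and all names hypothetical:

```python
import hashlib
import json
import time

def run_with_evidence(request_id: str, text: str,
                      log_path: str = "guardrail_log.jsonl") -> bool:
    """Run a content filter and record evidence that it executed for this request."""
    blocklist = ("ssn", "credit card")  # stand-in for a real content filter
    passed = not any(term in text.lower() for term in blocklist)
    evidence = {
        "request_id": request_id,
        "guardrail": "content_filter_v1",  # which guardrail, which version
        "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "passed": passed,
        "executed_at": time.time(),
    }
    with open(log_path, "a") as f:  # append-only evidence trail
        f.write(json.dumps(evidence) + "\n")
    return passed
```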

3. No Model Behavior Documentation

SOC 2 audits your change management process. It doesn't audit whether your AI model behaves consistently, whether it's drifting over time, or whether you can reproduce its behavior for a specific input.
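To illustrate the kind of monitoring SOC 2 never asks for, here's a deliberately simplified drift check. A production system would use distributional tests such as PSI or Kolmogorov-Smirnov, but the principle, comparing live behavior against a pinned baseline, is the same:

```python
import statistics

def check_drift(baseline_scores: list[float], recent_scores: list[float],
                threshold: float = 0.1) -> bool:
    """Flag drift when mean model confidence shifts beyond a tolerance."""
    baseline_mean = statistics.mean(baseline_scores)
    recent_mean = statistics.mean(recent_scores)
    return abs(recent_mean - baseline_mean) > threshold

# Illustrative data: confidence scores captured at validation time
# versus last week's production traffic.
baseline = [0.91, 0.88, 0.93, 0.90, 0.89]
recent = [0.74, 0.71, 0.78, 0.69, 0.72]
print(check_drift(baseline, recent))  # True -> investigate before patients are affected
```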

4. No Third-Party Verifiability

SOC 2 produces an attestation that auditors verified your controls. But that attestation doesn't let a third party verify specific AI decisions. It's a statement about your processes, not evidence of specific executions.
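By contrast, evidence of specific executions can be made independently verifiable with standard digital signatures. Here's a sketch using Ed25519 from the widely used Python cryptography package; the record fields are illustrative:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The vendor signs each inference record; auditors hold only the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = json.dumps(
    {"request_id": "req-123", "model_id": "scribe-v2", "output_sha256": "d41d8cd9"},
    sort_keys=True,
).encode()
signature = private_key.sign(record)

# A third party can verify this exact execution without taking the
# vendor's word for it; verify() raises InvalidSignature on tampering.
public_key.verify(signature, record)
print("record verified")
```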

The bottom line: SOC 2 tells you the vendor has good IT hygiene. It doesn't tell you their AI won't hallucinate, leak data through prompts, or make inexplicable decisions that harm patients.

What Healthcare Organizations Actually Need

For AI specifically, healthcare organizations need evidence that addresses the unique risks of machine learning systems:

- Inference-level logging of what went into each AI decision and what came out
- Proof that guardrails executed for specific requests
- Input validation attestation against prompt injection
- Bias testing documentation
- Ongoing performance monitoring to catch model drift

The Framework Convergence

The good news: new frameworks are emerging to address AI-specific risks, including the NIST AI Risk Management Framework, ISO/IEC 42001, and the EU AI Act.

The challenge: these frameworks require evidence that most organizations can't currently produce. The infrastructure to capture, store, and verify AI behavior at the inference level doesn't exist in most deployments.
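For a sense of what that infrastructure involves at its simplest, here's a sketch of a tamper-evident, hash-chained log. Each entry commits to the previous one, so any after-the-fact edit is detectable:

```python
import hashlib
import json

def append_chained(records: list[dict], entry: dict) -> list[dict]:
    """Append an entry whose hash commits to the previous entry's hash."""
    prev_hash = records[-1]["entry_sha256"] if records else "0" * 64
    body = json.dumps({"prev": prev_hash, **entry}, sort_keys=True)
    records.append({"prev": prev_hash, **entry,
                    "entry_sha256": hashlib.sha256(body.encode()).hexdigest()})
    return records

log: list[dict] = []
append_chained(log, {"request_id": "req-1", "model_id": "scribe-v2"})
append_chained(log, {"request_id": "req-2", "model_id": "scribe-v2"})
# Altering req-1 after the fact breaks every later hash, so an audit can detect it.
```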

Beyond SOC 2

Our white paper "The Proof Gap in Healthcare AI" details exactly what AI-specific evidence looks like—and the four pillars every healthcare organization should demand.


What to Ask Your Vendors

When evaluating AI vendors, don't stop at "Are you SOC 2 certified?" Ask:

- Can you produce logs for individual inferences, including inputs, outputs, and the model version that served them?
- Can you prove that your guardrails executed for a specific request, not just that they exist?
- How do you monitor for model drift, and can you reproduce your model's behavior for a given input?
- Can a third party independently verify a specific AI decision, not just your processes?

The vendors who can answer these questions are building for the future. The vendors who point to their SOC 2 report are building for 2019.

The Complementary Approach

To be clear: SOC 2 and HITRUST remain essential. You should absolutely require them. They're table stakes for any vendor handling sensitive data.

But for AI systems, they're the beginning, not the end. Healthcare organizations need both traditional IT compliance AND AI-specific evidence. The vendors who understand this distinction are the ones worth talking to.

For the complete framework on what to demand from AI vendors, read our white paper.
