
The Three Layers of AI Security (And Why Everyone's Missing Layer 3)

Most AI security tools solve the problems vendors want to sell, not the problems regulators will ask about.

Joe Braidwood
Co-founder & CEO, GLACIS
8 min read

The uncomfortable reality: Most AI security tools solve the problems vendors want to sell, not the problems regulators will ask about. After dozens of conversations with CISOs at healthcare organizations, I've identified a critical blind spot that almost nobody is addressing.

The Security Stack Everyone Builds

When organizations deploy AI in sensitive environments, they typically invest heavily in two areas: runtime security and monitoring & observability.

Both are necessary. Neither is sufficient. Here's why.

Layer 1: Runtime Security

Available · Pre-inference protection

Filters, guardrails, and safety systems that evaluate inputs before they reach the model. Solutions like Protect AI, Robust Intelligence, and Anthropic's constitutional AI fall here. Vendors can catch threats.

Runtime security is well-understood because it mirrors traditional application security. Input validation. Rate limiting. Access controls. The tooling has matured rapidly.
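
To make that concrete, here's a minimal sketch of what a Layer 1 check looks like in code: a few pre-inference gates that run before anything reaches the model. The guardrail names, limits, and patterns are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal sketch of Layer 1 (runtime) checks; names, limits, and patterns are illustrative.
import re
import time
from collections import defaultdict

AUTHORIZED_USERS = {"dr_chen", "dr_patel"}   # stand-in for a real access-control system
RATE_LIMIT_PER_MINUTE = 30                   # hypothetical per-user budget
_recent_requests = defaultdict(list)         # user_id -> timestamps of recent requests

def allow_inference(user_id: str, prompt: str) -> bool:
    """Pre-inference gate: access control, rate limiting, input validation."""
    if user_id not in AUTHORIZED_USERS:
        return False                                      # access control
    now = time.time()
    window = [t for t in _recent_requests[user_id] if now - t < 60]
    _recent_requests[user_id] = window
    if len(window) >= RATE_LIMIT_PER_MINUTE:
        return False                                      # rate limit
    if len(prompt) > 20_000:
        return False                                      # oversized input
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", prompt):
        return False                                      # naive SSN pattern: block or redact
    _recent_requests[user_id].append(now)
    return True                                           # safe to forward to the model
```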

Layer 2: Monitoring & Observability

Available · Post-hoc analysis

Logging, dashboards, and analytics that show what happened after the fact. LangSmith, Weights & Biases, and traditional observability tools cover this layer. Vendors can log requests.

Monitoring tells you what happened. It's essential for debugging, performance optimization, and understanding usage patterns. But it has a fundamental limitation.
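
For contrast, here's roughly what Layer 2 produces: a structured record written after the inference happens. The schema below is my own sketch, not LangSmith's or anyone else's.

```python
# Sketch of Layer 2 post-hoc logging; the schema is illustrative.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def record_inference(model: str, prompt: str, output: str, latency_ms: float) -> None:
    """Write a structured log entry describing what happened, after the fact."""
    logger.info(json.dumps({
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model": model,
        "prompt_chars": len(prompt),     # metadata only, not the raw PHI
        "output_chars": len(output),
        "latency_ms": latency_ms,
    }))
```

That record is invaluable for debugging. It is also just a row your own system wrote, which is exactly the limitation the next layer exists to address.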

Layer 3: Evidence-Grade Attestation

The Gap · Third-party verifiable proof

Cryptographic evidence that safety controls actually executed for a specific inference, verifiable by external parties without access to vendor systems. No vendor can prove it.

This is the layer that almost nobody is building. And it's the layer that regulators will increasingly demand.

Why Layer 3 Matters

The distinction between Layer 2 and Layer 3 is subtle but critical:

When a regulator, auditor, or plaintiff's attorney asks what happened during a specific AI inference, Layers 1 and 2 can only provide your internal records. Layer 3 provides evidence that would hold up in court.

The legal problem: internal logs are editable and easy to attack on credibility in discovery. Cryptographically signed attestations with Merkle proofs are neither: any alteration is detectable, and their integrity doesn't rest on your own testimony.
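
A minimal sketch of why that holds: hash the log entries into a Merkle root and sign the root, so any later edit is detectable by anyone holding the public key. This assumes an Ed25519 key pair and the Python `cryptography` package; the entries and key handling are illustrative only.

```python
# Sketch: hash log entries into a Merkle root and sign it, so any later edit is detectable.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

entries = [b'{"guardrail":"phi_redaction","result":"pass"}',
           b'{"guardrail":"prompt_injection","result":"pass"}']
root = merkle_root(entries)

signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(root)               # published alongside the root

# Anyone with the public key can check the root; changing any entry changes the root.
signing_key.public_key().verify(signature, root)  # raises InvalidSignature on tampering
```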

The Regulatory Tailwind

Three major regulatory frameworks take effect in the next 18 months, and all three share a common thread: you must be able to reconstruct what your AI did and why. Not in aggregate. For specific decisions. With evidence that third parties can verify.

Want the Full Analysis?

Our white paper "The Proof Gap in Healthcare AI" goes deeper on this problem—including case studies and the four pillars of inference-level evidence.

Read the White Paper

What Layer 3 Actually Requires

Building evidence-grade attestation isn't just better logging. It requires four capabilities:

1. Guardrail Execution Trace

Tamper-evident traces showing which controls ran, in what sequence, with pass/fail status and cryptographic timestamps. Not "we have guardrails" but "guardrail X ran at timestamp Y and returned Z."
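
One simple way to get tamper evidence is to hash-chain the trace, so every entry commits to everything before it. This is a sketch under my own assumptions, not a description of any specific product; the guardrail names are made up.

```python
# Sketch of a tamper-evident guardrail trace: each entry commits to the previous one.
import hashlib
import json
import time

def append_trace(trace: list[dict], guardrail: str, passed: bool) -> list[dict]:
    prev_hash = trace[-1]["entry_hash"] if trace else "0" * 64
    entry = {
        "guardrail": guardrail,          # e.g. "phi_redaction" (illustrative name)
        "passed": passed,
        "ts": time.time(),               # would be a signed / RFC 3161 timestamp in practice
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    trace.append(entry)
    return trace

trace: list[dict] = []
append_trace(trace, "phi_redaction", True)
append_trace(trace, "prompt_injection_scan", True)
# Editing, dropping, or reordering any entry invalidates every hash after it.
```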

2. Decision Rationale

Complete reconstruction of input context: prompts, redactions, retrieved data, and configuration state tied to each output. Everything needed to understand why the output was what it was.
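
In data terms, that can be a record of hashes that commits to every input the model saw, keyed to the output it produced, without storing raw PHI. The fields below are an illustrative sketch, not a prescribed schema.

```python
# Sketch of a decision-rationale record: hashes commit to context without storing raw PHI.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

rationale = {
    "output_hash": sha256_hex(b"<model output>"),
    "prompt_hash": sha256_hex(b"<redacted prompt>"),
    "redactions": ["ssn", "mrn"],                      # what was removed, not the values
    "retrieved_doc_hashes": [sha256_hex(b"<doc 1>"), sha256_hex(b"<doc 2>")],
    "config_hash": sha256_hex(b"model=example-v1;temp=0.2;guardrails=v1.4"),  # illustrative
}
# Given the same inputs later, each hash can be recomputed and compared,
# reconstructing why the output was what it was.
```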

3. Independent Verifiability

Cryptographically signed, immutable receipts that third parties can validate without access to vendor internal systems.
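
The property to aim for: verification needs only the receipt, its signature, and the vendor's published public key, never a login to the vendor's systems. A minimal sketch, again assuming Ed25519 and the `cryptography` package:

```python
# Sketch of third-party verification: only the public key and the receipt are needed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_receipt(public_key_bytes: bytes, receipt: bytes, signature: bytes) -> bool:
    """Return True if the receipt was signed by the holder of the published key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, receipt)
        return True
    except InvalidSignature:
        return False
```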

4. Framework Anchoring

Direct mapping to specific control objectives in ISO 42001, NIST AI RMF, and EU AI Act Article 12.
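
In practice that can be as simple as each attestation artifact carrying references to the control objectives it supports. The mapping below is illustrative, not an authoritative clause-by-clause crosswalk.

```python
# Illustrative mapping of attestation artifacts to framework references (not authoritative).
CONTROL_MAP = {
    "guardrail_execution_trace": ["EU AI Act Art. 12 (record-keeping)",
                                  "NIST AI RMF: MEASURE"],
    "decision_rationale":        ["EU AI Act Art. 12 (record-keeping)",
                                  "NIST AI RMF: MAP"],
    "signed_receipt":            ["ISO/IEC 42001 (AI management system evidence)",
                                  "NIST AI RMF: GOVERN"],
}
# Each signed receipt would carry the relevant CONTROL_MAP entries as metadata.
```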

The Procurement Implication

Here's what this means practically: if you're procuring AI systems for healthcare (or any high-stakes domain), your security questionnaires are probably asking the wrong questions.

Most questionnaires ask Layer 1 and Layer 2 questions: Do you have guardrails? Do you log requests? Do you have monitoring dashboards?

Better questions target Layer 3: Can you prove that a specific guardrail ran for a specific inference? Can a third party verify that proof without access to your internal systems?

The uncomfortable truth: Most vendors will answer "yes" to the first set of questions and stumble on the second. That gap is where organizational risk lives.

What We're Building

At GLACIS, we're focused specifically on Layer 3. Not because Layers 1 and 2 aren't important—they are—but because plenty of excellent companies are solving those problems. Almost nobody is building the evidence infrastructure that healthcare organizations will need when regulators come asking.

If you're a healthcare AI vendor whose deals are stuck in security review, or a healthcare organization trying to figure out what to demand from your AI vendors, I'd love to talk.

Read the full white paper for the complete analysis, including case studies where the absence of Layer 3 evidence turned recoverable situations into catastrophic ones.
