AI Governance

The Three Layers of AI Security (And Why Everyone’s Missing Layer 3)

Joe Braidwood
Co-founder & CEO
December 2025 · 8 min read

The uncomfortable reality: Many AI security stacks focus on prevention and monitoring but stop short of producing records that are easy for third parties to review later.

The Security Stack Everyone Builds

When organizations deploy AI in sensitive environments, they typically invest heavily in two areas:

  • Runtime security — prompt injection defense, content filtering, rate limiting
  • Monitoring — usage dashboards, anomaly detection, post-hoc analysis

Both are necessary. Neither is sufficient. Here's why.

Layer 1: Runtime Security

Available · Pre-inference protection

Filters, guardrails, and safety systems that evaluate inputs before they reach the model. Solutions like Protect AI, Robust Intelligence, and Anthropic's Constitutional AI fall here. This is the layer where vendors can catch threats up front.

Runtime security is well-understood because it mirrors traditional application security. Input validation. Rate limiting. Access controls. The tooling has matured rapidly.
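To make the pattern concrete, here is a minimal sketch of a pre-inference gate combining those three controls. Every name in it is illustrative, not any vendor's actual API:

```python
import re
import time
from collections import defaultdict

# Illustrative only: a toy pre-inference gate. Real guardrail products
# are far more sophisticated; all names here are hypothetical.

BLOCKED_PATTERNS = [re.compile(r"ignore (all )?previous instructions", re.I)]
RATE_LIMIT = 10          # max requests per window per user
WINDOW_SECONDS = 60
_request_log: dict[str, list[float]] = defaultdict(list)

def pre_inference_gate(user_id: str, prompt: str) -> bool:
    """Return True if the request may proceed to the model."""
    now = time.time()
    # Rate limiting: keep only timestamps inside the window, then count.
    recent = [t for t in _request_log[user_id] if now - t < WINDOW_SECONDS]
    _request_log[user_id] = recent
    if len(recent) >= RATE_LIMIT:
        return False
    # Input validation: reject prompts matching known injection patterns.
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        return False
    _request_log[user_id].append(now)
    return True
```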

Layer 2: Monitoring & Observability

Available · Post-hoc analysis

Logging, dashboards, and analytics that show what happened after the fact. LangSmith, Weights & Biases, and traditional observability tools cover this layer. This is the layer where vendors log requests.

Monitoring tells you what happened. It's essential for debugging, performance optimization, and understanding usage patterns. But it has a fundamental limitation.
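To see the limitation, consider what a typical Layer 2 log record looks like. The sketch below uses assumed field names; the point is that nothing binds the record to the inference it describes, and nothing stops a process with write access from editing it later:

```python
import json
from datetime import datetime, timezone

# Illustrative only: an ordinary append-to-file audit log.
def log_inference(path: str, request_id: str, model: str, outcome: str) -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,
        "model": model,
        "outcome": outcome,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Any process with write access can rewrite history undetected:
#   records = [json.loads(l) for l in open("audit.log")]
#   records[3]["outcome"] = "passed"   # silently alters the record
```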

Layer 3: Evidence-Grade Attestation

The gap · Third-party verifiable proof

Cryptographic evidence that safety controls actually executed for a specific inference, verifiable by external parties without access to vendor systems. This layer is still relatively immature.

This is the layer that relatively few teams have built well so far. It is also the layer that emerging regulations and customer security reviews increasingly reward.

Why Layer 3 Matters

The distinction between Layer 2 and Layer 3 is subtle but critical:

  • Layer 2 (monitoring) says: "Here's what our logs show happened"
  • Layer 3 (attestation) says: "Here's cryptographic proof that can be independently verified"

When a regulator, auditor, or investigator asks what happened during a specific AI inference, Layers 1 and 2 can only provide your internal records. Layer 3 is about producing stronger, tamper-evident records that are easier to defend later.

The legal problem: Internal logs can be challenged on integrity and chain-of-custody grounds. Signed attestations and append-only proofs improve record integrity, but they still need governance, retention, and review processes around them.

The Regulatory Tailwind

The timeline is tightening across several frameworks:

  • Colorado AI Act (June 30, 2026) — requires documentation of AI decision-making processes
  • EU AI Act Article 12 (August 2, 2026) — adds automatic logging requirements for covered high-risk AI systems, with some product-safety systems following on August 2, 2027
  • California ADMT Regulations (rule package effective January 1, 2026, with ADMT-specific business obligations phasing in from 2027 under CPPA guidance) — add risk-assessment, notice, access, and opt-out obligations for covered uses

All three increase the importance of reconstructable records, documentation, and operational evidence. The specifics differ, but none of them reward teams that can only point to policy documents after the fact.

What Layer 3 Actually Requires

Building evidence-grade attestation isn't just better logging. It requires four capabilities:

1. Guardrail Execution Trace

Tamper-evident traces showing which controls ran, in what sequence, with pass/fail status and cryptographic timestamps. Not "we have guardrails" but "guardrail X ran at timestamp Y and returned Z."
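Here is a minimal sketch of one way to build such a trace, assuming a simple hash chain in which each entry commits to the previous entry's hash, so a later edit breaks every subsequent link. A production system would add a trusted timestamping service; all field names are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_trace_entry(trace: list[dict], guardrail: str, status: str) -> dict:
    """Append a hash-chained entry recording one guardrail execution."""
    prev_hash = trace[-1]["entry_hash"] if trace else "0" * 64
    entry = {
        "guardrail": guardrail,    # which control ran
        "status": status,          # pass / fail
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,    # link to the prior entry
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    trace.append(entry)
    return entry

trace: list[dict] = []
append_trace_entry(trace, "pii_redaction", "pass")
append_trace_entry(trace, "prompt_injection_filter", "pass")
# A verifier recomputes each hash (excluding entry_hash itself);
# any edited or reordered entry breaks the chain from that point on.
```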

2. Decision Rationale

Complete reconstruction of input context: prompts, redactions, retrieved data, and configuration state tied to each output. Everything needed to understand why the output was what it was.
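One way to represent that reconstruction is a single record whose hash binds the whole input context to the output, as in this sketch. The schema is an assumption for illustration, not a published format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    prompt: str                # raw user prompt
    redactions: list[str]      # spans removed before inference
    retrieved_docs: list[str]  # IDs of retrieved context used
    config: dict               # model + guardrail configuration state
    output: str                # what the model returned

    def content_hash(self) -> str:
        """Hash binding the full context to this specific output."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```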

3. Independent Verifiability

Cryptographically signed, immutable receipts that third parties can validate without access to vendor internal systems.
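A minimal sketch of what that looks like with an Ed25519 signature, using Python's `cryptography` package. The receipt fields are hypothetical; the point is that the verifier needs only the vendor's published public key, not access to vendor systems:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the receipt at inference time.
signing_key = Ed25519PrivateKey.generate()
receipt = json.dumps(
    {"request_id": "abc-123", "trace_root": "<chain-head-hash>", "status": "pass"},
    sort_keys=True,
).encode()
signature = signing_key.sign(receipt)

# Third-party side: verify with the public key alone.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, receipt)
    print("receipt is authentic and unmodified")
except InvalidSignature:
    print("receipt was tampered with or mis-signed")
```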

4. Framework Anchoring

Direct mapping to specific control objectives in ISO 42001, NIST AI RMF, and EU AI Act Article 12.
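In practice this can start as a maintained mapping from trace events to the control objectives they evidence. The sketch below uses paraphrased control labels rather than exact clause citations; a real mapping would reference the specific clauses:

```python
# Illustrative only: labels are paraphrased, not official clause text.
CONTROL_MAP: dict[str, list[str]] = {
    "pii_redaction": [
        "ISO 42001 Annex A data-management controls",
        "NIST AI RMF MANAGE function",
    ],
    "prompt_injection_filter": [
        "NIST AI RMF MEASURE function",
    ],
    "execution_trace": [
        "EU AI Act Article 12 record-keeping",
        "ISO 42001 Annex A life-cycle logging controls",
    ],
}

def controls_for(event: str) -> list[str]:
    """Return the control objectives a given trace event evidences."""
    return CONTROL_MAP.get(event, [])
```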

The Procurement Implication

Here's what this means practically: if you're procuring AI systems for healthcare (or any high-stakes domain), your security questionnaires are probably asking the wrong questions.

Most questionnaires ask:

  • Do you have guardrails?
  • Do you log requests?
  • Are you SOC 2 compliant?

Better questions:

  • Can you prove which guardrails executed for a specific inference?
  • Can third parties verify your logs without trusting your internal systems?
  • Can you produce evidence that would satisfy EU AI Act Article 12 requirements?

The uncomfortable truth: A vendor can answer traditional security questions well and still leave hard questions unanswered about how a specific AI decision was recorded and reviewed.

What We’re Building

At GLACIS, we're focused specifically on Layer 3. Not because Layers 1 and 2 aren't important—they are—but because the market for independently reviewable AI evidence is still much less mature than the markets for filtering and observability.

If you're a healthcare AI vendor whose deals are stuck in security review, or a healthcare organization trying to figure out what to demand from your AI vendors, I'd love to talk.

Read the full white paper for the complete analysis, including practical questions to ask about evidence integrity and record reconstruction.


Want the Full Analysis?

Our white paper "The Proof Gap in Healthcare AI" goes deeper—including case studies and the four pillars of inference-level evidence.

Read the White Paper