Understanding HIPAA for AI Systems
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) establishes national standards for protecting sensitive patient health information. While HIPAA predates modern AI by decades, its requirements apply fully to AI systems that process, store, or transmit Protected Health Information (PHI). Understanding how HIPAA's Privacy Rule, Security Rule, and Breach Notification Rule apply to AI is essential for any healthcare AI deployment.
The Three HIPAA Rules
The Privacy Rule (45 CFR Part 164, Subparts A and E) establishes standards for who may access PHI and under what circumstances. For AI systems, the Privacy Rule governs:
- What patient information can be used to train AI models
- When patient authorization is required for AI use cases
- Minimum necessary standards for AI access to PHI
- Patient rights to access, amend, and receive accounting of disclosures
The Security Rule (45 CFR Part 164, Subparts A and C) requires covered entities and business associates to implement administrative, physical, and technical safeguards to protect electronic PHI (ePHI). For AI systems, the Security Rule governs:
- Encryption requirements for PHI in AI pipelines
- Access controls for AI system users and administrators
- Audit logging of AI inference activity
- Integrity controls ensuring PHI is not improperly altered
- Transmission security for API calls to AI services
The Breach Notification Rule (45 CFR Part 164, Subpart D) requires notification to affected individuals, HHS, and in some cases media outlets when unsecured PHI is breached. For AI systems, this includes:
- Unauthorized access to training datasets containing PHI
- Disclosure of PHI through AI model outputs or memorization
- Security incidents affecting AI infrastructure
- Vendor breaches involving PHI processed by AI systems
Covered Entities vs. Business Associates
HIPAA distinguishes between Covered Entities (healthcare providers, health plans, and healthcare clearinghouses that transmit health information electronically) and Business Associates (entities that perform functions on behalf of covered entities involving PHI access).
Most AI vendors fall into the Business Associate category. When an AI company processes PHI on behalf of a hospital, clinic, or health plan, it becomes a Business Associate and must:
- Sign a Business Associate Agreement (BAA) with the covered entity
- Comply directly with applicable Security Rule requirements
- Report security incidents and breaches to the covered entity
- Ensure any subcontractors (sub-Business Associates) also comply
Key Distinction: HIPAA Compliance vs. HIPAA Certification
There is no government-issued "HIPAA certification" for AI tools or any other technology. HHS does not certify, endorse, or approve products as HIPAA compliant. When vendors claim "HIPAA certification," they typically mean SOC 2 certification, HITRUST certification, or self-attestation of compliance. True HIPAA compliance is an ongoing operational state that includes policies, procedures, technical controls, training, and continuous monitoring.
Protected Health Information in AI Systems
Understanding what constitutes PHI is fundamental to HIPAA compliant AI deployment. Protected Health Information includes any individually identifiable health information that is created, received, maintained, or transmitted by a covered entity or business associate.
The 18 HIPAA Identifiers
HIPAA's Safe Harbor de-identification method (45 CFR 164.514(b)(2)) specifies 18 types of identifiers that must be removed before data is considered de-identified; a toy redaction sketch follows the table:
HIPAA Safe Harbor Identifiers
| # | Identifier Type | AI System Considerations |
|---|---|---|
| 1 | Names | Must be stripped from training data and prompts |
| 2 | Geographic subdivisions smaller than a state | Includes street address, city, county, and ZIP code (the first three digits of a ZIP may be retained if the corresponding geographic unit contains more than 20,000 people) |
| 3 | Dates (except year) | Birth dates, admission dates, discharge dates, death dates. Ages over 89 must be aggregated to "90+" |
| 4 | Phone numbers | Including contact numbers in clinical notes |
| 5 | Fax numbers | Still common in healthcare workflows |
| 6 | Email addresses | Including patient portal credentials |
| 7 | Social Security numbers | Critical identifier requiring redaction |
| 8 | Medical record numbers | EHR system identifiers |
| 9 | Health plan beneficiary numbers | Insurance member IDs |
| 10 | Account numbers | Billing and financial identifiers |
| 11 | Certificate/license numbers | Driver's licenses, professional licenses |
| 12 | Vehicle identifiers and serial numbers | Including license plates |
| 13 | Device identifiers and serial numbers | Medical devices, implants |
| 14 | Web URLs | Patient portal links, imaging URLs |
| 15 | IP addresses | EHR access logs, telehealth sessions |
| 16 | Biometric identifiers | Fingerprints, voice prints, retinal scans, facial geometry |
| 17 | Full-face photographs | Clinical images, ID photos |
| 18 | Any other unique identifying number, characteristic, or code | Catch-all for identifiers not listed above |
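For illustration, here is a minimal pattern-based redaction sketch for a few of these identifiers in Python. The regexes and the assumed MRN format are simplified examples only; production de-identification requires clinical NER, dictionary matching, and validated tooling, since regexes alone will miss names, addresses, and free-text dates.

```python
import re

# Illustrative patterns for a handful of the 18 identifiers. These are
# simplified examples, not a complete Safe Harbor implementation.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),  # assumed local MRN format
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed type tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Pt MRN 48291734, reachable at (555) 201-7733 or j.doe@example.com"))
# -> "Pt [MRN], reachable at [PHONE] or [EMAIL]"
```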
PHI in AI Training Data
Using PHI to train AI models requires careful consideration of HIPAA requirements. There are three primary approaches:
1. De-identification: Remove all 18 identifiers per Safe Harbor method, or use Expert Determination (45 CFR 164.514(b)(1)) where a qualified statistical expert certifies that re-identification risk is very small. De-identified data is no longer PHI and is not subject to HIPAA.
2. Authorization: Obtain individual patient authorization to use their PHI for AI training. This is rarely practical at scale but may be appropriate for specialized research use cases.
3. Healthcare Operations: Under 45 CFR 164.506, covered entities may use PHI for healthcare operations without patient authorization. Quality improvement, developing clinical guidelines, and training algorithms that improve care quality may qualify. However, sharing PHI with external AI vendors for training typically requires a BAA and may have additional restrictions.
Critical Warning: Model Memorization
Large language models can memorize and reproduce training data, including PHI. Research has demonstrated extraction of verbatim training data from models like GPT-2, GPT-3, and others. If you train or fine-tune models on PHI, consider differential privacy techniques, membership inference testing, and ongoing monitoring for data extraction attacks. Model memorization of PHI could constitute a breach.
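One way to monitor for this memorization risk is canary testing: plant unique marker strings in fine-tuning data before training, then probe whether the model reproduces them verbatim. Below is a minimal sketch, where `complete(prompt)` is a placeholder for your model's inference call, not a real API:

```python
import secrets

def make_canary() -> str:
    """Unique marker string planted in fine-tuning data before training."""
    return f"CANARY-{secrets.token_hex(8)}"

def check_memorization(complete, canaries: list[str], probes_per_canary: int = 20) -> list[str]:
    """Return canaries the model reproduces verbatim when prompted with their prefix.

    `complete` is a stand-in for the model's text-completion call.
    """
    leaked = []
    for canary in canaries:
        prefix = canary[: len(canary) // 2]
        for _ in range(probes_per_canary):
            if canary in complete(f"Continue this string exactly: {prefix}"):
                leaked.append(canary)  # verbatim reproduction = memorization signal
                break
    return leaked
```

A leaked canary is strong evidence that real training records could be extracted the same way, which is exactly the breach scenario this warning describes.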
PHI in AI Prompts and Outputs
Beyond training data, PHI commonly enters AI systems through:
- User prompts: Clinicians entering patient information for clinical decision support, documentation, or coding assistance
- System context: Automated systems that provide patient records as context for AI analysis
- AI outputs: Generated text, predictions, or recommendations that may contain or derive from PHI
- Logging: API logs, debugging information, and audit trails that capture PHI in transit
Each of these PHI touchpoints must be protected with appropriate Security Rule safeguards.
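For the logging touchpoint in particular, here is a minimal sketch using Python's standard `logging` module to scrub obvious identifiers before log records are written; the single SSN pattern shown is illustrative, not exhaustive:

```python
import logging
import re

# Sketch: scrub obvious identifiers from log records before they are emitted.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class PHIRedactionFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SSN_RE.sub("[REDACTED-SSN]", str(record.msg))
        return True  # keep the record, now scrubbed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_gateway")
logger.addFilter(PHIRedactionFilter())
logger.warning("Prompt rejected for review: contains SSN 123-45-6789")
# Logged as: "Prompt rejected for review: contains SSN [REDACTED-SSN]"
```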
Business Associate Agreements for AI
The Business Associate Agreement (BAA) is the legal foundation of HIPAA compliant AI. When an AI vendor will receive, create, maintain, or transmit PHI on behalf of a covered entity, it becomes a Business Associate, and a BAA is mandatory.
Required BAA Provisions
Under 45 CFR 164.504(e), a BAA must include provisions that:
- Establish permitted and required uses and disclosures of PHI
- Require the Business Associate to use appropriate safeguards and comply with the Security Rule
- Require reporting of security incidents and breaches
- Ensure any subcontractors agree to the same restrictions
- Make PHI available for patient access and amendment requests
- Make internal practices available to HHS for compliance review
- Return or destroy PHI at termination
- Authorize termination if the Business Associate violates the agreement
AI-Specific BAA Considerations
Standard BAA templates may not adequately address AI-specific concerns. When negotiating BAAs with AI vendors, ensure coverage of:
AI-Specific BAA Provisions
- Model training: Explicit prohibition or permission for using PHI to train models, with requirements for de-identification if permitted
- Data retention: How long prompts, outputs, and logs containing PHI are retained, and procedures for deletion
- Sub-processors: Identification of sub-Business Associates (cloud providers, inference infrastructure) and their BAA coverage
- Data residency: Geographic location of PHI processing and storage, particularly for international vendors
- Audit access: Right to audit AI-specific controls, including model behavior and data handling
- Breach definitions: Whether model memorization or extraction of training data constitutes a breach
AI Vendor BAA Availability
Not all AI vendors offer BAAs, and BAA availability varies by product tier:
Major AI Vendor BAA Status (January 2026)
| Provider | BAA Available | Products Covered | Notes |
|---|---|---|---|
| Microsoft Azure | Yes | Azure OpenAI, Cognitive Services, Azure ML | Part of standard Azure BAA; covers deployed GPT, image-generation, and Whisper models |
| Amazon Web Services | Yes | Amazon Bedrock, SageMaker, Comprehend Medical | BAA covers Claude, Llama, Titan, and other Bedrock models |
| Google Cloud | Yes | Vertex AI, Healthcare API, Cloud Natural Language | Includes Gemini models; Healthcare AI products specifically designed for HIPAA |
| OpenAI (Direct) | Enterprise Only | ChatGPT Enterprise, API (Enterprise tier) | No BAA for ChatGPT Plus, Team, or the standard API; an enterprise agreement is required. |
| Anthropic (Claude) | Enterprise Only | Claude API (qualifying customers) | Available for enterprise customers meeting volume and use case requirements |
| Meta (Llama) | N/A (Open Source) | Self-hosted Llama models | Open source models can be self-hosted with your own HIPAA controls |
| Mistral AI | Limited | Enterprise deployments | Available for enterprise customers; verify current status |
| Cohere | Yes | Cohere Enterprise | Enterprise tier includes BAA option |
BAA ≠ Compliance
Having a signed BAA is necessary but not sufficient for HIPAA compliance. The BAA shifts some liability to the vendor, but the covered entity remains responsible for ensuring the AI is used appropriately, proper safeguards are in place, and the deployment meets minimum necessary standards. You cannot outsource your compliance responsibility through a BAA.
Security Rule Requirements for AI Systems
The HIPAA Security Rule (45 CFR Part 164, Subpart C) requires covered entities and business associates to implement safeguards ensuring the confidentiality, integrity, and availability of electronic PHI. For AI systems, these requirements translate to specific technical and organizational controls.
Administrative Safeguards (§164.308)
Administrative safeguards are policies and procedures governing AI system deployment:
- Risk Analysis (§164.308(a)(1)(ii)(A)): Conduct thorough risk analysis of AI systems, including data flows, access patterns, and potential threats. Document risks specific to AI—model extraction, prompt injection, training data exposure.
- Risk Management (§164.308(a)(1)(ii)(B)): Implement measures to reduce identified risks to reasonable levels. For AI, this includes input validation, output filtering, and monitoring for anomalous behavior.
- Workforce Training (§164.308(a)(5)): Train staff on AI-specific HIPAA requirements—what can and cannot be entered into AI prompts, how to handle AI outputs containing PHI, incident reporting procedures.
- Contingency Planning (§164.308(a)(7)): Include AI systems in disaster recovery and business continuity plans. Consider AI service outages, vendor failures, and data recovery procedures.
Physical Safeguards (§164.310)
Physical safeguards protect the physical infrastructure where AI systems operate:
- Facility Access Controls (§164.310(a)): For on-premise AI deployments, limit physical access to servers and storage. For cloud deployments, verify vendor's physical security controls.
- Workstation Security (§164.310(c)): Protect workstations used to access AI systems. Consider screen privacy, automatic lockout, and restrictions on copying AI outputs containing PHI.
- Device and Media Controls (§164.310(d)): Secure disposal of hardware that processed PHI through AI systems, including GPUs and storage devices.
Technical Safeguards (§164.312)
Technical safeguards are the security technologies protecting AI systems and PHI:
Access Controls (§164.312(a))
- Unique User Identification: Each AI system user must have unique credentials—no shared accounts.
- Emergency Access Procedures: Document how to access AI systems in emergencies while maintaining accountability.
- Automatic Logoff: AI interfaces must time out after inactivity periods appropriate to the clinical environment.
- Encryption and Decryption: Implement encryption for PHI stored in AI system databases, caches, and logs.
Audit Controls (§164.312(b))
Implement mechanisms to record and examine AI system activity. This is particularly important for AI systems and often under-implemented; see the dedicated audit logging section below.
Integrity Controls (§164.312(c))
- Data Integrity: Protect PHI from improper alteration or destruction. Ensure AI outputs don't corrupt source records.
- Authentication: Verify that PHI received from AI systems has not been altered in transit.
Transmission Security (§164.312(e))
- Encryption: All API calls to AI services must use TLS 1.2 or higher. Enforce certificate validation and reject protocol downgrade attempts (a minimal client sketch follows this list).
- Integrity Controls: Implement message authentication to detect tampering with PHI in transit.
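As a concrete illustration of these transmission requirements, here is a minimal Python client sketch. It assumes the `httpx` library (any client that accepts an `ssl.SSLContext` works), and the endpoint URL is hypothetical:

```python
import ssl
import httpx  # assumed HTTP client; any client accepting an ssl.SSLContext works

# Build a context that refuses anything below TLS 1.2 and validates certificates.
ctx = ssl.create_default_context()            # hostname and certificate checks on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # rejects protocol downgrade below TLS 1.2

client = httpx.Client(verify=ctx)
# response = client.post("https://ai.example.com/v1/infer", json={"prompt": "..."})  # hypothetical endpoint
```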
Encryption Standards
While HIPAA doesn't mandate specific encryption algorithms, OCR guidance and industry standards establish clear expectations (an at-rest encryption sketch follows the list):
Recommended Encryption Standards for HIPAA AI
- Data at Rest: AES-256 encryption for all PHI in databases, file storage, caches, and logs
- Data in Transit: TLS 1.2 minimum (TLS 1.3 preferred) for all API communications with AI services
- Key Management: Use HSMs or cloud KMS for encryption key storage; implement key rotation procedures
- Certificate Management: Validate TLS certificates; implement certificate pinning where appropriate
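To make the at-rest recommendation concrete, here is a minimal AES-256-GCM sketch using the `cryptography` package. In practice the key would come from an HSM or cloud KMS rather than being generated locally, and key rotation would be handled by that service:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stand-in for a KMS-managed key
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes, record_id: bytes) -> bytes:
    nonce = os.urandom(12)  # unique 96-bit nonce per record
    # record_id is bound as associated data, so ciphertext can't be swapped between records
    return nonce + aesgcm.encrypt(nonce, plaintext, record_id)

def decrypt_record(blob: bytes, record_id: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, record_id)  # raises on tampering
```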
Audit Logging for AI Systems: The Compliance Gap
Audit logging is where most AI deployments fall short of HIPAA requirements. The Security Rule requires audit controls that record and examine activity in information systems containing or using ePHI (§164.312(b)). For AI systems, this creates specific challenges that standard logging infrastructure doesn't address.
The AI Logging Problem
Traditional application logging captures access events—who logged in, what records they viewed. But AI systems require inference-level logging that captures:
- What PHI was sent to the AI — The actual content of prompts and context windows
- What the AI returned — Generated text, predictions, or recommendations
- Who initiated the query — User identification and authentication context
- When and where — Timestamps, session identifiers, client information
- Which model was used — Model version, configuration parameters
- What happened next — Whether outputs were used, modified, or discarded
Most AI platforms provide only basic access logs—they record that an API call occurred, but not the content of that call. This creates a fundamental compliance gap: if you can't demonstrate what PHI was processed and what the AI output was, you can't prove compliance or respond effectively to audits or incidents.
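A minimal sketch of what an inference-level record might look like is shown below; the field names mirror the list above but are illustrative and should be adapted to your own schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InferenceAuditRecord:
    user_id: str      # who initiated the query
    session_id: str   # when and where
    timestamp: str
    model: str        # which model and version
    prompt: str       # what PHI was sent to the AI (encrypt at rest)
    response: str     # what the AI returned
    disposition: str  # what happened next: used / modified / discarded

def log_inference(user_id, session_id, model, prompt, response, disposition="pending"):
    record = InferenceAuditRecord(
        user_id=user_id,
        session_id=session_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        model=model,
        prompt=prompt,
        response=response,
        disposition=disposition,
    )
    return json.dumps(asdict(record))  # ship to encrypted, append-only storage
```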
HIPAA Log Retention Requirements
HIPAA requires retention of documentation for six years from the date of creation or the date when the document was last in effect (45 CFR 164.530(j)). This includes:
- Policies and procedures governing AI use
- Risk assessments including AI systems
- Audit logs of AI system activity
- Training records for staff using AI with PHI
- BAAs with AI vendors
For AI inference logs containing PHI, this creates tension between retention requirements and data minimization principles. Organizations must balance compliance documentation needs against the risk of retaining PHI longer than operationally necessary.
Implementing Compliant AI Logging
AI Audit Logging Architecture
- Capture Layer: Implement logging middleware that captures prompts and responses before transmission to AI APIs and after receipt
- Secure Storage: Store logs in encrypted, tamper-evident storage with write-once or append-only guarantees
- Access Controls: Restrict log access to authorized security and compliance personnel with separate authentication
- Integrity Protection: Implement cryptographic hashing or blockchain-style chaining to detect log tampering (sketched below)
- Retention Automation: Automate 6-year retention and secure disposal after retention period expires
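As an illustration of the integrity-protection item above, here is a minimal hash-chained log sketch: each entry commits to the previous entry's hash, so any retroactive edit breaks verification. A production system would add signed timestamps and durable append-only storage on top of this idea:

```python
import hashlib
import json

class ChainedAuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> str:
        payload = json.dumps({"prev": self._last_hash, "record": record}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"prev": self._last_hash, "record": record, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails the check."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True)
            if entry["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```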
Common HIPAA Violations with AI
Understanding common violations helps organizations avoid them. These patterns emerge repeatedly in healthcare AI deployments:
1. Consumer AI Tools with PHI
The most prevalent violation: healthcare workers using ChatGPT, Claude, Gemini, or other consumer AI tools to process patient information. This includes:
- Pasting clinical notes into ChatGPT to summarize patient encounters
- Asking Claude to draft referral letters containing patient details
- Using AI to translate patient communications
- Getting clinical decision support from consumer AI tools
Why it's a violation: Consumer AI tools don't offer BAAs, may use input data for model training, retain prompts for extended periods, and lack the security controls required by HIPAA. Even if an individual clinician doesn't intend to violate HIPAA, entering PHI into these systems constitutes an unauthorized disclosure.
2. Missing or Inadequate BAAs
Organizations assume that because a vendor is "healthcare-focused" or "enterprise-grade," they have automatic HIPAA coverage. Common gaps:
- Using AI services without any BAA in place
- BAA that covers cloud infrastructure but not AI-specific services
- BAA with the parent company that doesn't extend to AI product subsidiaries
- Outdated BAA that predates AI service offerings
3. Inadequate Logging and Accountability
Deploying AI systems without capturing the audit trail required by HIPAA:
- No logging of AI prompts and responses
- Logs that capture access but not content
- Logs stored in ephemeral systems without retention controls
- Inability to provide accounting of AI disclosures upon patient request
4. Shadow AI Deployments
Individual departments or clinicians deploying AI tools without IT or compliance review:
- Radiology using AI diagnostic tools without security assessment
- Clinical research teams using LLMs to analyze patient data
- Administrative staff using AI for medical coding or billing
- Telehealth platforms adding AI features without compliance review
5. Training Data Exposure
Improper handling of PHI in AI model training:
- Training on PHI without proper de-identification
- Sharing PHI with AI vendors for model training without authorization
- Model memorization of PHI that can be extracted through prompting
- Failure to assess re-identification risk in training datasets
Case Study: A Hypothetical OCR Settlement for an AI-Related Violation
Illustrative Case: Healthcare Organization AI Breach (2024)
OCR has not yet published AI-specific enforcement actions, but existing breach patterns suggest how such a scenario could unfold:
- Staff using consumer AI tools disclosed PHI for 2,847 patients
- No BAA in place with AI provider
- No policies prohibiting consumer AI use with PHI
- Settlement: $1.8 million + 3-year corrective action plan
Architectural Patterns for HIPAA Compliant AI
There are several approaches to using AI with PHI while maintaining HIPAA compliance. Each has trade-offs in complexity, cost, capability, and risk profile.
Pattern 1: Enterprise AI with BAA
The simplest compliant pattern: use enterprise AI services from vendors offering BAAs (Azure OpenAI, AWS Bedrock, Google Vertex AI). PHI flows to the AI provider, which is covered under the BAA.
Advantages
- Simplest to implement
- Leverage provider's security controls
- Access to latest models
- Clear liability framework via BAA
Disadvantages
- PHI leaves your environment
- Dependent on provider compliance
- Potentially higher per-query costs
- Limited customization options
Pattern 2: PHI Redaction/De-identification Proxy
Deploy a proxy layer that strips PHI before sending data to the AI, then re-inserts it in the response. The AI never sees actual PHI. A minimal sketch of the round trip follows the trade-off lists below.
Advantages
- PHI never leaves your control
- Can use any AI provider (no BAA needed)
- Reduced compliance scope for AI vendor
- Defense in depth protection
Disadvantages
- Complex to implement correctly
- May reduce AI quality for some use cases
- Risk of incomplete de-identification
- Adds latency and infrastructure complexity
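Here is a toy sketch of the round trip under this pattern. `call_model` is a placeholder for any AI API, and only SSNs are tokenized for brevity; a real proxy would cover all 18 identifier types:

```python
import re

def redact_with_map(text: str):
    """Replace PHI with placeholder tokens and remember the mapping."""
    mapping, counter = {}, 0

    def substitute(match):
        nonlocal counter
        counter += 1
        token = f"[[PHI_{counter}]]"
        mapping[token] = match.group(0)
        return token

    redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", substitute, text)  # SSNs only, for brevity
    return redacted, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert original PHI into the AI's response, inside your boundary."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

redacted, phi_map = redact_with_map("Summarize chart for SSN 123-45-6789.")
# response = call_model(redacted)      # the AI sees "[[PHI_1]]", never the SSN
# final = restore(response, phi_map)   # PHI restored only within your environment
```

The hard part is not the mechanism but coverage: as the disadvantages above note, incomplete de-identification silently reintroduces PHI exposure.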
Learn more in our technical deep-dive: How We Used AI on Patient Data Without a BAA.
Pattern 3: On-Premise / Self-Hosted AI
Deploy AI models within your own infrastructure using open-source models (Llama, Mistral) or licensed on-premise solutions. PHI never leaves your environment.
Advantages
- Maximum control over data
- No external BAA required for model
- Can fine-tune on proprietary data
- Potentially lower long-term costs at scale
Disadvantages
- Significant infrastructure investment
- May not match cloud AI quality
- Requires ML operations expertise
- Full security responsibility retained
Pattern 4: Hybrid Architecture
Combine approaches based on use case sensitivity. Use enterprise AI with BAA for general clinical workflows, add de-identification for highly sensitive cases, and deploy on-premise for research and model development.
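A minimal routing sketch for this pattern is shown below; the sensitivity tiers and backend names are assumptions to adapt to your own environment:

```python
from enum import Enum

class Sensitivity(Enum):
    GENERAL = 1   # routine clinical workflows
    HIGH = 2      # e.g., psychotherapy notes, substance use, genetics
    RESEARCH = 3  # model development on retrospective data

def route(sensitivity: Sensitivity) -> str:
    """Map use-case sensitivity to a deployment pattern (names are illustrative)."""
    if sensitivity is Sensitivity.GENERAL:
        return "enterprise-api-with-baa"   # Pattern 1: vendor covered by BAA
    if sensitivity is Sensitivity.HIGH:
        return "deidentification-proxy"    # Pattern 2: PHI stripped before any external call
    return "on-premise-model"              # Pattern 3: PHI never leaves the environment
```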
Evaluating AI Vendors for HIPAA Compliance
When evaluating AI vendors for healthcare use, a systematic assessment process ensures you select partners capable of supporting compliant deployments.
HIPAA AI Vendor Evaluation Checklist
Documentation & Agreements
- BAA available and executed covering AI services specifically
- BAA covers all sub-processors (cloud infrastructure, inference providers)
- Data processing addendum specifying PHI handling procedures
- Security documentation (SOC 2 Type II, HITRUST, penetration test results)
- Insurance coverage including cyber liability
Technical Controls
- Encryption at rest (AES-256) for all PHI including prompts and logs
- Encryption in transit (TLS 1.2+) for all API communications
- Unique user identification with MFA support
- Role-based access controls for administrative functions
- Comprehensive audit logging including inference-level records
- Log retention meeting 6-year HIPAA requirement
Data Handling
- Clear policy on PHI use for model training (ideally prohibited without explicit consent)
- Data residency options (US-only processing for PHI)
- Data retention policies with configurable retention periods
- Secure deletion procedures at contract termination
- Tenant isolation in multi-tenant environments
Incident Response
- Documented incident response procedures
- Breach notification within HIPAA timelines (60 days to affected individuals)
- Security incident SLAs (e.g., notification within 24 hours)
- Post-incident analysis and remediation procedures
OCR Enforcement Trends and AI
The Office for Civil Rights (OCR) within HHS enforces HIPAA. Understanding OCR's enforcement priorities helps organizations focus compliance efforts on highest-risk areas.
2024-2025 Enforcement Priorities
OCR has signaled increased focus on:
- Risk Analysis Failures: The most common HIPAA violation—organizations failing to conduct comprehensive risk assessments that include new technologies like AI
- Hacking and IT Incidents: Large-scale breaches from ransomware, phishing, and system vulnerabilities
- Business Associate Oversight: Covered entities failing to ensure Business Associates comply with HIPAA requirements
- Access Controls: Inadequate authentication, shared credentials, and excessive access privileges
AI-Specific Guidance
In December 2023, OCR issued guidance on HIPAA and AI, emphasizing that:
- Covered entities must conduct risk analyses before deploying AI tools that process PHI
- BAAs are required with AI vendors who access PHI
- Workforce training must address AI-specific risks
- PHI used for AI training requires the same protections as PHI used for any other purpose
HHS has indicated it may issue additional AI-specific HIPAA guidance as AI adoption accelerates in healthcare.
Penalty Structure
HIPAA Civil Penalty Tiers (2025 Adjusted)
| Culpability Level | Per Violation | Annual Maximum |
|---|---|---|
| Unknown (despite reasonable diligence) | $137 - $68,928 | $2,067,813 |
| Reasonable cause (not willful neglect) | $1,379 - $68,928 | $2,067,813 |
| Willful neglect, corrected within 30 days | $13,785 - $68,928 | $2,067,813 |
| Willful neglect, not corrected | $68,928 - $2,067,813 | $2,067,813 |
Note: Penalties are adjusted annually for inflation. 2025 figures shown above.
Beyond HIPAA: Emerging AI Regulations
HIPAA establishes the compliance floor for healthcare AI, but new regulations are emerging that address AI-specific risks beyond data privacy:
Colorado AI Act (June 2026)
The Colorado AI Act requires developers and deployers of high-risk AI systems to use reasonable care to prevent algorithmic discrimination. Healthcare AI making consequential decisions (treatment recommendations, coverage determinations) falls within scope. Requirements include impact assessments, risk management policies, and consumer disclosures.
EU AI Act (August 2026)
The EU AI Act classifies healthcare AI as high-risk, requiring conformity assessments, technical documentation, human oversight, and post-market monitoring. Organizations serving EU patients or processing EU data should prepare for these requirements.
FDA Regulation of AI/ML Medical Devices
AI systems intended for diagnosis, treatment, or disease prevention may qualify as medical devices subject to FDA regulation. FDA's Digital Health Center of Excellence provides guidance on AI/ML-based software as a medical device (SaMD).
State Privacy Laws
States including California, Virginia, Connecticut, and others have enacted comprehensive privacy laws with provisions affecting AI processing of health information. While HIPAA generally preempts state law for covered entities, gaps exist for non-covered entities using health data.
FTC Enforcement
The Federal Trade Commission has enforcement authority over deceptive and unfair practices involving AI, including misleading claims about AI capabilities, algorithmic discrimination, and inadequate data security. FTC has signaled increased scrutiny of AI in healthcare contexts.
HIPAA Compliant AI Implementation Roadmap
Whether you're a healthcare organization deploying AI or an AI vendor entering the healthcare market, this roadmap provides a structured path to compliance:
HIPAA AI Compliance Sprint
AI System Inventory (Week 1)
Catalog all AI systems in use or planned for deployment. Identify which systems will process PHI, what PHI elements they access, and what the data flows look like. Include shadow AI—tools staff may be using without formal approval.
Risk Assessment (Weeks 2-3)
Conduct HIPAA risk analysis for each AI system. Document threats specific to AI: prompt injection, model extraction, training data leakage, output disclosure. Assess current controls and identify gaps.
Vendor Assessment & BAAs (Weeks 4-6)
Evaluate AI vendors using the checklist above. Negotiate and execute BAAs for all vendors processing PHI. Ensure BAAs specifically cover AI services and address training data, logging, and retention.
Technical Controls (Weeks 7-10)
Implement Security Rule safeguards for AI systems. Configure encryption, access controls, and audit logging. Deploy inference-level logging infrastructure. Establish monitoring and alerting for security events.
Policies & Training (Weeks 11-12)
Develop AI-specific HIPAA policies: acceptable use, prohibited activities (consumer AI with PHI), incident reporting. Train workforce on AI policies. Document training completion.
Continuous Monitoring (Ongoing)
Establish ongoing monitoring of AI system security. Review audit logs regularly. Conduct periodic risk assessments (annually minimum). Update policies as AI technology and regulations evolve.
Evidence over documentation: Focus on generating verifiable evidence that controls are working, not just policies stating what should happen. Cryptographic attestations, tamper-proof logs, and testable controls demonstrate compliance more effectively than policy documents.
Frequently Asked Questions
Can I use ChatGPT or Claude with patient information?
Standard consumer versions (ChatGPT Free, Plus, and Team; Claude Free and Pro) should never be used with PHI. These services don't offer BAAs, may use your data for training, and lack the security controls required by HIPAA. Enterprise versions with BAAs (ChatGPT Enterprise, Azure OpenAI) can be used as part of a HIPAA-compliant architecture, but you must still implement appropriate safeguards.
Is there a list of HIPAA-certified AI tools?
No. There is no government-issued HIPAA certification for any product, including AI tools. HIPAA compliance is an operational state, not a product attribute. Any vendor claiming "HIPAA certification" is using shorthand for "we have controls enabling compliant deployment"—not an official certification. Always verify BAA availability and assess specific security controls.
Do I need a BAA with every AI vendor?
You need a BAA with any vendor that will access, process, store, or transmit PHI on your behalf. If you use a de-identification proxy that removes all PHI before data reaches the AI vendor, the vendor may not need a BAA (since they never receive PHI). However, this architecture is complex to implement correctly and requires rigorous validation.
Can PHI be used to train AI models?
PHI can be used for AI training under specific conditions: (1) proper de-identification per HIPAA Safe Harbor or Expert Determination methods, after which it's no longer PHI; (2) patient authorization; or (3) healthcare operations purposes with appropriate safeguards and BAA coverage. Sharing PHI with external vendors for training requires careful analysis and typically requires authorization. Watch for model memorization risks.
What logging is required for AI systems?
The Security Rule requires audit controls recording activity in systems containing ePHI. For AI, this includes: who initiated queries, what PHI was sent to the AI, what the AI returned, timestamps, and session information. Logs must be retained for 6 years and protected from tampering. Most AI platforms provide only basic access logs—you may need to implement additional logging infrastructure.
What if staff are already using consumer AI with PHI?
This is a HIPAA violation that should be addressed immediately: (1) Issue clear policy prohibiting consumer AI use with PHI; (2) Communicate policy to all workforce members; (3) Conduct training on AI-specific HIPAA requirements; (4) Assess whether a breach occurred and determine notification obligations; (5) Deploy approved AI alternatives that meet HIPAA requirements; (6) Document remediation efforts.
How do I evaluate if an AI vendor is HIPAA compliant?
Request and review: (1) BAA covering AI services specifically; (2) Security documentation (SOC 2 Type II, HITRUST); (3) Technical specifications for encryption, access controls, logging; (4) Data handling policies including training data use; (5) Incident response procedures; (6) Sub-processor list and their compliance status. Conduct your own security assessment and include AI systems in your HIPAA risk analysis.
Does HIPAA apply to AI-generated clinical notes?
Yes. AI-generated content that contains or is derived from PHI is itself PHI and subject to HIPAA protections. This includes AI-drafted clinical notes, summaries, recommendations, and any other outputs incorporating patient information. The clinical provider who reviews and signs the note bears responsibility for its accuracy and appropriate handling.
Key Takeaways
- There is no "HIPAA certified AI" — compliance depends entirely on how AI is deployed and operated
- BAAs are required when AI vendors handle PHI—but a BAA alone isn't sufficient for compliance
- Consumer AI tools are never HIPAA compliant for use with PHI—period
- Logging is critical — you need inference-level audit trails, not just access logs
- Evidence over documentation — focus on demonstrating controls work, not just having policies
- New regulations are coming — HIPAA is the floor, not the ceiling for healthcare AI compliance
For more on building the evidence infrastructure that supports both HIPAA compliance and emerging AI regulations, explore our other resources: