
HIPAA and AI: What Medical Practices Need to Know

Your staff is probably using ChatGPT with patient data right now. Here's why that's a reportable HIPAA breach, what the penalties look like, and how private AI deployment eliminates the risk.

Private AI · HIPAA · Medical Practices · Compliance · Healthcare

Somewhere in your practice right now, someone is pasting patient information into an AI tool. Maybe it's the front desk summarizing a referral. Maybe it's a medical assistant drafting a prior authorization letter. Maybe it's a billing coordinator decoding a denial reason.

They're not being careless. They're being resourceful. The problem is that they just created a reportable HIPAA breach - and nobody knows it happened.

Why cloud AI tools violate HIPAA

HIPAA requires that any entity processing Protected Health Information (PHI) must be the covered entity itself or a business associate operating under a signed BAA.

When a staff member pastes patient data into ChatGPT, Claude, or Gemini:

  1. PHI is transmitted to a third party. The AI provider receives and processes patient data on their infrastructure.
  2. No BAA exists. Consumer-tier AI tools do not offer BAAs. Even most enterprise subscriptions have significant carve-outs.
  3. You can't verify data handling. You don't know where the data is stored, how long it's retained, or whether it's used for training.
  4. The breach is immediate. The moment PHI enters a system without a BAA, a violation has occurred.

The penalty structure

Tier   Description                      Penalty per violation   Annual maximum
1      Unaware of violation             $100 - $50,000          $25,000
2      Reasonable cause                 $1,000 - $50,000        $100,000
3      Willful neglect, corrected       $10,000 - $50,000       $250,000
4      Willful neglect, not corrected   $50,000                 $1.5 million

Each patient record processed through a cloud AI tool is a separate violation. If a medical assistant uses ChatGPT to summarize intake notes for 10 patients in a week, that's 10 violations. At Tier 2 minimums, that's $10,000 in a single week from a single employee.
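The arithmetic above can be sketched in a few lines. This is illustrative only, using the per-violation figures from the tier table; actual penalties are assessed case by case by OCR, and annual caps also apply.

```python
# Illustrative exposure math using the tier table above.
# Each patient record sent to a non-BAA tool is treated as one violation.

TIERS = {
    # tier: (min_per_violation, max_per_violation, annual_cap)
    1: (100, 50_000, 25_000),
    2: (1_000, 50_000, 100_000),
    3: (10_000, 50_000, 250_000),
    4: (50_000, 50_000, 1_500_000),
}

def exposure(records: int, tier: int) -> tuple[int, int]:
    """Per-violation exposure range; annual caps (last column) also apply."""
    lo, hi, _cap = TIERS[tier]
    return records * lo, records * hi

# 10 intake notes summarized in a cloud tool in one week, assessed at Tier 2:
print(exposure(10, 2))  # (10000, 500000)
```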

What we actually see in medical practices

During our AI Operations Audits, these are the usage patterns we see most often:

  • Clinical documentation: Staff using AI to draft SOAP notes, referral letters, and patient summaries by pasting patient information into ChatGPT.
  • Prior authorization: Staff pasting clinical information and denial reasons to draft appeal letters.
  • Patient communication: Front desk drafting emails with clinical context - all PHI.
  • Coding and billing: Billing staff using AI with diagnosis information and procedure details.
  • Referral management: Summarizing patient cases including clinical history and medications.

Every one of these delivers real productivity gains. Your staff isn't wrong that AI makes them faster. They're wrong about where that AI processing should happen.

The solution: private AI on your hardware

Private AI deployment means the AI model runs on a device in your office. Patient data is processed locally. Nothing touches a third-party server.

Hardware: A Mac Mini M4 Pro in your server room, connected to your local network.

Models: Open-source models (DeepSeek, Llama, Mistral) run locally and handle clinical documentation, data extraction, and summarization well.

Portal: Staff accesses AI through a web portal on your office network. Looks and feels like ChatGPT. Every interaction stays on your hardware.

What it handles:

  • Patient intake processing: 45 minutes down to 5
  • Clinical documentation assistance: properly structured SOAP notes, referral letters
  • Prior authorization drafting: no PHI leaves your network
  • Appointment scheduling: AI receptionist fields inbound calls (this component runs on managed cloud because it handles public calls, where no PHI is involved yet)

Hybrid routing: General tasks without PHI - drug interaction lookups, billing code research, generic patient education materials - route to cloud AI for higher quality. The system classifies and routes automatically.
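The routing rule can be sketched as below. This is a deliberately simplified illustration with hypothetical patterns, not our production classifier; a real deployment uses far more thorough PHI detection (named-entity recognition, identifier dictionaries, and human review), and fails closed to the local model when uncertain.

```python
import re

# Sketch of PHI-aware routing: requests that look like they contain PHI
# stay on the local model; generic lookups may go to cloud AI.

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-shaped numbers
    re.compile(r"\b(MRN|medical record)\b", re.I),  # record identifiers
    re.compile(r"\b(DOB|date of birth)\b", re.I),   # birth dates
    re.compile(r"\bpatient\s+[A-Z][a-z]+\b"),       # "patient Smith"
]

def route(prompt: str) -> str:
    """Return 'local' when a PHI marker is present, else 'cloud'."""
    if any(p.search(prompt) for p in PHI_PATTERNS):
        return "local"
    return "cloud"

print(route("Draft an appeal letter for patient Alvarez, MRN 4417"))  # local
print(route("List common CPT codes for a level 3 office visit"))      # cloud
```

The important design choice is the default: anything ambiguous should route locally, because the cost of a false "cloud" classification is a breach, while the cost of a false "local" classification is only slightly lower output quality.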

The compliance position

With private AI infrastructure:

  1. No third-party data processing. No BAA needed because no provider touches your PHI.
  2. Auditable. Every interaction logged on your hardware.
  3. Controllable. You set retention policies, access controls, and deletion schedules.
  4. Defensible. You can demonstrate PHI processing occurs exclusively on your facility's hardware.
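Point 2 can be as simple as an append-only log on the same machine. The sketch below is a minimal illustration with hypothetical file and field names; note that it stores a hash of the prompt rather than the prompt itself, so the audit trail can be reviewed without re-exposing PHI.

```python
import datetime
import hashlib
import json
import pathlib

LOG = pathlib.Path("ai_audit.jsonl")  # hypothetical log location on the local box

def log_interaction(user: str, task: str, prompt: str) -> dict:
    """Append one audit record. Only a hash of the prompt is stored,
    so reviewing the log does not re-expose patient data."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "task": task,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_interaction("jdoe", "referral-summary", "Summarize chart for ...")
print(entry["task"])  # referral-summary
```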

Cost vs. risk

Item                                         Cost
HIPAA violation (Tier 2, single incident)    $1,000 - $50,000
10 incidents from one week of shadow AI      $10,000 - $500,000
OCR investigation and remediation            $50,000 - $200,000+
Private AI deployment + first year managed   ~$65,000

The deployment costs less than the Tier 2 annual penalty cap - and a fraction of what an OCR investigation typically runs.

What to do this week

  1. Find out what's happening. Ask your staff what AI tools they're using and what data they're processing. The answer will surprise you.
  2. Get an assessment. Our AI Operations Audit evaluates your current exposure, classifies data by PHI sensitivity, and delivers a written AI usage policy - plus a working prototype of your first private AI automation.
  3. Deploy. Give your staff the AI tools they clearly want, without the compliance risk.

The audit costs $3,500 and is credited toward a deployment. Book a 15-minute call and we'll walk through what it covers for your practice.


Want to see what AI can do for your business?

Book a free 15-minute call. We'll tell you exactly what's automatable — and what isn't.

Schedule a 15-Minute Fit Call