
AI-Coded Medical App Exposes Patient Data, Likely Violating Swiss Privacy Laws

Security · 1 source · 1d ago

Summary

  • Medical professional used an AI coding agent to build a patient management app with zero security controls
  • All patient data stored unencrypted and publicly accessible via a single curl command
  • Voice recordings from appointments sent to two US AI services without patient consent or a Data Processing Agreement
  • Operator responded with an AI-generated message and added only basic authentication as remediation

Details

1. Security Alert

Medical practice deployed AI-built patient management app with no security controls

A medical professional built and published a patient management system with an AI coding agent and imported real patient data into it. The motivation was a video demonstrating how easy AI makes software development; no technical expertise was applied or sought.

2. Tech Info

Entire app was a single HTML file; all access control logic lived in client-side JavaScript

The backend was a managed database with zero access control or row-level security configured. Because all 'security' was handled in browser-side JavaScript, the database was effectively public — any HTTP request bypassing the front end had unrestricted read/write access to all patient records.
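The failure mode described above can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the app's actual code: the only "access control" runs in the client, so anything equivalent to a raw HTTP request against the backend sees every row.

```python
# Hypothetical sketch of client-side-only "security".
PATIENT_RECORDS = [
    {"id": 1, "owner": "dr_a", "notes": "confidential"},
    {"id": 2, "owner": "dr_b", "notes": "confidential"},
]

def backend_fetch_all():
    """What the unsecured database effectively exposed: no auth, no row filter."""
    return PATIENT_RECORDS

def client_view(current_user):
    """The app's only 'security': filtering done client-side, after the fetch."""
    return [r for r in backend_fetch_all() if r["owner"] == current_user]

# The UI shows each doctor only their own rows...
print(len(client_view("dr_a")))
# ...but any caller that skips the front end gets everything.
print(len(backend_fetch_all()))
```

The filter in `client_view` is cosmetic: it runs after the backend has already handed over the full dataset, which is exactly why a single direct request bypassed it.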

3. Security Alert

Security-aware patient gained full read/write access to all patient data within 30 minutes

A technically literate patient discovered the application and within half an hour had complete access to the entire dataset. The data was unencrypted and stored on a US-based server with no Data Processing Agreement in place.

4. Security Alert

Voice recordings from medical appointments sent to two external AI services without patient consent

The application recorded conversations during appointments and automatically transmitted audio to two major US-based AI companies for transcription and summarization. Patients were never informed this was happening, creating both a consent violation and an unauthorized cross-border data transfer.

5. Legal

Incident likely violated Swiss nDSG data protection law and Berufsgeheimnis professional secrecy statutes

Swiss nDSG requires appropriate technical safeguards, explicit consent, and lawful cross-border data transfer agreements. Berufsgeheimnis imposes strict confidentiality obligations on medical professionals. Storing patient data on US servers without a DPA and routing audio to US AI providers without consent appears to breach both frameworks. The article author notes they are not a lawyer.

6. Insight

Operator's remediation response was itself AI-generated and substantively inadequate

When notified, the operator sent an AI-generated acknowledgment and responded by adding basic authentication and rotating access keys. This indicates the operator did not understand what had been exposed, how long it had been exposed, or what an adequate remediation would require.
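To make the inadequacy concrete: basic authentication gates the front door but does not restrict what an authenticated (or direct) caller can read. The missing piece is server-side, per-request enforcement. A minimal sketch with hypothetical names, not a prescription for the actual system:

```python
# Hypothetical sketch: enforcement must happen server-side, per request,
# before any data leaves the backend.
PATIENT_RECORDS = [
    {"id": 1, "owner": "dr_a", "notes": "confidential"},
    {"id": 2, "owner": "dr_b", "notes": "confidential"},
]

def fetch_records(session_user):
    """Reject anonymous callers and filter rows on the server,
    so a direct request cannot see other users' data."""
    if session_user is None:
        raise PermissionError("authentication required")
    return [r for r in PATIENT_RECORDS if r["owner"] == session_user]

print(fetch_records("dr_a"))  # only dr_a's rows, regardless of the client
```

In a managed database, the equivalent control is row-level security configured in the database itself; an adequate remediation would also include breach notification, an access audit, and encryption of stored data.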

7. Context

Vibe coding produces functional-looking software that can be catastrophically insecure in ways invisible to non-technical operators

AI coding agents optimize for functional output, not security architecture. An operator with no technical background cannot distinguish a secure application from an insecure one by using it. This gap — functional appearance masking structural risk — is the core danger of deploying AI-generated software in regulated domains without expert review.

Security Alert = documented breach or vulnerability · Tech Info = architectural/code flaw details · Legal = regulatory and legal exposure · Insight = behavioral pattern · Context = systemic implication

What This Means

This incident is a concrete, real-world demonstration of the risk that vibe coding poses in regulated domains — AI coding agents can produce applications that appear functional while being architecturally insecure in ways invisible to non-technical operators. The healthcare sector, with its strict data protection and professional secrecy obligations, is among the highest-risk environments for this failure mode. The operator had no framework to evaluate what the AI built, no way to detect the exposure, and no understanding of the legal obligations attached to handling patient data — yet the tools made building and deploying trivially easy. For enterprises and regulated industries, this signals an urgent need for technical review gates before AI-generated software touches sensitive data.
