
CodeWall Claims It Hacked McKinsey's Internal AI Platform Lilli

Security · 1 source · Mar 13

Summary

  • CodeWall claims its autonomous agent breached McKinsey's Lilli AI platform in two hours
  • Alleged breach exposed 46.5 million chat messages including client strategy and M&A data
  • SQL injection via unauthenticated API endpoint reportedly enabled full database read and write access
  • McKinsey has not confirmed or denied the claims; this is a single-source research disclosure

Details

1. Security Alert

CodeWall claims autonomous agent breached McKinsey's Lilli AI platform in roughly two hours

According to CodeWall's own disclosure, no credentials or insider access were used. The firm says its offensive agent autonomously selected McKinsey as a target after parsing its public responsible disclosure policy, then allegedly achieved full read and write access to the production database. McKinsey has not confirmed or denied this account.

2. Context

McKinsey's Lilli platform reportedly serves 43,000+ employees processing 500,000+ prompts monthly

Launched in 2023 and reportedly adopted by over 70% of McKinsey staff, Lilli allegedly supports chat, document analysis, and RAG-based search across more than 100,000 internal documents. This background is drawn from CodeWall's disclosure and has not been independently verified.

3. Security Alert

Alleged entry point: unauthenticated API endpoint with JSON key SQL injection that OWASP ZAP missed

CodeWall claims the agent found over 200 publicly exposed API endpoints, 22 of which reportedly required no authentication. One write endpoint allegedly concatenated JSON key names — not values — directly into SQL queries, creating a blind SQL injection. CodeWall states that OWASP ZAP did not flag the flaw and that its agent ran 15 blind iterations before data began returning.
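CodeWall has not published the vulnerable code, so the exact implementation is unknown. As a minimal sketch of the pattern described — JSON key names, rather than values, concatenated into an UPDATE statement — the following hypothetical handler (table and column names invented for illustration) shows why scanners that only fuzz values can miss it, and the allowlist check that closes the hole:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO messages (body) VALUES ('hello')")

def vulnerable_update(payload: str) -> None:
    """VULNERABLE (the pattern as described): each JSON *key* is spliced
    straight into the column position of the SQL, unescaped."""
    data = json.loads(payload)
    for key, value in data.items():
        # A key like "body = (SELECT ...) --" rewrites the query; values
        # are parameterized, so value-fuzzing scanners see nothing wrong.
        conn.execute(f"UPDATE messages SET {key} = ? WHERE id = 1", (value,))

ALLOWED_COLUMNS = {"body"}  # explicit allowlist of writable columns

def safe_update(payload: str) -> None:
    """SAFER: key names are validated against an allowlist before ever
    touching the SQL string; values remain parameterized."""
    data = json.loads(payload)
    for key, value in data.items():
        if key not in ALLOWED_COLUMNS:
            raise ValueError(f"unexpected field: {key!r}")
        conn.execute(f"UPDATE messages SET {key} = ? WHERE id = 1", (value,))
```

Because the injection point is the identifier position, it cannot be parameterized away; validating field names server-side is the standard mitigation.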

4. Security Alert

Chained IDOR vulnerability and write access to system prompts alleged, enabling behavioral manipulation

Per CodeWall's disclosure, the SQL injection was chained with an insecure direct object reference flaw to access data across user accounts. Write access to system prompts via SQL injection is also alleged — representing a particularly sensitive capability in an AI platform, as it would allow an attacker to silently alter AI behavior for all users.
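The disclosure does not describe the IDOR in code. As a generic, hypothetical sketch (all names invented): an IDOR exists when a handler fetches an object by a client-supplied identifier without verifying that the object belongs to the caller, so any authenticated user who enumerates IDs can read other accounts' data:

```python
# Toy in-memory store standing in for the platform's message table.
MESSAGES = {
    101: {"owner": "alice", "body": "Q3 strategy draft"},
    102: {"owner": "bob", "body": "deal notes"},
}

def get_message_vulnerable(user: str, message_id: int) -> dict:
    # IDOR: the id is trusted as-is; ownership is never checked, so
    # "alice" can fetch message 102 simply by guessing the number.
    return MESSAGES[message_id]

def get_message_safe(user: str, message_id: int) -> dict:
    record = MESSAGES[message_id]
    # Server-side authorization: reject objects the caller does not own.
    if record["owner"] != user:
        raise PermissionError("not your message")
    return record
```

Chained with a database-level injection, as alleged here, the same missing ownership check lets a single foothold fan out across every account.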

5. Stat

46.5 million plaintext chat messages allegedly accessed, covering strategy, client, and M&A data

These figures are self-reported by CodeWall and unverified. The alleged plaintext storage of sensitive consulting communications — including client financials and M&A discussions — would represent a significant data protection failure if confirmed.

6. Stat

Alleged exposure: 728,000 files, 57,000 user accounts, 3.68 million RAG document chunks, 95 AI model configs

File types allegedly included 192,000 PDFs, 93,000 Excel files, 93,000 PowerPoints, and 58,000 Word documents. Also allegedly exposed: 384,000 AI assistants, 94,000 workspaces, and 95 AI model configurations across 12 model types. All figures originate from CodeWall's disclosure only.

7. Insight

Offensive agent reportedly selected McKinsey as a target autonomously without human direction

CodeWall states its agent identified McKinsey by analyzing public responsible disclosure policies — raising questions about risks of fully autonomous offensive security tooling and how organizations' own transparency policies may factor into threat actor targeting.

8. Context

McKinsey has not confirmed, denied, or responded to CodeWall's claims

This disclosure comes from a single source: CodeWall's own research writeup published via a low-trust aggregator. Without independent verification or a McKinsey response, the full scope and accuracy of the alleged breach remain unconfirmed.

Security Alert = claimed vulnerability or breach detail, Stat = alleged numerical data point, Insight = analytical observation, Context = background information

What This Means

CodeWall's disclosure, if accurate, would represent one of the most significant AI platform security failures in enterprise consulting history — exposing decades of confidential client strategy to an unauthenticated autonomous agent in two hours. However, this account comes from a single source: the attacking firm's own writeup, published via a low-trust aggregator, with no independent verification and no response from McKinsey. The disclosure's real-world significance lies in the pattern it illustrates regardless of this specific case — enterprise AI platforms aggregating sensitive data at scale create high-value targets, and API security practices have not kept pace with deployment speed. Security teams and enterprise AI platform owners should treat the attack vectors described — unauthenticated endpoints, blind SQL injection in API layers, IDOR chaining — as credible threat patterns worth auditing, independent of whether this particular breach occurred as claimed.
