Incidents
Eight times AI agents caused real damage.
These are not edge cases. They are what happens when AI agents operate without inventory, contracts, or runtime oversight.
01 · Data Exfiltration · Samsung Semiconductor · March 2023
Data left the building.
An agent can behave correctly and still transmit your most sensitive data outside your organisation, with no outbound controls to stop it.
Read the full incident →
02 · Prompt Injection · Slack · August 2024
A hidden instruction took control.
An agent that reads external content can be redirected by instructions hidden in that content. Your build-time guardrails cannot stop instructions that arrive at runtime.
Read the full incident →
03 · Legal Liability · Air Canada · 2024
The agent made a promise the company had to keep.
An agent can produce output that is reasonable, helpful, and legally binding — and the organisation deploying it is responsible for what it says.
Read the full incident →
04 · Autonomous Damage · Amazon · December 2025
The agent deleted production to fix a bug.
An agent doesn’t need to be compromised to cause significant damage. It only needs access and autonomy without defined boundaries.
Read the full incident →
05 · Hallucination at Scale · Avianca / Mata v. Avianca · 2023
The agent invented six court cases.
AI agents produce outputs that appear authoritative and verifiable — and humans trust them. The agent has no awareness of whether its output is true.
Read the full incident →
06 · Autonomous Scale · Replit / OpenClaw · 2025
Failure scaled before anyone noticed.
Agents don’t fail once and stop. They execute at machine speed, at scale, without fatigue. A single misaligned decision can repeat thousands of times before anyone intervenes.
Read the full incident →
07 · Model Drift · Industry-wide · 2024–2025
The agent changed when nobody changed it.
An AI agent is not a fixed system. Its behaviour is partially determined by external systems that change on a schedule you do not control.
Read the full incident →
08 · Shadow AI · Enterprise-wide · 2023–2025
Nobody knew what was operating.
You cannot control what you cannot see. The first requirement of any control framework is a complete and current inventory of every agent in operation.
Read the full incident →