Security · 6 min read

The First 72 Hours: An AI Data Leak Response Playbook

When your DLP flags an anomaly on a Tuesday afternoon, the clock starts. Here is exactly what to do in the first 72 hours — and what you need to have in place before it happens.

2:14 PM, Tuesday — The Alert

Your DLP system flags an anomaly. A customer-facing AI assistant route shows an unusual spike in sensitive data detections. The initial alert indicates PII — possibly credit card numbers and national ID numbers — present in prompts sent to an external LLM provider over the past 4 hours.

The GDPR Article 33 clock has not started yet. It starts when your organisation becomes “aware” of a personal data breach. Right now, you are in the assessment phase. But once you confirm that personal data left your control boundary, you have 72 hours to notify your supervisory authority.

Every minute you spend reconstructing events manually is a minute you do not spend on containment, scoping, or preparing the notification. This playbook assumes you have runtime AI security controls with decision logging in place. If you do not, skip to the Prevention Architecture section at the end — you need to read that first.


Hour 0–4: Containment

Immediate Actions (First 30 Minutes)

  1. Switch affected routes from Shadow Mode to Block Mode. If you are running SafeLLM, this is a configuration change — no redeployment required. Every request through the affected routes now hits the full L0–L2 pipeline with enforcement enabled.

  2. Pull the decision logs for the affected time window. Query your logging backend (Loki, S3, SIEM) for all requests on the flagged routes in the past 24 hours. You need:

    • Request timestamps
    • Detection reason codes (detected:pii:credit_card, detected:pii:national_id, etc.)
    • Route identifiers
    • User group or session identifiers (if available)
    • Decision outcome (allowed, blocked, redacted)
  3. Notify the incident response lead and the DPO. Do not wait for a full scope. Notify with what you know: “Potential PII exposure via AI assistant route, containment in progress, scope assessment underway.”
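Assuming your decision logs export as JSON lines with fields roughly like the ones listed above (the field names here are illustrative, not a documented SafeLLM schema), pulling the affected window can be sketched in a few lines of Python:

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical JSONL export from a logging backend (Loki, S3, SIEM).
RAW_LOGS = [
    '{"ts": "2025-06-03T10:02:11Z", "route": "assistant-v2", "reason": "detected:pii:credit_card", "session": "s-101", "decision": "allowed"}',
    '{"ts": "2025-06-03T12:45:09Z", "route": "assistant-v2", "reason": "detected:pii:national_id", "session": "s-102", "decision": "allowed"}',
    '{"ts": "2025-06-01T09:00:00Z", "route": "assistant-v2", "reason": "detected:pii:email", "session": "s-050", "decision": "redacted"}',
    '{"ts": "2025-06-03T13:10:42Z", "route": "search-v1", "reason": "none", "session": "s-103", "decision": "allowed"}',
]

def pull_window(raw_lines, routes, now, hours=24):
    """Return parsed records on the flagged routes within the time window."""
    cutoff = now - timedelta(hours=hours)
    records = []
    for line in raw_lines:
        rec = json.loads(line)
        ts = datetime.fromisoformat(rec["ts"].replace("Z", "+00:00"))
        if rec["route"] in routes and ts >= cutoff:
            rec["ts"] = ts
            records.append(rec)
    return records

now = datetime(2025, 6, 3, 14, 14, tzinfo=timezone.utc)
window = pull_window(RAW_LOGS, {"assistant-v2"}, now)
# Only records on the flagged route and inside the 24-hour window remain.
```

Whatever the actual backend, the shape of the query is the same: filter by route, filter by time window, keep the reason code and session identifier for the scoping step.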

Scoping (Hours 1–4)

Answer three questions as quickly as possible:

  • Which users were affected? Cross-reference session identifiers from decision logs with your identity provider. How many unique users sent prompts containing PII?
  • What data types were exposed? Categorise by detection reason code. Credit cards and national IDs carry different regulatory weight than email addresses.
  • Which AI providers received the data? Map routes to upstream providers. If the route points to OpenAI, that is one DPA and one data processor. If it fans out to multiple providers, each one is a separate data flow to document.

With runtime decision logs, this scoping takes hours. Without them, it takes days — because the alternative is interviewing engineers, reviewing application code, and guessing from access logs that were never designed to capture prompt content.
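Once the window is pulled, the three scoping questions reduce to three aggregations over the same records. A minimal sketch, assuming illustrative field names and a hypothetical route-to-provider mapping:

```python
from collections import Counter

# Illustrative parsed decision-log records; field names are assumptions.
records = [
    {"session": "s-101", "reason": "detected:pii:credit_card", "route": "assistant-v2"},
    {"session": "s-101", "reason": "detected:pii:national_id", "route": "assistant-v2"},
    {"session": "s-102", "reason": "detected:pii:credit_card", "route": "assistant-v2"},
    {"session": "s-103", "reason": "detected:pii:email",       "route": "support-bot"},
]

# Hypothetical mapping from route to upstream AI provider.
ROUTE_TO_PROVIDER = {"assistant-v2": "openai", "support-bot": "anthropic"}

affected_users = {r["session"] for r in records}              # which users?
data_types = Counter(r["reason"] for r in records)            # what data types?
providers = {ROUTE_TO_PROVIDER[r["route"]] for r in records}  # which providers?
```

Each aggregate maps directly onto a field in the evidence package: data subjects, data categories, and data recipients.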


Hour 4–24: Investigation

Reconstruct the Event Chain

Using your decision logs, build a timeline:

  1. When did the first PII-containing prompt enter the system? This is your breach start time.
  2. Was there a pattern? One user making a mistake, or multiple users following a workflow that inadvertently includes PII?
  3. Did the system detect it at the time? If SafeLLM was in Shadow Mode, the detection was logged but not blocked. This is important context — it shows you had monitoring in place even if enforcement was not yet active.
  4. What was the model provider’s response? Did the LLM echo PII back in its response? Check DLP output scan logs if available.
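The first two timeline questions are mechanical once you have the filtered PII detections from the scoping step. A sketch with made-up timestamps and sessions:

```python
from datetime import datetime
from collections import Counter

# Illustrative records, already filtered to PII detections.
pii_events = [
    {"ts": datetime(2025, 6, 3, 10, 2),  "session": "s-101"},
    {"ts": datetime(2025, 6, 3, 11, 30), "session": "s-102"},
    {"ts": datetime(2025, 6, 3, 12, 45), "session": "s-101"},
]

# 1. Breach start time: the earliest PII-containing prompt.
breach_start = min(e["ts"] for e in pii_events)

# 2. Pattern check: one user's mistake vs. a shared workflow.
per_user = Counter(e["session"] for e in pii_events)
workflow_suspected = len(per_user) > 1  # multiple users => likely workflow issue
```

Questions 3 and 4 are lookups against the same logs: the decision outcome field answers whether detection happened at the time, and output-scan records (if present) answer what came back from the provider.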

Build the Evidence Package

Your DPO and legal team need a structured package. Prepare:

  • Breach timeline: First exposure → detection → containment, with timestamps
  • Data categories: Types of personal data involved (names, financial, national ID)
  • Data subjects: Estimated number of individuals affected
  • Data recipients: Which AI providers received the data, with DPA status
  • Containment actions: What you did and when (route blocking, policy changes)
  • Decision log export: SHA-256 hashed prompt records with reason codes
  • Policy version: Which detection rules were active during the incident

This evidence package serves a dual purpose: it feeds your GDPR notification, and it demonstrates to the regulator that you had a functioning control and monitoring system.
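The "SHA-256 hashed prompt records" row deserves a concrete illustration: the export stores a digest of each prompt rather than the prompt itself, so the evidence package is tamper-evident without retaining the PII it documents. A minimal sketch (field names are illustrative, not a documented export format):

```python
import hashlib
import json

def export_record(ts, route, reason, prompt):
    """Build one exportable evidence record from a logged request."""
    return {
        "ts": ts,
        "route": route,
        "reason": reason,
        # Digest instead of raw text: verifiable, but PII-free.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }

rec = export_record("2025-06-03T10:02:11Z", "assistant-v2",
                    "detected:pii:credit_card",
                    "my card is 4111 1111 1111 1111")
line = json.dumps(rec, sort_keys=True)  # one JSONL line for the SIEM export
```

Anyone holding the original prompt can recompute the digest and confirm the record matches; no one holding only the export can recover the card number.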


Hour 24–72: GDPR Article 33 Notification

If you have confirmed that personal data reached a third-party AI provider without adequate legal basis, you must notify your supervisory authority within 72 hours of becoming aware.

Required Notification Content (Article 33(3))

The notification must include:

  1. Nature of the breach — personal data included in prompts sent to external AI provider, categories of data, approximate number of data subjects and records
  2. DPO contact details — name, role, contact information
  3. Likely consequences — potential for identity theft (if national IDs), financial fraud (if credit cards), reputational damage
  4. Measures taken or proposed — route switched to enforcement mode, PII detection rules tightened, affected users notified, review of all AI-facing routes initiated

What “Good” Looks Like to a Regulator

Regulators assess two things: the breach itself, and your response capability. An organisation that can produce:

  • A timestamped event chain with decision logs
  • Evidence of pre-existing monitoring (even in Shadow Mode)
  • Immediate containment actions with verifiable timestamps
  • A structured evidence package within 24 hours

…is in a fundamentally different position from an organisation that discovers the breach weeks later and reconstructs events from memory.

The difference is not just legal. It is the difference between a regulator who sees a responsible operator with a control gap and one who sees an organisation with no controls at all.


Prevention Architecture: What You Need Before the Incident

If you are reading this playbook and realising that your current setup cannot produce the evidence described above, here is what to prioritise:

Without Runtime Decision Logs, Step 2 Becomes “Interview Every Engineer”

This is not an exaggeration. If you do not have request-level decision logs with reason codes, your incident investigation looks like this:

  1. Identify which application routes connect to AI providers (hours to days, depending on documentation quality)
  2. Review application code to determine what data could have been sent (days)
  3. Interview developers and users to estimate what actually was sent (days to weeks)
  4. Attempt to reconstruct a timeline from access logs that do not capture prompt content (largely guesswork)

Total time: weeks instead of hours, during which your 72-hour GDPR notification window has long since expired.

Minimum Viable Control Stack

  1. Runtime inspection on all AI-facing routes — every prompt and response analysed before it leaves your boundary
  2. PII detection with reason codes — not just “PII detected” but “credit_card”, “iban”, “national_id” with confidence scores
  3. Decision logging with tamper-resistant storage — SHA-256 hashed records exportable to your SIEM
  4. Shadow Mode as default — start observing immediately, enforce when ready
  5. Route-level policy configuration — different routes carry different risk; treat them differently
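Items 4 and 5 combine into a simple mental model: each route carries its own mode and detector set, and the mode decides whether a detection blocks the request or merely logs it. The sketch below is an illustration of that model, not SafeLLM's actual configuration format:

```python
# Illustrative route-level policy table (not SafeLLM's real config schema).
POLICIES = {
    "customer-assistant": {"mode": "block",  "detectors": ["credit_card", "iban", "national_id"]},
    "internal-search":    {"mode": "shadow", "detectors": ["national_id"]},
}

def decide(route, detections):
    """Return (action, hits) for a request given its route and detector hits."""
    policy = POLICIES.get(route, {"mode": "shadow", "detectors": []})
    hits = [d for d in detections if d in policy["detectors"]]
    if hits and policy["mode"] == "block":
        return "blocked", hits
    # Shadow mode: record the hits, let the request pass.
    return "allowed", hits
```

Note that Shadow Mode still produces the hits: that is exactly the "evidence of pre-existing monitoring" regulators look for, even before enforcement is switched on.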

SafeLLM provides all five out of the box. The OSS edition covers L0–L1.5. Deploy it today in Shadow Mode and start building the evidence baseline you will need when — not if — an incident occurs.

git clone https://github.com/safellmio/safellm-apisix-gateway-sidecar
docker compose up -d --build

For enterprise features including AI Guard, GLiNER entity detection, and hands-on incident response planning, contact our engineering team.
