
Welcome to the 8 new deep divers who joined us since last week.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
“We are managing the digital memories of machines – and those memories can now be used as legal evidence.”
In December 2023, The New York Times filed a high-profile copyright lawsuit against OpenAI and Microsoft. But the legal implications of the case extend far beyond copyright. The discovery process in the case has created a new precedent: as of May 2025, user inputs – even deleted prompts – can be placed under legal hold and preserved indefinitely.
If you run or rely on generative AI systems, this case could reshape how your organisation must approach privacy, data governance, and compliance.
We asked Betania Allo (Cybersecurity Lawyer and Policy Strategist) to explain why every organisation building or using GenAI must rethink data policies, legal exposure, and user trust.
Here’s what she told us.
“Delete doesn’t always mean disappear. Especially when you’re under legal fire.
“This is not a matter of surveillance, but of discovery – the legal mechanism that allows parties in litigation to obtain relevant evidence.”
“In practical terms, a preservation order requires companies to retain all data that could be relevant to a legal case, overriding normal retention and deletion policies.
“Discovery allows legal teams to request and inspect logs, prompts, outputs, training data, and more.
“A legal hold freezes deletion policies and activates enterprise-wide data retention protocols.
“Even users who had disabled chat history or deleted conversations could no longer assume their data was erased. That data had to be preserved – not by corporate policy, but by judicial mandate.”
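In engineering terms, “freezing deletion policies” means that every automated purge job must check for an active hold before destroying a record. Here is a minimal sketch of that pattern in Python; the in-memory store, field names, and 30-day retention window are illustrative assumptions, not any vendor’s actual implementation.

```python
# A minimal sketch of legal-hold-aware deletion: the retention job may purge a
# record only when no active hold covers it. Store, field names, and the
# retention window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
records = {
    "chat-001": {"created": now - timedelta(days=45), "legal_hold": False},
    "chat-002": {"created": now - timedelta(days=45), "legal_hold": True},
}

RETENTION = timedelta(days=30)

def purge_expired(store: dict) -> list[str]:
    """Delete records past the retention window, unless a legal hold applies."""
    current = datetime.now(timezone.utc)
    purged = []
    for record_id in list(store):            # copy keys: we mutate while iterating
        record = store[record_id]
        expired = current - record["created"] > RETENTION
        if expired and not record["legal_hold"]:   # the hold overrides the policy
            del store[record_id]
            purged.append(record_id)
    return purged

print(purge_expired(records))  # chat-001 is purged; chat-002 survives its expiry
```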
"OpenAI appealed to vacate the preservation order, arguing it conflicted with its privacy commitments and imposed technical and compliance burdens. It moreover characterised the preservation requirement as an overly broad and harmful precedent.
“On June 26th, 2025, the appeal was denied: the court affirmed the preservation order, noting that OpenAI’s own terms of use permit retention when legally required, and dismissed the privacy objections in light of the plaintiffs’ discovery needs.”
“Any organisation leveraging generative AI must now consider the legal discoverability of its AI interactions. The boundaries between personal data, proprietary data, and legal evidence are becoming increasingly blurred.
“Deleted prompts may still be accessible. Experimentation logs may become subpoenaed records. Privacy policies may no longer protect users if legal orders require data retention.”
“This directly challenges GDPR principles such as data minimisation and the right to erasure.
“For security and privacy leaders operating within the EU, this creates a significant compliance dilemma.
“Until the legal hold is lifted or narrowed, the only reliable way to shield sensitive inputs from litigation-based preservation is to use ChatGPT Enterprise, Edu, or API tiers with Zero Data Retention (ZDR) enabled.
“In the meantime, DPOs and CISOs should revisit vendor risk assessments, revise privacy notices to reflect potential cross-border retention, and prepare to address gaps in data subject rights enforcement – especially where AI output logs can now be classified as legal evidence.”
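A note on the ZDR point above: ZDR is a contractual, account-level arrangement with the vendor, not a switch in the API request itself. The sketch below shows only the request-level storage control that the OpenAI Python SDK currently exposes (the `store` parameter on chat completions); without a ZDR agreement, API inputs may still be retained for a period under the vendor’s standard policies.

```python
# A minimal sketch of request-level storage hygiene with the OpenAI Python SDK.
# Note: this does NOT by itself provide Zero Data Retention. ZDR is negotiated
# at the account level; the `store` flag only opts this request out of the
# stored-completions feature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise our data retention duties."}],
    store=False,  # explicit: do not store this completion for later retrieval
)
print(response.choices[0].message.content)
```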
“First, recognise the risk of shadow retention. Even when users delete data, your vendors – or the courts – may still retain it. This introduces compliance risks that may not be fully reflected in your current governance models. Where possible, enable Zero Data Retention (ZDR), particularly for API or SaaS-based generative AI tools. ZDR is no longer a nice-to-have but an essential safeguard when processing personal, proprietary, or regulated data.
“Second, establish litigation readiness within your AI environments. Prompt engineering and interaction logs must be tracked, attributed, and governed with the same rigour as other sensitive systems. You should know who submitted what, when, and for what purpose.
“Finally, update your privacy policies. If your organisation promises data minimisation or deletion, those statements must now clearly disclose legal exceptions such as court-ordered data holds. Failing to do so could result in misrepresentation or noncompliance under frameworks like GDPR.”
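Her second recommendation, litigation readiness, lends itself to a concrete illustration. Below is a minimal sketch of an attributable prompt audit record in Python; the field names and the JSON-lines store are assumptions for illustration, not a reference to any specific product.

```python
# A minimal sketch of an attributable prompt audit record, written to a simple
# append-only JSON-lines file. All names here are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    user_id: str          # who submitted the prompt
    timestamp: str        # when it was submitted (UTC, ISO 8601)
    purpose: str          # declared business purpose
    model: str            # which model served the request
    prompt_sha256: str    # content hash: proves what was sent without raw text

def log_prompt(user_id: str, purpose: str, model: str, prompt: str,
               path: str = "prompt_audit.jsonl") -> PromptRecord:
    record = PromptRecord(
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        purpose=purpose,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")  # append-only audit trail
    return record

# Example: attribute a prompt to a user and a declared purpose.
log_prompt("j.doe", "contract-summarisation", "gpt-4o", "Summarise this NDA...")
```

Whether to retain the raw prompt or only a hash is itself a governance decision: a hash supports attribution and integrity checks, but a court-ordered hold may require the full text.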
“Leadership must take proactive steps to address the emerging legal and regulatory risks introduced by AI data retention.
“Begin with a comprehensive inventory of all AI systems that store logs, ensuring a clear separation between production and experimental environments, and enforcing ZDR where feasible.
“Existing data deletion policies should be reassessed in light of litigation hold scenarios, as information once considered ephemeral may now be preserved indefinitely.
“For DPOs, it is essential to revise privacy notices to reflect potential legal retention obligations, push vendors to include ZDR clauses in contracts, and map data flows involving large language models – particularly those touching personal or sensitive data.
“CAIOs should establish a prompt risk matrix to identify legally or ethically sensitive inputs, and educate users that generative AI is not a black hole: every prompt may generate a permanent, discoverable legal record.
“And legal and compliance teams must be embedded from the outset in decisions related to prompt logging, fine-tuning, and overall AI system design.”
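The prompt risk matrix she recommends for CAIOs can start very simply: a mapping from categories of sensitive input to a risk tier and a handling rule. The sketch below is a minimal illustration with assumed tiers and a crude keyword heuristic; a production system would rely on trained classifiers and policy review rather than keyword matching.

```python
# A minimal sketch of a prompt risk matrix: map categories of sensitive input
# to a risk tier and a handling rule. Tiers, categories, and keywords are
# illustrative assumptions, not an established standard.
RISK_MATRIX = {
    "personal_data":    {"tier": "high",   "rule": "block unless ZDR tier in use"},
    "legal_privileged": {"tier": "high",   "rule": "route through counsel-approved workflow"},
    "proprietary_code": {"tier": "medium", "rule": "allow on enterprise tier only"},
    "general":          {"tier": "low",    "rule": "allow; log with attribution"},
}

# Crude keyword heuristic purely for illustration.
KEYWORDS = {
    "personal_data": ["passport", "date of birth", "medical"],
    "legal_privileged": ["privileged", "settlement", "litigation strategy"],
    "proprietary_code": ["internal repo", "api key", "source code"],
}

def classify_prompt(prompt: str) -> dict:
    """Return the first matching category with its risk tier and handling rule."""
    lowered = prompt.lower()
    for category, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return {"category": category, **RISK_MATRIX[category]}
    return {"category": "general", **RISK_MATRIX["general"]}

print(classify_prompt("Draft our litigation strategy for the vendor dispute."))
# -> {'category': 'legal_privileged', 'tier': 'high', 'rule': '...'}
```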
“This case is about much more than copyright. It illustrates what happens when highly complex, probabilistic technologies collide with legal systems designed around traceability, accountability, and evidence preservation.
“We once thought of generative AI as experimental or even ephemeral. Now, it must also be viewed as legally actionable.
“We’re no longer just securing systems. We are managing the digital memories of machines – and ensuring those memories can withstand legal scrutiny.”
Connect with Betania Allo on LinkedIn. Pre-register now to attend DeepFest 2026 and stay ahead of the compliance curve.