When AI writes the law – and gets it wrong

If you haven’t already, subscribe to join our community and receive weekly AI insights, updates and interviews with industry experts straight to your feed.


DeepDive 

Your weekly immersion in AI 

A high-profile US law firm recently admitted that a major court document contained multiple errors generated by artificial intelligence. 

The firm formally apologised after the mistakes were identified by opposing counsel – and with partners billing over $2,000 per hour and the filing submitted to a US federal judge, the story hit the news in a big way. 

But the firm already had strict AI policies in place – including (as reported by the Financial Times) guidance to “trust nothing and verify everything”.

So what went wrong? 

The details are important here, because they show why this is more than yet another story about AI hallucinations. 

The filing was part of a complex bankruptcy proceeding tied to allegations of large-scale fraud and money laundering. The legal stakes were high, so the scrutiny was intense. 

Within that context, AI tools were used to support the preparation of the document. The result included: 

  • Incorrectly cited legal cases
  • Misquoted sections of the US Bankruptcy Code
  • Fabricated or misidentified precedents
  • Inaccurate summaries of judicial conclusions

The errors passed through both the initial drafting process and a secondary review layer before being filed in court. They were later corrected, with the firm acknowledging that the failure to verify AI-generated content was a breach of its own internal policy.

Other firms have faced sanctions, fines, and reputational damage after submitting AI-generated material without adequate verification. The difference here is the scale, the reputation of the firm, and the clarity of the governance failure.

Expertise is no longer enough

Most of us tend to assume that highly trained professionals (think lawyers, analysts, consultants) will naturally catch issues like these. But this case challenges that assumption. 

The law firm in question is one of the most respected in the world, and its lawyers operate at the highest levels of complexity and accountability. But the combination of AI assistance, time pressure, and workflow design created a problem that expertise alone didn’t catch. 

When we spoke to previous DeepFest speaker and AI expert Lee Tiedrich about why everyone should learn about AI, she said: 

“Society faces the grand challenge of unlocking AI’s tremendous promises while also safeguarding against its harms and risks.”

And this incident shows that the safeguarding side of that equation can’t rely on individual competence alone. It requires systems that hold under pressure. 

How policy translates into practice 

Lots of organisations now have AI policies. Some of them are detailed and closely aligned with evolving regulation. 

But policies don’t execute themselves. 

In this case, two layers failed: 

  • Usage discipline – AI tools were used in ways that did not follow internal guidance
  • Review integrity – verification processes did not detect clear factual errors

And that space between policy and practice is where risk exists. 

When we interviewed Roman Yampolskiy (AI author and Director of the Cyber Security Lab at the University of Louisville) about the development of a safety-first AI culture, he pointed to an issue that affects all organisations using AI: 

“The pace of technological advancement often outstrips the development of corresponding safety measures and regulatory frameworks.”

And crucially:

“There is a need for more proactive engagement, rigorous safety research, and ethical considerations integrated into the AI development lifecycle.”

Governance can’t just take the form of documents or training modules that are separate from an organisation’s day-to-day work. It has to be embedded into everyday workflows, incentives, and accountability structures.

What can every organisation learn from this? 

This isn’t a legal sector anomaly. It’s relevant to every industry and every organisation – because it highlights the challenges that arise every time we use AI to generate outputs. 

Two lessons stand out: 

  1. Verification must be systematic
    Relying on individuals to double-check outputs introduces variability. High-stakes use cases require structured validation – tools, checklists, and clear ownership.
  2. AI literacy needs to go deeper
    Understanding that AI can hallucinate is one thing. Recognising when and how it is likely to do so (and designing around that) is another.

As Tiedrich told us: 

“Knowledge can empower people to make choices… and protect themselves against the harms and risks.”

AI can accelerate research and drafting. It can’t assume accountability – and in regulated environments, that’s non-negotiable. 

Share your perspective 

If AI is part of your core workflow, where does verification come in – and who’s accountable when it fails? 

Open this newsletter on LinkedIn and tell us how your organisation is answering that question.

We’ll see you back here next week.
