Sullivan & Cromwell Apologizes for Submitting AI Slop
We’ve covered AI citation errors before here and here, but never at this scale or from a firm of this stature. Sullivan & Cromwell is one of the world’s most prestigious firms. Whole books have been written describing the firm’s power and reach.
Oh, how the mighty have fallen! Santul Nerkar reports for The New York Times that Sullivan & Cromwell has apologized to a federal judge for submitting legal papers that contained dozens of AI-generated errors. Opposing counsel in a proceeding in Bankruptcy Court noted errors in Sullivan & Cromwell’s filings, and the firm came clean with a three-page list of three dozen AI-generated and other errors in its submissions.
Sullivan & Cromwell of course apologized for the errors. It also pointed out that its attorneys undergo rigorous training before being allowed to access the firm’s AI tools. The training encourages lawyers to “trust nothing and verify everything.” That’s pretty good advice, but it does not explain how the firm filed submissions with three dozen errors.
I shared this draft post with Claude, and it confirmed the advice was sound. To go beyond a mere slogan, however, firms need to introduce procedures, like a checklist that time-pressured associates must internalize and mentally tick off before they submit work product. Claude has a few recommendations, but the most cost-effective one, I would think, would be to require source-pulling: never cite to a source that you have not located, read, and confirmed actually exists and says what you think it says.
There really is a double irony in this story. The first is that a firm as well-resourced as Sullivan & Cromwell, whose attorneys are extraordinarily competent and knowledgeable, could have submitted such an error-riddled document. The smartest lawyers in the world are relying on AI to compose their submissions, and the end result is much less polished than it would have been had they just done the work themselves.
The second is that lawyers use AI to assist in their research in order to conduct that research more cost-effectively, but using AI properly may make them less efficient. I love having AI to assist me in my blogging, but it does not make me a more efficient blogger. Yes, AI catches my mistakes, but it also forces rewrites. I am happier with the end product, but I won’t always have the time to devote to editing posts that — let’s be honest — don’t reach nearly as many readers as they did ten or fifteen years ago when blogs were in their prime. Sometimes I will have to choose between skipping AI-editing and blogging less. I don’t have clients (or anyone) paying me for my time.
I have read that AI will always hallucinate citations. Hallucination may be inherent to the way large language models work, a bug that can never be fully fixed. If that is so, there will always be a trade-off between the time savings achieved through AI-assisted research and the time-suck of checking to make sure that the AI-generated citations are reliable. A firm like Sullivan & Cromwell could just expend resources on cite-checking; highly skilled legal assistants can probably pull the sources. But some high-priced associate is still going to have to read those sources and confirm their relevance, which is exactly what they would have done if the firm had never invested in AI tools in the first place.