Law Review Contracts and AI
We have posted here before about law review contracts — that is, the largely adhesive contracts that journals seek to impose on authors after the authors have committed to publishing a work of scholarship in a particular journal. Dave Hoffman has done a quick study of the ways in which law journals use contractual or pre-contractual means to regulate authors’ use of AI. His findings are not surprising in the aggregate. Like everyone else on the planet, law reviews are concerned that authors are submitting AI slop, but they don’t have the time or resources to police submissions, so the best they can do is ask for disclosures and threaten retraction of publication offers if they discover hallucinated citations. Professor Hoffman’s specific discoveries are interesting, and he builds on some prior empirical studies to provide an overview of the current state of play.
Building on an empirical study by UNLV Law’s Nachman Gutowski, Professor Hoffman, with the help of Claude, offers some preliminary observations. It is hard to get one’s hands on a law review contract without getting an offer of publication. Still, using information available through Scholastica, it seems that most law reviews have no official AI policy. The publicly available information indicates that only about 3% of law reviews put authors on notice that they have an AI policy. Professor Hoffman found eight publicly available contracts with provisions relating to AI.
It looks like six journals require authors to provide a warranty that they have not used AI in impermissible ways. Ten journals require disclosure of AI use; two journals require an acknowledgment of some sort, and one is in the process of developing a mandatory AI policy and is making do with an informal disclosure system for now. It is not clear what consequences follow from incomplete disclosure.
Professor Hoffman then moves on to look at how courts are responding to AI. Some have adopted rules requiring attorneys to warrant that they did not use AI or that, if they did, they checked the accuracy of all citations. Others, like the Fifth Circuit, see no need to supplement existing rules: lawyers already have an ethical duty to ensure the accuracy and reliability of their submissions, so submitting a brief containing unverified AI reasoning and citations would be novel in form but not in substance.
Professor Hoffman thinks the various approaches to the challenge of AI assistance in the generation of legal work are linked by an “aesthetic” choice. Editors and courts want work product that is generated by humans rather than by machines. This suggests that the promulgators of the various policies promote values other than quality. Professor Hoffman appreciates the sentiment, but he suggests that what the policies police is AI authorship, not AI assistance. The goal is that the intellectual work is ultimately the product of a human mind. But the boundary between human and machine intelligence is becoming increasingly difficult to maintain. Even if we want to prevent people from submitting AI-generated work product, law reviews and courts are unlikely to develop the tools that would enable them to do so.
In the end, Professor Hoffman returns us to a warning from one of his previous posts. If you use AI to help you draft, you do so at your own risk, because the terms of service to which we must agree in order to access the tech companies’ services insulate them from any accountability.
I don’t have much to add on this topic. It can be treacherous to walk on shifting sands, and interactions between AI and the law have been developing at an alarming clip. It is helpful in such circumstances to have a guide.