By Chris Cochran

For the sake of argument, let’s say 2023 was the year we were introduced to generative AI and ChatGPT, along with myriad other generative AI products from the big tech players. It was also the year that brought warnings about the social and economic effects of generative AI, the year “deepfakes” moved into the mainstream, and the year dire predictions of AI-induced workforce reductions rumbled through the labor markets. Cecily Mauran over at Mashable.com gives us a good take on AI and the internet in 2023.
As a researcher and content creator, my relationship with AI is tentative at best right now. I want to find, create, and analyze, and I want to use my skills, my experience, and my brain to do that. My colleague Mary Ellen Bates wrote an interesting piece on how AI fits into her work, and how researchers and other infopreneurs can use AI without relying on it.
I’ve been particularly intrigued by the use of AI in the legal system, or, more accurately, the obfuscation of the use of AI in the legal system. Every couple of months, it seems, we learn of another lawyer “unwittingly” submitting a court brief or other document that cites nonexistent cases. The parties involved usually claim innocence, insisting they had no idea generative AI was used to create their arguments (most recently, this happened with Michael Cohen). In mid-2023, a judge fined attorneys for submitting a legal brief containing citations to cases that didn’t exist. The law firm basically pleaded ignorance of how AI actually works and what it is capable of.
Legal systems are trying to keep up with the explosion of AI rollouts. The UK Courts and Tribunals Judiciary decided to get ahead of the curve and issued guidance in December 2023 on the use of AI by judicial office holders. As Thomas Germain at Gizmodo.com describes it, the guidance is just that: it warns judicial office holders that AI is a poor way of conducting research, since it can surface new information that cannot be verified, and it recommends that judges check the accuracy of AI responses before relying on them in rulings.
In the U.S., the 5th Circuit Court of Appeals proposed updating a court administrative rule in November 2023 to require attorneys to certify either that they did not rely on AI to draft their briefs or that a human reviewed the accuracy of any AI-generated text in their court filings.
I cheekily suggested in a LinkedIn post recently that rather than being a job-cutting, workforce-realigning technology, AI might actually help create jobs for law librarians and legal research professionals (well, I guess that IS workforce realignment, just not in the way futurists have been predicting). I doubt that attorneys themselves will be tasked with reviewing the accuracy of AI-generated text in their court filings. Legal research professionals are perfectly positioned to do that work, not least because they enjoy doing it. Is AI going to increase the workforce, or just the workload? That is worth considering. Efficiency at the cost of effectiveness and accuracy is not a tradeoff most people want to test.
Just ask Pras Michel, who claims that his attorneys used an AI program to generate the closing argument that lost him his conspiracy trial. It’s not quite the same thing as using AI to conduct legal research, but it’s one more spoke in the wheel of AI confusion in the legal field, and the implications are serious. As they say in the news world: developing…