Jeff Hancock, a leading expert on misinformation and the founder of the Stanford Social Media Lab, has publicly admitted to using ChatGPT in a way that introduced errors into a legal document. These errors, identified as AI-generated “hallucinations,” have raised questions about the reliability of AI tools in professional and legal settings. The controversy centers on an affidavit Hancock submitted in support of Minnesota’s law regulating the use of deep fake technology to influence elections.
The affidavit was filed in response to a legal challenge to the law brought by conservative YouTuber Christopher Kohls (known as Mr. Reagan) and Minnesota State Representative Mary Franson. The law is designed to prevent the use of deep fake technology to manipulate voters in elections, and Hancock’s affidavit aimed to support its validity by providing expert testimony on the impact of AI technology, particularly in the realm of misinformation.
AI “Hallucinations” and Legal Challenges
The controversy arose when it was discovered that the affidavit contained references to sources that did not exist. Attorneys for Kohls and Franson pointed out that several of the citations in Hancock’s affidavit were fabricated and argued that this made the document unreliable. They asked the court to exclude the affidavit from consideration in the case, contending that the errors called the entire document’s credibility into question.
Hancock had used ChatGPT, specifically the GPT-4 model, to help organize and manage the citations for the legal filing. The tool is known for sometimes generating incorrect or made-up information, a phenomenon referred to as “hallucination,” and in this case it produced fake citations that Hancock did not catch before submitting the document. ChatGPT can produce impressive text, but it is not always accurate, particularly when it comes to citing real-world sources. Here, the AI tool provided citations to non-existent research papers or misquoted real ones, an error Hancock later acknowledged.
Hancock’s Response and Defense
After the errors were discovered, Hancock issued a declaration in which he acknowledged that he had used ChatGPT to assist in organizing the references for the affidavit. However, he insisted that the AI tool was only used for citation organization and that he had written and reviewed the substance of the declaration himself. Hancock emphasized that he had not relied on the AI for drafting the core content of the document.
In his statement, Hancock expressed confidence in the overall substance of his claims, asserting that they were supported by the most recent scholarly research on the effects of AI on misinformation and its societal impacts. He maintained that the citation errors did not change the substantive points of his argument, even though they undermined the document’s perceived reliability.
“I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects,” Hancock wrote in his follow-up declaration. Despite this, the legal team for Kohls and Franson continued to question the validity of the affidavit because of the errors.
The Issue of AI in Legal and Professional Contexts
This incident has sparked a wider conversation about the use of AI in professional and legal contexts, particularly around the issue of “hallucinations.” The phenomenon, in which AI generates plausible-sounding but ultimately incorrect information, has been a known limitation of large language models like GPT-4. While AI can assist in many tasks, including generating text and organizing information, it can sometimes create content that appears accurate but is entirely fabricated.
The case raises questions about the responsibility of individuals who use AI tools in sensitive areas such as legal filings, research, and academic work. While AI has made significant strides in assisting with tasks like writing and data management, the responsibility for ensuring accuracy and reliability still falls on the user. Hancock’s experience highlights the risks of relying on AI for tasks like citation management without careful oversight.
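To make that kind of oversight concrete, the sketch below shows one way a reviewer might spot-check the DOIs in a reference list against the public CrossRef API before filing a document. This is a minimal illustration under stated assumptions, not a description of anything Hancock actually did; the helper function and the example DOIs are introduced here purely for demonstration.

```python
# Minimal sketch (an assumption, not from the article): spot-checking DOIs
# against the public CrossRef API (https://api.crossref.org/works/<doi>).
# A 404 response means CrossRef has no record of the DOI, which is a red
# flag that the citation may be fabricated.
import json
import urllib.error
import urllib.parse
import urllib.request


def doi_exists(doi: str) -> bool:
    """Return True if CrossRef resolves the DOI, False on a 404."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        # A registered DOI comes back with bibliographic metadata.
        return "title" in record.get("message", {})
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    # Example DOIs chosen for illustration; a real check would read them
    # from the document's own reference list.
    for doi in ["10.1038/s41586-020-2649-2", "10.9999/clearly.fake.12345"]:
        status = "found" if doi_exists(doi) else "NOT FOUND"
        print(f"{doi} -> {status}")
```

A check like this only confirms that a DOI is registered; a citation can resolve correctly and still misattribute or misquote the underlying work, so reading the cited sources remains part of the reviewer's job.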
Broader Implications for AI in Misinformation Research
The controversy surrounding Hancock’s affidavit also sheds light on the growing role of AI in the field of misinformation research. Hancock is a leading expert in this area, and his work typically focuses on the ways in which technology, particularly AI, can be used to spread misinformation. However, this incident reveals the irony of using AI tools that themselves can contribute to the spread of false or misleading information.
While Hancock’s research has focused on the dangers of AI-generated misinformation, this mistake has become an example of how even experts in the field are not immune to the challenges posed by AI tools. The incident underscores the importance of critical oversight when using AI, particularly in areas where factual accuracy is paramount.
Conclusion: A Cautionary Tale for AI Usage
In the wake of this controversy, Hancock’s case serves as a cautionary tale for anyone using AI tools in professional settings. While AI can be incredibly helpful for organizing and generating text, users must be vigilant in reviewing and verifying the information it produces, especially when it comes to legal, academic, or other high-stakes areas. Hancock’s error illustrates how even small oversights—such as failing to catch an AI-generated citation mistake—can have significant consequences in professional and legal contexts.
Ultimately, the case highlights the need for improved tools, greater transparency, and better user understanding of AI’s limitations as it continues to be integrated into various fields. For now, Hancock stands by the substance of his claims but acknowledges the need for more careful oversight when using AI in his work.