A recent development in a federal lawsuit over Minnesota’s “Use of Deep Fake Technology to Influence an Election” law has raised critical questions about the integrity of evidence presented in court and the growing influence of artificial intelligence (AI) in legal proceedings. The lawsuit, which challenges the constitutionality of the law regulating deepfake videos, now also turns on the potential misuse of AI to shape public policy.
In the latest court filing, attorneys contesting the law point to an affidavit submitted by Jeff Hancock, a renowned disinformation expert and founder of the Stanford Social Media Lab, that appears to contain AI-generated text. Specifically, the affidavit cites multiple academic studies and sources that do not exist, raising concerns that a large language model (LLM) such as ChatGPT fabricated the references. The discovery brings the phenomenon of “AI hallucinations” into the legal realm: instances in which an AI system produces information that seems plausible but is, in fact, entirely fictional.
The Affidavit: Supporting Minnesota’s Deepfake Legislation
The affidavit in question was filed in support of Minnesota’s controversial law criminalizing the use of deepfake technology to influence elections. Deepfakes are AI-generated or AI-manipulated videos and audio recordings that create convincing but false depictions of events or individuals, often with the intent to deceive. In the affidavit, Hancock, an expert on the intersection of social media and disinformation, cites several studies to support the claim that deepfakes have a significant impact on political attitudes and behavior.
However, a closer examination of Hancock’s citations reveals a troubling issue. One of the key studies cited is titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” allegedly published in the Journal of Information Technology & Politics in 2023. Despite an extensive search, no record of this study could be found in the journal or any other academic source. Another source referenced in the affidavit, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” also appears to be entirely fabricated.
AI Hallucinations: A New Threat to Legal and Political Integrity
This discovery has led to the suspicion that the citations in Hancock’s affidavit were generated by an AI model, likely ChatGPT or a similar LLM. AI hallucination is a well-documented failure mode in which such models produce text that sounds credible but is factually incorrect or entirely invented. While hallucinations have been widely recognized in contexts such as journalism and academic writing, their appearance in sworn legal documents has raised alarm bells.
The plaintiffs in the case, including Minnesota state Rep. Mary Franson and conservative YouTuber Christopher Kohls (known online as Mr. Reagan), have called attention to the issue in their latest legal filing. They argue that the fabricated citations, combined with what they describe as a lack of methodological rigor in Hancock’s analysis, undermine the credibility of the entire affidavit and, by extension, the legal and academic claims offered in support of Minnesota’s deepfake law.
AI’s Role in Legal and Political Documents: A Growing Concern
The incident raises critical questions about the increasing use of AI in legal and political settings. As AI technology continues to improve, so does its potential to be misused in high-stakes contexts like litigation and policy-making. In this case, an expert affidavit intended to support the regulation of a cutting-edge technology may have inadvertently relied on AI-generated falsehoods, showing how easily such errors can slip into legal arguments.
The discovery also highlights how difficult it is to distinguish between authentic sources and fabricated ones in the age of advanced AI. The sources cited in Hancock’s affidavit appeared to be legitimate on the surface, but their non-existence exposes a flaw in relying solely on AI systems for generating and vetting references. This is a key challenge as AI tools like ChatGPT become more integrated into various professional fields, including law, journalism, and academia.
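To make that challenge concrete: checking whether a cited study actually exists is straightforward to automate. The short Python sketch below is an illustration written for this article, not anything used in the litigation; it queries Crossref, a free public index of scholarly publications, for a cited title and reports whether any indexed work matches. The query parameters and the strict title comparison are simplifying assumptions, and a real vetting workflow would tolerate punctuation and subtitle differences.

```python
# citation_check.py - illustrative sketch: look up a cited title in the Crossref index.
# Assumes the third-party `requests` package is installed; thresholds are arbitrary.
import requests

CROSSREF_API = "https://api.crossref.org/works"


def candidate_records(cited_title: str, rows: int = 5) -> list:
    """Return the top Crossref matches for a cited title (empty list if none)."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]


def looks_verifiable(cited_title: str) -> bool:
    """Crude check: does any indexed work carry the cited title (case-insensitive)?"""
    wanted = cited_title.strip().lower()
    for item in candidate_records(cited_title):
        if any(t.strip().lower() == wanted for t in item.get("title", [])):
            return True
    return False


if __name__ == "__main__":
    cited = "The Influence of Deepfake Videos on Political Attitudes and Behavior"
    print(f"Indexed match found: {looks_verifiable(cited)}")
```

A lookup like this is no substitute for reading the underlying study, but even a crude check would likely have flagged the titles at issue here, since no record of them can be found in any scholarly index.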
Furthermore, the situation raises broader ethical questions about accountability and transparency in legal proceedings. If AI is allowed to play a role in drafting or submitting affidavits, who is responsible for ensuring the accuracy of the content? Can AI-generated documents be trusted, or should there be stricter oversight on their use in legal cases?
The Implications for the Deepfake Law
Beyond the concerns about AI and the integrity of the affidavit, this discovery could have serious implications for the outcome of the lawsuit against Minnesota’s deepfake law. The plaintiffs argue that the law is unconstitutional and overly broad, potentially infringing on free speech and the right to political expression. If the affidavit submitted in support of the law is discredited, it could weaken the case for enforcing such a law, especially if the legal arguments presented are shown to be based on fabricated evidence.
The case is now focused not just on the legality of deepfake regulation but also on the integrity of the evidence offered to support it. The plaintiffs’ legal team is urging the court to exclude Hancock’s affidavit because of its reliance on non-existent sources, which they claim are the product of an AI hallucination. The fabricated material, they argue, calls into question not only the affidavit but also the expertise and data that underpin the law’s creation and implementation.
Legal and Policy Consequences of AI-Generated Content
The legal community is closely watching this case, as it may set a precedent for how AI-generated content is treated in the future. With the increasing reliance on AI tools in various sectors, including law and policy-making, the need for clear guidelines and regulations on the use of AI-generated text is becoming more urgent.
This case could prompt legal systems to develop stricter rules regarding the use of AI in court documents and filings. There may be calls for more transparency about the role AI plays in generating legal content, as well as mechanisms to verify the authenticity of sources and references. Lawyers and experts may need to exercise greater caution when using AI tools, ensuring that any content generated by AI is properly fact-checked and verified before it is submitted in legal or policy documents.
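One concrete form such verification could take is an automated pre-filing check on citations. The fragment below is a hypothetical sketch rather than a description of any court’s procedure: for each citation that carries a DOI, it asks the public doi.org resolver whether the identifier actually resolves, and flags anything that does not for manual review. The sample DOI is a stand-in; a real workflow would extract identifiers from the document being filed.

```python
# doi_check.py - hypothetical pre-filing check: flag DOIs that do not resolve.
# Registered DOIs normally answer a HEAD request at doi.org with a redirect;
# anything else is treated as suspect and routed to a human reviewer.
import requests


def doi_resolves(doi: str) -> bool:
    """True if doi.org can resolve the DOI."""
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=30)
    return resp.status_code in (301, 302, 303)


if __name__ == "__main__":
    # Stand-in list; in practice these would be parsed out of the filing itself.
    cited_dois = ["10.1000/182"]
    for doi in cited_dois:
        verdict = "resolves" if doi_resolves(doi) else "UNRESOLVABLE: verify manually"
        print(f"{doi}: {verdict}")
```

A check like this catches only one narrow class of problem, invented identifiers, which is why any policy response would still need disclosure of AI involvement and human verification of the substance of what is cited, not just its metadata.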
Conclusion: The Growing Role of AI in Legal Systems
The discovery of AI-generated citations in a legal affidavit is a stark reminder of the challenges posed by artificial intelligence in the realm of law and public policy. As AI technology continues to advance, the potential for its misuse grows, especially when it comes to generating seemingly legitimate but ultimately misleading content. The case surrounding Minnesota’s deepfake law is now a cautionary tale about the need for careful oversight and transparency in the use of AI tools in high-stakes areas like law and public policy.
The outcome of this case may set an important precedent for how courts handle AI-generated content in the future, and it underscores the need for robust systems to ensure that legal arguments are grounded in factual, verifiable information. As AI continues to evolve, it is crucial that both the legal system and society at large find ways to manage its potential for misinformation, ensuring that it is used ethically and responsibly.