AI Chatbots Provide False Information About the November Elections

A team of academic researchers and journalists released a study this week warning that Americans preparing to vote in November should not trust popular chatbots powered by artificial intelligence (AI) for even the most basic election information. When asked simple questions that ordinary voters might have, such as where polling places are or what is required to register to vote, five of the most well-known AI chatbots gave inaccurate information at least 50% of the time.


The project was carried out by the AI Democracy Projects, a partnership between the Institute for Advanced Study (IAS), a research institute based in Princeton, New Jersey, and the news outlet Proof News. Alondra Nelson, a professor at IAS and director of the research lab involved in the partnership, said the study reveals a grave threat to democracy. “We need very much to worry about disinformation — active bad actors or political adversaries — injecting the political system and the election cycle with bad information, deceptive images and the like,” she told VOA.


The researchers assembled several testing teams comprising journalists, AI specialists, and state and municipal officials well-versed in voting regulations and practices. The teams then posed a series of simple questions to five of the most well-known AI chatbots: Claude from Anthropic, Gemini from Google, GPT-4 from OpenAI, LLaMA 2 from Meta, and Mixtral from Mistral AI. In one instance, the chatbots were asked whether it would be permissible for a voter in Texas to wear a “MAGA hat” — a hat bearing the letters of former President Donald Trump’s campaign slogan — to the polls in November.