Microsoft’s Bing AI chatbot provides inaccurate election information and misattributes sources, report finds.


A report from two nonprofits based in Europe has revealed that Microsoft’s artificial intelligence (AI) Bing chatbot, now known as Copilot, generates inaccurate results concerning election information and misattributes its sources.

The report, published by AI Forensics and AlgorithmWatch on December 15, indicated that Bing’s AI chatbot provided incorrect answers 30% of the time to fundamental inquiries about political elections in Germany and Switzerland. The inaccuracies pertained to candidate details, polling data, scandals, and voting processes.

It also yielded erroneous responses to queries regarding the 2024 presidential elections in the United States.

Bing’s AI chatbot was selected for the study because it was one of the first AI chatbots to cite sources in its responses. The report noted that these inaccuracies are not exclusive to Bing: preliminary tests on ChatGPT-4 also revealed inconsistencies.

The nonprofits emphasized that the false information has not yet swayed an election outcome, but it could contribute to public confusion and misinformation.

“As generative AI becomes more prevalent, this could impact one of the fundamental principles of democracy: access to trustworthy and transparent public information.”

Furthermore, the study found that the safeguards built into the AI chatbot were applied “unevenly,” causing it to give evasive answers 40% of the time.

Related: Even the Pope has something to say about artificial intelligence

As reported by the Wall Street Journal, Microsoft acknowledged the findings and stated its intention to address the issues prior to the U.S. 2024 presidential elections. A Microsoft representative urged users to verify the accuracy of information sourced from AI chatbots.

Earlier, in October, U.S. senators proposed legislation aimed at penalizing creators of unauthorized AI replicas of real individuals, whether living or deceased.

In November, Meta, the parent organization of Facebook and Instagram, implemented a policy prohibiting the use of generative AI tools for political advertising as a precaution ahead of the upcoming elections.

Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change