Google & Microsoft Chatbots Fabricate Super Bowl Stats

If you needed further proof of GenAI’s tendency to concoct falsehoods, Google’s Gemini chatbot, formerly known as Bard, has jumped the gun by declaring that the 2024 Super Bowl has already taken place, complete with fabricated statistics to support its claim.

According to a Reddit thread, Gemini, powered by Google’s GenAI models, is confidently responding to inquiries about Super Bowl LVIII as if the game concluded just yesterday—or even weeks ago. Strikingly, it appears to favor the Kansas City Chiefs over the San Francisco 49ers, much to the chagrin of fans in the Bay Area.

Gemini’s fabrications extend to detailed player statistics: in one instance it credited Kansas City Chiefs quarterback Patrick Mahomes with an improbable 286 rushing yards, two touchdowns, and an interception, against Brock Purdy’s 253 rushing yards and a single touchdown.

But Gemini isn’t alone in its missteps. Microsoft’s Copilot chatbot likewise claims the game has already concluded, and offers erroneous citations to back up the claim. It diverges from Gemini’s narrative, however, by handing victory to the 49ers with a purported final score of 24-21.

The situation is as absurd as it is likely short-lived: attempts to replicate Gemini’s responses from the Reddit thread have since failed, and it wouldn’t be surprising if Microsoft is working to address the issue on its end as well. Nevertheless, the episode underscores the significant limitations of current GenAI technology, and the perils of placing undue reliance on it.

GenAI models lack genuine intelligence; they are trained on vast datasets sourced primarily from the internet, learning to predict which data, such as the next word in a sentence, is most likely to appear given the surrounding context. While this probabilistic approach generally yields coherent output, it is far from foolproof. Language models can generate grammatically correct but nonsensical text, or confidently repeat inaccuracies present in their training data.
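To make that idea concrete, here is a minimal, hypothetical sketch of statistical next-word prediction: a toy bigram model that counts which word tends to follow which in a made-up corpus and then outputs the most frequent continuation. It bears no resemblance to the neural networks behind Gemini or Copilot in scale or architecture, but it illustrates how a purely pattern-based predictor can produce fluent claims with no notion of whether they are true.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus". The model will repeat whatever
# patterns appear here, whether or not the underlying claims are true.
corpus = (
    "the chiefs won the super bowl . "
    "the chiefs won the game . "
    "the niners won the coin toss ."
).split()

# Count which word follows each word (a bigram model).
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = follow_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("chiefs"))  # ('won', 1.0) -- a frequent pattern, not a verified fact
print(predict_next("won"))     # ('the', 1.0)
```

The toy model answers with perfect confidence because "chiefs won" is the dominant pattern in its data, not because it knows anything about the game; real systems are vastly more sophisticated, but the failure mode is analogous.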

This isn’t malicious intent on the part of the GenAI models—they lack the capacity for malice, and the concepts of truth and falsehood are meaningless to them. Instead, they have learned to associate certain words or phrases with specific concepts, irrespective of their accuracy.

Hence, the dissemination of falsehoods about the 2024 Super Bowl by Gemini and Copilot.

Both Google and Microsoft, like many other GenAI vendors, acknowledge that their AI applications are imperfect and prone to errors. However, these disclaimers often appear in fine print, potentially overlooked by users.

While the Super Bowl misinformation may seem innocuous compared to more harmful instances of GenAI misbehavior—such as endorsing torture, perpetuating stereotypes, or promulgating conspiracy theories—it serves as a crucial reminder to verify statements from GenAI bots. There’s a substantial likelihood that they may not be accurate.

