Google’s AI Overviews Feature Reportedly Advises Using Glue on Pizza

Google’s new AI-powered search tool, AI Overviews, is facing backlash for offering inaccurate and often bizarre answers to users’ queries. In a recently reported incident, a user turned to Google because cheese was not sticking to their pizza. While they might have expected a practical solution to their culinary troubles, Google’s AI Overviews feature offered a rather unhinged one. As per recently surfaced posts on X, this was not an isolated incident, with the AI tool suggesting bizarre answers to other users as well.

Cheese, Pizza and AI Hallucination

The issue came to light when a user reportedly searched Google for “cheese not sticking to pizza”. Addressing the culinary problem, the search engine’s AI Overviews feature suggested a few ways to make the cheese stick, such as mixing the cheese into the sauce and letting the pizza cool down. However, one of the suggestions turned out to be truly bizarre. As per the screenshot shared, it advised the user to “add ⅛ cup of non-toxic glue to the sauce to give it more tackiness”.

Upon further investigation, the source was reportedly found, and it turned out to be a Reddit comment from 11 years ago, which appeared to be a joke rather than expert culinary advice. However, Google’s AI Overviews feature, which still carries a “Generative AI is experimental” tag at the bottom, presented it as a serious suggestion in response to the original query.

Yet another inaccurate response by AI Overviews came to light a few days ago when a user reportedly asked Google, “How many rocks should I eat”. Citing UC Berkeley geologists, the tool suggested, “eating at least one rock per day is recommended because rocks contain minerals and vitamins that are important for digestive health”.

The Issue Behind False Responses

Issues like this have been surfacing regularly in recent years, especially since the artificial intelligence (AI) boom kicked off, giving rise to a problem known as AI hallucination. While companies acknowledge that AI chatbots can make mistakes, instances of these tools twisting facts and providing factually inaccurate or even bizarre responses have been increasing.

However, Google is not the only company whose AI tools have provided inaccurate responses. OpenAI’s ChatGPT, Microsoft’s Copilot, and Perplexity’s AI chatbot have all reportedly suffered from AI hallucinations.

In more than one instance, the source has been traced back to a Reddit post or comment made years ago. The companies behind these AI tools know it too, with Alphabet CEO Sundar Pichai telling The Verge, “these are the kinds of things for us to keep getting better at”.

Speaking about AI hallucinations during an event at IIIT Delhi in June 2023, Sam Altman, OpenAI CEO and Co-Founder, said, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy, and we are trying to minimise the problem. [At present,] I trust the answers that come out of ChatGPT the least out of anybody else on this Earth.”

