If you use Google regularly, you may have noticed the company's new AI Overviews providing summarized answers to some of your questions in recent days. After looking through dozens of examples of Google AI Overview mistakes, we noticed a few broad categories of errors that showed up again and again. One of the most common: treating obvious jokes as trustworthy sources. An AI answer suggesting "1/8 cup of non-toxic glue" to stop cheese from sliding off pizza can be traced back to someone who was obviously trying to troll an ongoing thread. A response recommending "blinker fluid" for a turn signal that doesn't make noise can similarly be traced back to a troll on the Good Sam advice forums, which Google's AI Overview apparently treats as a reliable source.

Source: Google’s AI Overview can give false, misleading, and dangerous answers

This feature was clearly rushed out the door.