You can trick Google's AI Overviews into explaining made-up idioms
As Big Tech pours countless dollars and resources into AI, preaching the gospel of its utopia-creating brilliance, here's a reminder that algorithms can screw up. Big time. The latest evidence: You can trick Google's AI Overview (the automated answers at the top of your search queries) into explaining fictional, nonsensical idioms as if they were real.
According to Google's AI Overview (via @gregjenner on Bluesky), "You can't lick a badger twice" means you can't trick or deceive someone a second time after they've been tricked once.
That sounds like a logical attempt to explain the idiom.