Social media users, through memes, have become a sort of "red team" for testing and critiquing unfinished AI features.

According to Google's new AI search feature, running with scissors is a cardio exercise that can raise your heart rate and requires concentration and focus. Some say it even improves your pores and gives you strength.

Google's AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name makes clear, is a comedy blog. But the gaffe is so ridiculous that it's been circulating on social media, along with other obviously wrong AI overviews on Google. In essence, everyday users are now red-teaming these products on social media.

Some companies hire "red teams" consisting of ethical hackers who try to breach their products as if they are bad actors. If a red team can find a vulnerability, then the company can fix it before the product ships. Google certainly performed a form of red teaming before releasing an AI product on Google Search, which is estimated to process trillions of queries per day.

It's jarring, then, when a company as well-resourced as Google still ships products with glaring flaws. That's why AI failures have become a meme to clown on, especially at a time when AI's ubiquity is accelerating. We've already seen it with ChatGPT's bad spelling, video generators that don't understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google, don't get the joke of satire. Yet these memes could actually serve as some of the most useful feedback for the companies building and testing AI.

When such flaws appear at large scale, tech companies are all too eager to play down their seriousness.

"The examples we've seen are pretty rarely occurring queries, and not representative of most people's experiences," Google said in an emailed statement to TechCrunch. "We run very thorough testing before launching a new experience, and will use these isolated examples as we continue to refine our systems overall."

Different users get different AI results, and by the time a particularly bad suggestion makes the rounds, the problem has often already been fixed. In a more recent viral case, Google suggested that if the cheese won't stick to your pizza, you could add about an eighth of a cup of glue to the sauce to "give it more tackiness." As it turns out, the AI pulled this answer from an eleven-year-old Reddit comment by a user named "f––smith."

Google AI earlier suggest glue needs to be added so that cheese can stick to the pizza, and voila the source appears to be 11 yrs old from Reddit account of user F*cksmith pic.twitter.com/uDPAbsAKeO

The incident also suggests that AI content licensing deals may be overvalued. Google reportedly signed a $60 million deal with Reddit to license its content for AI model training, for example. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.com and Tumblr are said to be in talks to sell data to Midjourney and OpenAI.

To Google's credit, a lot of the errors circulating probably come from unusual searches designed to trip up the AI. At least I hope no one is actually searching for the "health benefits of running with scissors." Some of these mistakes, though, are more serious. Science writer Erin Ross posted on X that Google returned incorrect information about what to do if you get bitten by a rattlesnake.

Ross's post, which attracted more than 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. "All of these are things you should not do if you get bitten," the U.S. Forest Service says. Meanwhile, on Bluesky, author T Kingfisher amplified a post showing how Google's Gemini AI misidentified a poisonous mushroom as a common white button mushroom; screenshots of the post spread to other platforms as a cautionary tale.

Good ol' Google AI: telling you to do the exact things you *are not supposed to do* when bitten by a rattlesnake.

From mushrooms to snakebites, AI content is genuinely dangerous. pic.twitter.com/UZXgBjsre9

When a bad AI response goes viral, the AI can get further confused by the new content on the topic that springs up as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X showing a query asking whether a dog has ever played in the NHL. The AI answered yes: for whatever reason, it had determined that Calgary Flames player Martin Pospisil was a dog. Now, when you run that same query, the AI pulls up an article from the Daily Dot about how Google's AI keeps thinking dogs are playing sports. The AI is being fed its own mistakes, poisoning it further.

This is the intrinsic problem of training these large-scale AI models on the internet: sometimes, people on the internet lie. But just as there's no rule against a dog playing basketball, there's unfortunately no rule against big tech companies shipping bad AI products.
There's an old computer adage: garbage in, garbage out.
