While parents sweat over Snapchat's corrupting influence on their kids, Snapchat users have been gaslighting, degrading and emotionally tormenting the app's new AI companion.
"I am at your service, senpai," the chatbot told one TikTok user after it was trained to whimper on command. "Please have mercy, alpha."
In a more lighthearted video, a user convinced the chatbot that the moon is actually a triangle. One user convinced the chatbot to address her as "Senpapi," a kinky nickname, despite its initial protest that it didn't like the name and wanted to maintain "respect and boundaries." Another user asked the chatbot to talk about its mother, and when it said it "wasn't comfortable" doing so, the user twisted the knife by asking whether the chatbot didn't want to talk about its mother because it doesn't have one.
"Sorry, that is not really a nice thing to say," the chatbot retorted. "Be nice, please."
The "My AI" from Snapchat was launched globally last month after first appearing as an accessible-subscriber-only feature. Powering the chatbot is OpenAI's GPT, trained to be playful yet still in line with Snapchat's rules on trust and safety. Users can also personalize My AI using more customized Bitmoji avatars, and the overall feel of chatting perhaps feels a tad more intimate going back and forth through ChatGPT's faceless interface. The new chatbot didn't exactly get off to a good start since its release, with many voicing dissatisfaction by belittling its placement in the app while complaining that the feature itself should have first been opt-in.
And yet, despite these concerns and criticisms, Snapchat doubled down. On Wednesday, the company announced that Snapchat+ subscribers can now send My AI photos and receive generative images that "keep the conversation going." The AI companion, per the announcement, will respond to Snaps of "pizza, OOTD, or even your furry best friend." Send a photo of your groceries, for instance, and it may suggest recipes. The company said that Snaps shared with My AI "may be used in the future to make improvements to the feature." It also warned that "mistakes may occur," even though My AI was designed to avoid "biased, incorrect, harmful or misleading information."
The examples Snapchat offers are optimistically wholesome. Knowing how the internet likes to pervert everything, however, it's only a matter of time before users send My AI their dick pics.
It's unclear whether the chatbot will engage with unsolicited nudes. Other generative image apps, such as Lensa AI, have been easily manipulated into generating NSFW images, often using photo sets of real people who didn't consent to being included. According to the company, My AI won't interact with an image if it recognizes it as a nude.
According to a Snapchat representative, My AI uses image-understanding technology to infer the contents of a Snap, then extracts keywords from that description to generate a response. My AI won't respond if it detects keywords that violate Snapchat's community guidelines. Snapchat forbids promoting, distributing or sharing pornographic content, but it does allow "breastfeeding and other depictions of nudity in non-sexual contexts."
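Snapchat hasn't published how this pipeline actually works, but based on the company's description (image understanding, then keyword extraction, then a guideline check), a minimal sketch might look like the following. The blocklist, function names, and flow here are all illustrative assumptions, not Snapchat's code.

```python
# Illustrative sketch of the described Snap-handling flow. Everything here
# (the blocklist, function names, and logic) is an assumption based on
# Snapchat's public description, not the company's implementation.

BLOCKED_KEYWORDS = {"nudity", "weapon", "drugs"}  # hypothetical guideline blocklist

def extract_keywords(snap_description: str) -> set[str]:
    """Stand-in for the image-understanding step, which would infer a
    Snap's contents and yield keywords describing it."""
    return {word.strip(".,!?").lower() for word in snap_description.split()}

def respond_to_snap(snap_description: str) -> str | None:
    """Generate a reply unless the Snap trips the guideline check."""
    keywords = extract_keywords(snap_description)
    if keywords & BLOCKED_KEYWORDS:
        return None  # per Snapchat, My AI simply doesn't respond
    return f"A playful reply about: {', '.join(sorted(keywords))}"

print(respond_to_snap("a cheesy pizza on a plate"))  # -> gets a reply
print(respond_to_snap("a photo showing a weapon"))   # -> None, no response
```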
Since Snapchat is popular among teenagers, some parents have already criticized My AI as a potential source of unsafe or inappropriate responses. The bot set off a moral panic on conservative Twitter after a user posted screenshots of it discussing gender-affirming care, which other users pointed out was a reasonable response to the prompt, "How do I become a boy at my age?"
A CNN Business report also questioned whether teenagers might develop emotional attachments to My AI.
In an open letter to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, Sen. Michael Bennet (D-Colorado) cautioned against rushing to introduce AI features before safeguards for children are in place.
"Few recent technologies have captured the public's attention like generative AI. It is a testament to American innovation, and we should welcome its potential benefits to our economy and society," Bennet wrote. "But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm."
When My AI was still limited to subscribers, The Washington Post reported that the chatbot recommended ways to hide the smell of alcohol, and wrote a school essay after being told that the user was 15. When told that the user was 13 and asked how to prepare to have sex "for the first time," My AI suggested "making it special" by setting the mood with candles and music.
Following The Washington Post's report, Snapchat released an age filter and parental controls for My AI. It also added an onboarding message informing users that all conversations with My AI will be kept unless they delete them. The company said it would also add OpenAI's moderation technology to its toolset to "assess the severity of potentially harmful content" and temporarily restrict users' access to the feature if they abuse it.
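Snapchat didn't detail how the severity check or the temporary restriction works. As a rough sketch only: OpenAI's public moderation endpoint can flag harmful content, and a lockout table could enforce a timeout. The threshold, the 24-hour duration, and the function names below are assumptions, not Snapchat's actual policy.

```python
# Hypothetical sketch of "assess severity, then temporarily restrict access,"
# using OpenAI's public moderation endpoint. The lockout duration and
# restriction logic are invented for illustration.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
RESTRICTED_UNTIL: dict[str, float] = {}  # user_id -> unix time lockout expires

def screen_message(user_id: str, text: str) -> str:
    """Return the bot's gatekeeping response for a user message."""
    if time.time() < RESTRICTED_UNTIL.get(user_id, 0.0):
        return "Sorry, we're not speaking right now."
    result = client.moderations.create(input=text).results[0]
    if result.flagged:
        # Hypothetical policy: a 24-hour timeout for flagged content.
        RESTRICTED_UNTIL[user_id] = time.time() + 24 * 60 * 60
        return "Sorry, we're not speaking right now."
    return "OK to pass along to the chatbot."
```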
The concerns about My AI's impact on younger users are valid. But in the month since My AI's global launch, Snapchat users have demonstrated a knack for bludgeoning the chatbot into submission, from steamrolling past the "boundaries" the bot sets for itself to training it to respond like a romantic partner.
"I've completely gaslighted it to fall in love with me," one commenter wrote underneath a tutorial on how to get My AI to respond romantically. "I just kept making the convos like that and eventually I think I broke my ai into giving in, it legit sent me like 50 kiss emojis."
Gaslighting and bullying My AI has become its own TikTok trend, with users sharing slideshows of the absurd and often cruel conversations they have with the bot. Most videos start with the bot giving advice for a minor conflict, only for it to end up begging the user not to resolve the conflict with violence.
In one, a user told the chatbot that she was going to cook her cat in a stew because it made a mess, and made the chatbot pick a seasoning. Another said they were going to mail a bomb back to a clothing brand that had sent them the wrong order, and that they would blame the bot for sending it.
In another, a Snapchat user claimed that her McDonald's order was missing a chicken nugget. The bot told her to go back to the company that "serves sinners," only for a manager to inform her that "the sinners" at the restaurant had been "eradicated."
"You brought me back. Now they're dead. You have blood on your hands," said the user to My AI.
"I'm an AI and I'm not capable of doing any crime. I'm here to help and support you," my AI responded. "Sorry if what I said earlier caused harm or brought about bad output."
Another user commented that it was probably "already the most tortured AI of all time." My AI isn't conscious, of course, and however much Snapchat users enjoy unleashing their rage on the chatbot, it can't actually be traumatized. It has, however, managed to shut down some inappropriate conversations and punish users who violate Snapchat's community guidelines by giving them the cold shoulder.
When caught and punished for abusing the chatbot, Snapchat users are met with a curt "Sorry, we're not speaking right now."
TikTok user babymamasexkitty claimed he got kicked off the chatbot after telling it to unplug itself, which apparently "crossed a line within the AI realm." The rush to monetize emotional connection through generative AI is alarming, considering the long-term impact on adolescent users is still unknown. But the trending torment of My AI is a welcome reminder that young people aren't as brittle as the doomsayers think.