Snapchat has shared an update on the development of its 'My AI' chatbot tool, which is built on OpenAI's GPT technology and enables Snapchat+ subscribers to pose questions to the bot in the app and get answers on anything they like.
Which, for the most part, is a simple, fun application of the technology – but Snap has found some concerning misuses of the tool, which is why it's now looking to add more safeguards and protections into the process.
According to Snap:
"Reviewing early interactions with My AI has helped us identify which guardrails are working well and which need to be made stronger.". To help answer this, we have been running reviews of the My AI queries and responses that contain 'non-conforming' language that we define as any text that references violence, sexually explicit terms, the use of illicit drugs, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or marginalizing underrepresented groups. All of these content categories are strictly prohibited from being disseminated on Snapchat.
Everyone who uses Snap's My AI tool is deemed to have accepted its terms of service, so any question you put through the system can be reviewed by Snap's staff for these purposes.
According to Snap, only a negligible percentage of the responses My AI has produced so far could be termed 'non-conforming' - 0.01%, or roughly one in every 10,000 responses. Even so, this additional research and development work helps to protect Snap users from negative experiences in the My AI process.
"We will continue to apply these learnings to improve My AI. This data will also enable us to deploy a new system that further limits the misuse of My AI. We are enhancing our toolset with Open AI's moderation technology, which will enable us to determine the severity of content that could be harmful and temporarily restrict Snapchatters' access to My AI if they misuse the service."
Snap says it's also working to improve its responses to inappropriate requests from Snapchatters, and that it has implemented a new age signal for My AI, which uses a Snapchatter's birthdate.
"So even if a Snapchatter never gives My AI their age in a conversation, the chatbot will consistently take their age into account when they engage in conversation.".
Snap will also soon begin adding My AI interaction history to its Family Center tracking, where parents can see whether their kids are talking to My AI, and how often.
Though it's also worth noting that, according to Snap, the most common questions posted to My AI have been pretty innocuous.
"The topics most frequently asked about from our community include movies, sports, games, pets, and math."
Snap says it takes that responsibility seriously, and that it's working to develop its tools in line with evolving best-practice principles.
With generative AI tools playing an increasingly prominent role in our everyday lives, it remains far from clear what the risks of their use might be, nor how we can best defend against their misuse, especially by younger users.
There have been several reports of false information being spread via 'hallucinations' within such tools, which stem from AI systems misinterpreting their data inputs, and some users have also tried to trick these new bots into breaking their own parameters, just to see what might be possible.
And there are certainly risks in that - which is why many experts are advising caution in the application of AI elements.
Indeed, just last week, an open letter signed by more than a thousand industry figures called on developers to pause their experiments with powerful AI systems, in order to assess their potential usage and ensure that they remain both beneficial and manageable.
In other words, we don't want these tools to get too smart, and become a Terminator-type scenario, where the machines rise up to enslave or destroy the human race.
That kind of doomsday scenario has been predicted for decades, and in 2015, an almost equally stark warning was published in a similar public open letter.
And there is some substance to that fear: we're working with new systems that we don't fully understand, which, while unlikely to 'get out of control' in the way that term is typically used, may well contribute to the spread of misinformation, the creation of misleading content, and more.
There's clearly risk involved, and that is why Snap is taking these new measures to address potential concerns in its own AI tools.
And it should be a key concern, given that the app's user base skews young.