The Generative AI Surge Presents a Greater Risk of Harm for Many Users.

As platforms encourage users to interact with AI, the potential mental health implications must be taken into account.
As I have noted before, it is astonishing how little we seem to have learned from the significant harms that sprang from the rise of social media, and now we are poised to make the very same mistakes again in the rollout of generative AI.

Because while generative AI has the capacity to provide a range of benefits, there are also potential negative implications in increasing our reliance on digital characters for relationships, advice, companionship, and more.

And yet, big tech companies are racing ahead, eager to win out in the AI race, no matter the cost.

Or, more likely, without consideration of the impacts, since they haven't arisen yet, and until they do, we can plausibly assume that everything's going to be okay. Which, once again, is what happened with social media, with Facebook, for instance, being able to "move fast and break things" until a decade later, when its execs were hauled before Congress to explain the negative impacts of its systems on people's mental health.

And now, as part of its push to get more people using its generative AI tools, Meta is prompting users to chat with its custom AI bots, including a "gay bestie" and a "therapist."

Entrusting your mental health to an unpredictable AI bot doesn't seem like a safe way to go, and Meta actively promoting such bots in-stream seems like a significant risk, especially at Meta's scale.
I don't understand why anybody would be interested in making AI clones of themselves, or of anything else for that matter. Yet Meta is pushing its billions of users toward its generative AI tools, embracing what CEO Mark Zuckerberg is convinced will be the next phase of social media interaction.

Indeed, in a recent interview, Zuckerberg explained that:

Every aspect of what we do will be modified in some way through AI. Feeds are going to shift from, you know, they were already friend content, and now they're mostly creators, but soon a whole lot of them are going to be AI generated.
So Zuckerberg's view is that we're going to increasingly interact with AI bots rather than real humans, an idea Meta reinforced this month by hiring Michael Sayman, the developer of a social platform entirely populated by AI bots.

Sure, there's probably some benefit in using AI bots to logic-check your thinking, or to prompt you with alternate angles that you might not have considered. But relying on AI bots for social engagement seems very problematic, and potentially harmful in many ways.

For example, The New York Times reported this week that the mother of a boy who took his own life at age 14, after months of building a relationship with an AI chatbot, has now brought suit against the company that develops the chatbot, accusing it of being responsible for her son's death.

He had become glued to a chatbot based on Daenerys Targaryen of Game of Thrones. For him, this created an artificial relationship that seems to have detached him from reality, increasingly alienating him from the real world, and that may ultimately have contributed to his death.

Some will say that this is an extreme case, with many variables at play. But I'd bet it won't be the last, and it reflects a broader concern: if we move too fast with AI development, and push people to build relationships with entities that don't exist, there are going to be expanded mental health impacts.

And yet the race in AI pushes ahead at warp speed.

Moreover, more advanced VR technology poses an exponentially greater threat to mental health, as users will be engaging in interfaces even more immersive than those of social media apps. And here too, Meta is trying to get more people involved, lowering VR access to an age that many already consider too young.

At the same time, senators are proposing age restrictions on social media apps, based on years of evidence of problematic trends on social platforms.

Will we have to wait for the same before regulators examine the potential dangers of these new technologies, then seek to impose restrictions in retrospect?

If that's the case, then a lot of damage is going to come from the next tech push. And while moving fast is important for technological development, it's not like we don't understand the potential dangers that can result.

Blog | 2024-10-25