Meta and Microsoft have joined a new framework focused on the responsible use of AI.

The group aims to create wider industry regulations on the use of generative AI.

As generative AI tools see wider adoption and explosive growth, concerns are mounting over the dangers these systems pose, and over what regulatory measures could be put in place to protect people from copyright violation, misinformation, defamation, and more.

And while broader government regulation would be the ideal step, that requires global cooperation, something that, as past digital media regulation efforts have shown, is very challenging to establish, because differing approaches and opinions lead to differing responsibilities and actions in practice.

As such, it will probably be left to smaller industry groups and individual companies to implement control measures and rules to mitigate the risks that generative AI tools present.

Which is why this could be a significant move: today, Meta and Microsoft, now one of the biggest investors in OpenAI, joined the Partnership on AI (PAI) Responsible Practices for Synthetic Media initiative, which aims to establish industry agreement on responsible practices in the development, creation, and sharing of media created via generative AI.

According to PAI:

“The first-of-its-kind Framework was launched in February by PAI and backed by an inaugural cohort of launch partners including Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. Framework partners will gather later this month at PAI’s 2023 Partner Forum to discuss implementation of the Framework through case studies and to create additional practical recommendations for the field of AI and Media Integrity.”

PAI says that the group will also work to clarify its guidance on responsible synthetic media disclosure, while addressing the technical, legal, and social implications of its recommendations around transparency.

As noted, this is an area of rapidly rising importance, which US Senators are also now looking to get on top of, before the technology becomes too big to regulate.

Republican Senator Josh Hawley and Democrat Senator Richard Blumenthal recently introduced legislation that would strip the Section 230 protections of social media companies that facilitate the spread of AI-generated content, meaning platforms could be held liable for spreading such material.

There's still a lot to be worked out in that bill, and it'll be tough to get it approved. But the fact that it's even being proposed underlines the rising concerns among regulators, particularly around the adequacy of existing laws to cover generative AI outputs.

PAI is far from the only organization working to develop generative AI guidelines. Google has already released its 'Responsible AI Principles,' while LinkedIn and Meta have also published rules governing their use of generative AI, with the latter two likely reflecting much of what this new framework entails, given that both are (effectively) signatories to it.

It's an important area of consideration, and as with misinformation in social apps, it really shouldn't come down to a single company, and a single exec, making calls on what is and is not acceptable. That's why industry groups like this offer some hope of broader consensus and implementation.

But even so, it'll take some time - and we don't even know the full risks associated with generative AI as yet. The more these tools are used, the more challenges will arise, and over time we'll need adaptive rules to tackle potential misuse, and to combat the rise of spam and junk churned out through such systems.
