Google Implements New Disclosures for AI-Generated Photos, Though They May Not Be Immediately Apparent.

Starting next week, when you open a photo in Google Photos, the app will disclose whether it was edited with one of Google's AI features, including Magic Editor, Magic Eraser, and Zoom Enhance.

Google calls this a new step "to increase transparency," but on its own, an AI-manipulated picture gives little away. There won't be any visible watermarks inside the frame of the photo identifying it as AI-generated. So if you see an AI-edited Google photo on social media, in a text message, or while scrolling through your photos app, you won't immediately know that the photo is synthesized.

Google announced the new disclosure for AI photos in a blog post on Thursday, some two months after it unveiled the Pixel 9 phones, which come packed with these AI photo-editing features. The disclosures appear to respond to criticism Google received for widely distributing these AI tools without any human-readable visual watermarks.

As for Best Take and Add Me, Google's other new photo-editing features that don't use generative AI, Google Photos will now also automatically note in the metadata that those photos have been edited, though not under the Details tab. Both features combine a series of photos into what appears to be a single clean image.

Those new tags don't solve the main issue people have with Google's AI editing features: the lack of visual watermarks inside the frame of a photo. Watermarks you can see at a glance might help people avoid feeling deceived, but Google doesn't offer them. We asked Google whether it would consider adding visual watermarks to its images, and the company didn't rule it out.

"This work is not done," said Michael Marconi, communications manager at Google Photos, in an email to TechCrunch. "We'll keep collecting feedback, improving and refining our safeguards, and assessing further solutions to add more transparency around generative AI edits."

Every photo edited with Google's AI already discloses that fact in its metadata. Now there's also an easier-to-find disclosure under the Details tab in Google Photos. But most people don't check the metadata or the Details tab for pictures they view online; they just look and scroll on without any further investigation.
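Metadata-based disclosure means the signal lives inside the file rather than in the pixels, so it can be checked programmatically. Below is a minimal sketch of such a check, assuming the disclosure is embedded along the lines of the IPTC "Digital Source Type" vocabulary for AI-generated and AI-composited media; the exact fields Google Photos writes are not confirmed here, and the XMP snippet is a fabricated illustration, not a real Google Photos file.

```python
# Sketch: scan a photo's raw bytes for IPTC "Digital Source Type" values
# that the IPTC vocabulary defines for AI-generated/AI-composited media.
# This is an illustrative heuristic, not Google's actual implementation.

AI_SOURCE_TYPES = (
    b"trainedAlgorithmicMedia",               # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",  # composite with AI-generated parts
)

def looks_ai_edited(image_bytes: bytes) -> bool:
    """Return True if a known AI digital-source-type marker appears
    anywhere in the file's embedded metadata."""
    return any(marker in image_bytes for marker in AI_SOURCE_TYPES)

# Synthetic example: a fake XMP packet resembling what an editor might embed.
fake_xmp = (
    b"<x:xmpmeta><rdf:Description "
    b"Iptc4xmpExt:DigitalSourceType="
    b"'http://cv.iptc.org/newscodes/digitalsourcetype/"
    b"compositeWithTrainedAlgorithmicMedia'/></x:xmpmeta>"
)
print(looks_ai_edited(fake_xmp))         # True
print(looks_ai_edited(b"plain pixels"))  # False
```

The catch, as the article notes, is that almost nobody runs a check like this: a disclosure that must be dug out of a Details tab or a metadata block is invisible in an ordinary feed.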

To be fair, visual watermarks within the frame of an AI-generated photo aren't a perfect solution either: they can be cropped or edited out, and we're back where we started.

The more Google's AI image tools are used, the more synthetic content people will encounter online, and the harder it gets to tell what's real. The approach Google has taken, relying on metadata watermarks, depends on platforms to inform users that they're viewing AI-generated content. Meta is already doing that on Facebook and Instagram, and Google says it also plans to flag AI images in Search later this year. But other platforms have been slower to catch up.

Blog | 2024-10-25 17:46:51