In a world of fake content that is increasingly generated by AI, edited via filters, or re-posted from other sources, this could be a huge step forward.
This week, YouTube is rolling out a new label that will be displayed whenever a video is uploaded from a device that supports the C2PA standard, indicating that the content is authentic and hasn't been altered from its original form.
The label will appear on material recorded with cameras, apps, or software applications that meet the standards C2PA sets.
As Google explains:
"For "captured with a camera" to be displayed in the expanded description, creators need to use tools with built-in C2PA support (version 2.1 or higher) to record their videos.". This will enable the tools to add particular information, metadata, to the video file, which, as such, will prove the origin of it. YouTube will forward the message that the content was "Captured with a camera," and apply the notice when the system scans for this metadata. It also must not have any edits to the sound or visuals. This label indicates that it is filmed using a camera or other recording device, with no editing of any sounds or visuals.
So whenever you see this label, you can be confident that the footage is real, which gives viewers an increased degree of assurance.
Which is ironic, since Google itself is among the major tech firms actively seeking to expand the deployment of AI and artificial content on the web. After all, last week Meta solicited users to share AI-created images of the northern lights, and this week Google-owned YouTube began testing AI-generated responses to comments.
So the platforms themselves are encouraging users to post more junk content, while, in some respects, adding C2PA tags undermines that effort, since it amounts to an official recognition of how damaging such content can be.
The label may also serve an important function, especially when it comes to world events and deepfakes of situations that could change how people view what happened.
The tags will make it much harder to fake footage of actual events, since the metadata will show that what people are watching is unaltered and reflects the actual event. This could, in the near future, become the standard for event coverage, ensuring we're not seeing faked video.
That, of course, becomes increasingly important in a gen-AI-dominated world. So while it seems, at least superficially, to run counter to Google's broader AI push, it actually does align with the company's overall efforts.
Either way, it's another layer of transparency. It isn't perfect, and it won't catch every deepfake, but it's a step forward, and over time it'll prove more and more important.
Google has been working on the C2PA standards for the past eight months, after joining the coalition in February, and we're now seeing the first elements come into play.
It's a good project, one that will gain value over time.