After introducing the “About this image” tool to Circle to Search, Google has announced plans to expand this feature to more services, including Search and YouTube. This update will use a new verification standard to help users tell the difference between AI-generated images and real ones.
In a blog post, Google explained that it will start labeling images in Google Search based on their origin, such as marking those that have been AI-edited or manipulated. This information will be available in the “About this image” tab across all platforms, much like the feature that’s already in use with Circle to Search on Pixel and Samsung Galaxy devices. This functionality is expected to be available on more Android devices soon.
Google also mentioned that this feature will work with images containing C2PA metadata. C2PA stands for the Coalition for Content Provenance and Authenticity, an industry body that develops an open standard for recording the provenance of digital content and promoting the responsible use of AI, with Google playing a key role.
Various types of media, such as images, videos, and audio files, that carry this metadata will show details about their origin and any modifications made. This could include information about the device or software used to create them.
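To make the idea concrete, the sketch below shows the kind of provenance summary such metadata enables. It is illustrative only: real C2PA manifests are cryptographically signed structures embedded in the file, and the dictionary layout and `summarize_provenance` helper here are invented for this example (only the `c2pa.actions`, `c2pa.created`, and `c2pa.edited` labels come from the C2PA vocabulary).

```python
# Hypothetical, simplified stand-in for a C2PA manifest. Real manifests are
# signed binary structures; the field layout here is invented for illustration.
manifest = {
    "claim_generator": "ExampleCamera/1.0",  # device/software that created the asset
    "assertions": [
        {
            "label": "c2pa.actions",  # real C2PA assertion label
            "data": {
                "actions": [
                    {"action": "c2pa.created"},
                    {"action": "c2pa.edited", "softwareAgent": "ExamplePhotoEditor/2.3"},
                ]
            },
        },
    ],
}

def summarize_provenance(manifest: dict) -> dict:
    """Collect the originating tool and any recorded edit actions."""
    actions = []
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for act in assertion["data"].get("actions", []):
                actions.append(act["action"])
    return {"created_by": manifest.get("claim_generator"), "actions": actions}

print(summarize_provenance(manifest))
```

A viewer like the “About this image” tab could surface exactly this kind of summary: what created the file, and whether any edit actions were recorded afterward.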
C2PA can complement Google’s SynthID
The new C2PA standard works closely with Google’s own SynthID, a watermarking technology developed by Google DeepMind that can mark and identify AI-generated content. The two approaches are complementary: SynthID embeds invisible watermarks directly into AI-generated content so it can later be detected, while C2PA provides a broader framework of signed metadata describing a file’s origin and edit history. Used together, they create a stronger system for verifying whether content is real or AI-generated.
In addition to improving image search results, Google plans to integrate C2PA into its advertising systems. It’s also working on bringing C2PA to YouTube, where videos will display this metadata.
However, widespread adoption of C2PA faces some challenges. Currently, only a few companies and AI services support the standard, and only a limited number of devices embed watermarks or provenance metadata in the content they produce.
There are also issues to address, such as the ease with which people can remove or alter metadata from images. Despite these hurdles, the benefits of this initiative may outweigh the challenges over time.
How do you feel about ethical and transparent use of artificial intelligence? Please let us know in the comments.