Google will label AI-generated and AI-edited images to help users distinguish between real and altered content. The planned tool will indicate whether an image is real, edited with software like Photoshop, or created by a generative AI model. It will be integrated into Google Search and other services, offering more transparency about the origin of images online.
This new system will rely on the Coalition for Content Provenance and Authenticity (C2PA) standard. Major tech companies like Amazon, Microsoft, Adobe, OpenAI, and Intel are part of this coalition. The C2PA standard embeds data about an image’s origin in both hardware and software, creating a digital trail that tracks where and how the image was created.
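Concretely, the C2PA specification stores its provenance manifest inside the image file itself; in a JPEG, the manifest travels in APP11 marker segments as JUMBF boxes labeled "c2pa". As a rough illustration of what "embedding data about an image's origin" means at the byte level, the sketch below scans a JPEG's marker segments for APP11 payloads that mention that label. It is a simplified heuristic for illustration only, not a conforming C2PA/JUMBF parser, and the function name is invented here.

```python
def find_c2pa_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Return payloads of APP11 segments that mention the C2PA JUMBF label.

    Simplified heuristic for illustration: a real reader must parse the
    JUMBF box structure and validate the manifest's signatures.
    """
    segments = []
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with the SOI marker
        return segments
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:      # lost sync with marker stream; stop
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):     # EOI, or SOS (entropy-coded data follows)
            break
        # Segment length is big-endian and includes its own two bytes.
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        payload = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 carrying a C2PA box
            segments.append(payload)
        i += 2 + length
    return segments
```

Because the manifest lives in ordinary marker segments, any C2PA-aware tool can locate it without decoding the image data, which is what makes a cross-vendor "digital trail" practical.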
Google has played a significant role in developing the C2PA’s latest technical standard, version 2.1. It will also use an upcoming C2PA trust list to verify the accuracy of metadata. Laurie Richardson, Vice President of Trust and Safety at Google, explained, “The trust list helps ensure that the information about an image, like the camera model used, is correct.”
Feature Integration Across Google Services
To enhance transparency, Google will label AI-generated and AI-edited images in its search results. The feature will be part of Google’s updated “About this image” tool in Search and will notify users when an image has been created or altered by AI. Google also plans to embed this metadata into its ad services to monitor AI-generated imagery and help enforce advertising policies related to AI-created content.
Richardson mentioned that Google aims to use C2PA signals to enforce policies over time. Additionally, the company is exploring how this data could be displayed to YouTube viewers for videos captured with a camera. Further updates on this initiative are expected later in the year.
Challenges in Wider Adoption
While this is a big step toward curbing the misuse of AI-generated images, broader adoption of the C2PA standard remains a challenge. Currently, only a few cameras from brands like Leica and Sony support this standard. For greater success, widespread support from leading manufacturers like Nikon, Canon, and Apple is crucial.
Software compatibility is also limited at this point. Applications like Adobe Photoshop and Lightroom are among the few that can include C2PA data. Expanding support across more platforms is essential for the initiative’s long-term success.
Richardson acknowledged that while these steps are significant, content authentication is a complex issue that requires collaborative efforts across the tech industry. “There is no one-size-fits-all solution for online content,” she said, emphasizing the importance of industry cooperation.
Tackling the Rise of AI-Generated Images
The rise of AI-generated images has also raised concerns about their potential use in scams. Deepfake technology, for example, has been used in high-profile fraud cases. In one case, criminals used AI to impersonate a company executive during a video call, leading to a loss of $25 million.
The rollout will label AI-generated and AI-edited images across Google Search, Google Lens, and the Circle to Search feature on Android devices. The move responds to the growing presence of such content in search results and will give users more information to differentiate between real and AI-generated images.
Google aims to offer more control and transparency through its “About this image” tool, ensuring users can avoid misleading content. The company is also working to integrate similar measures for videos on YouTube, with further updates anticipated later this year.
Google is not the only company working on content authenticity. Major players like Amazon, Microsoft, OpenAI, and Adobe are part of the C2PA, a collaborative effort to establish standards for fighting misinformation and image manipulation. Despite the involvement of these tech giants, the standard is still in its early stages, with limited device support.