Google is reportedly developing a new feature for its popular photo and video sharing platform, Google Photos, that would help users identify whether an image has been generated or enhanced by artificial intelligence (AI). The development comes at a crucial time, as AI continues to shape how digital content is created and consumed. The feature, intended to address growing concerns about the authenticity of digital content, especially deepfakes, would give users more information about the origins of the images in their galleries. Here’s what we know so far.
The Rise of Deepfakes and AI-Generated Content
The increasing sophistication of AI technologies has led to a significant rise in AI-generated and AI-altered images, videos, and other digital media. While AI is revolutionizing content creation, it is also raising concerns, particularly around deepfakes: hyper-realistic, AI-generated images and videos that can easily deceive viewers. Deepfakes have been used for everything from digitally altered advertisements to fake news and even malicious content designed to manipulate public opinion or damage reputations.
Recently, Indian actor Amitabh Bachchan filed a lawsuit against a company for using deepfake technology to create advertisements featuring his likeness without his permission. As these concerns grow, platforms like Google are taking steps to combat the spread of misinformation and ensure that users are informed about the origins of the content they consume.
How Will Google Photos Address AI-Generated Content?
According to reports from Android Authority, the feature is still under development but has already been spotted in hidden code within version 7.3 of the Google Photos app. The code points to new identification resource tags, including an “ai_info” tag that would indicate whether a particular image was created or enhanced using AI. This tag could give users more transparency about the content stored in their galleries, helping them distinguish between authentic and AI-generated images.
Another potential addition is a “digital_source_type” tag, which could name the specific AI tool or model used to generate or modify the image. Users may therefore be able to tell whether an image was created with Google’s own AI tools, such as Gemini, or with other popular image generation models such as Midjourney or DALL·E.
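Neither tag has a confirmed format yet, so any interpretation is speculative. Purely as an illustration, the Python sketch below shows how an app might map a hypothetical “digital_source_type” value to a user-facing label; the tag names and values are assumptions for this example, not a documented Google Photos schema.

```python
# Hypothetical mapping from a "digital_source_type"-style value to a label.
# The tag names and example values below are assumptions for illustration;
# Google has not published a schema for the reported "ai_info" metadata.
SOURCE_TYPE_LABELS = {
    "ai_generated": "Created with AI",
    "ai_edited": "Edited with AI",
    "camera_capture": "Captured with a camera",
}

def attribution_label(metadata: dict) -> str:
    """Return a user-facing attribution label for an image's metadata dict."""
    ai_info = metadata.get("ai_info", {})
    source_type = ai_info.get("digital_source_type", "unknown")
    return SOURCE_TYPE_LABELS.get(source_type, "No AI information available")

# Example usage with a made-up metadata record:
print(attribution_label({"ai_info": {"digital_source_type": "ai_generated"}}))
```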
Possible Approaches to AI Attribution in Google Photos
While it remains unclear how the AI attribution feature will be presented to users, there are several possibilities. One approach could be to embed the AI-related data within the image’s metadata. Metadata, including Exchangeable Image File Format (EXIF) tags, already holds useful information such as the date an image was captured, the camera settings, and the location where it was taken. The new AI-related tags could sit alongside these fields, letting users see whether AI was involved in creating the image and which tools were used.
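For context, the snippet below is a minimal sketch of how image metadata is typically inspected, using Python and the Pillow library. It assumes a local file (example.jpg) and treats any AI-related fields as hypothetical, since Google has not documented where such tags would live.

```python
# A minimal sketch of reading image metadata with Pillow (Python).
# Any AI-attribution fields are hypothetical: no public schema exists yet,
# so this only shows how EXIF and XMP metadata are usually inspected.
from PIL import Image, ExifTags

def inspect_metadata(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            # Map numeric EXIF tag IDs to readable names (e.g. DateTime, Model)
            name = ExifTags.TAGS.get(tag_id, tag_id)
            print(f"{name}: {value}")

        # Attribution-style fields would more plausibly live in the XMP packet;
        # recent Pillow versions expose it via getxmp().
        xmp = img.getxmp() if hasattr(img, "getxmp") else {}
        print("XMP packet:", xmp or "none found")

inspect_metadata("example.jpg")
```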
However, accessing metadata typically requires users to manually open an image’s file information, which is inconvenient for most people, especially those unfamiliar with navigating image metadata. As a more visible solution, Google could add on-image badges, similar to Instagram’s approach to labeling AI-generated content. These badges would be displayed directly on the image, making AI involvement immediately recognizable.
Implications of the AI-Flagging Feature
The introduction of this feature in Google Photos aligns with a broader trend of tech companies taking responsibility for combating the spread of misinformation and deepfakes. Platforms such as Facebook, Instagram, and Twitter have already started implementing AI-detection tools to flag or remove misleading content. By enabling users to identify AI-generated images in their galleries, Google is addressing a major concern in the digital age: the increasing difficulty in distinguishing between real and manipulated content.
This move is likely part of a broader effort by Google to build trust in AI and ensure that users remain informed about the nature of the digital media they consume. As AI-generated content becomes more widespread, the ability to identify it will be critical for combating the potential for manipulation and deception. Furthermore, Google’s focus on transparency aligns with growing calls for tech companies to be more proactive in preventing the spread of disinformation, particularly during politically sensitive times or in relation to public figures.
Although Google has not officially confirmed the release date for this AI attribution feature, its presence in the app’s code suggests that it could be launched in the near future. This feature would represent a significant upgrade to Google Photos, adding a new layer of transparency to users’ galleries and giving them more control over the authenticity of the content they store and share.
As AI continues to evolve, tools like these will become increasingly important in the fight against deepfakes and digital manipulation. Whether through metadata tags or more visible labels, Google’s efforts to integrate AI attribution in Google Photos could set a new standard for how digital content is managed and understood in the age of AI.