DeepMind partners with Google Cloud to watermark AI-generated images

In partnership with Google Cloud, Google DeepMind, Google’s AI research division, is launching a tool for watermarking and identifying AI-generated images. The tool, however, only works with images produced by Google’s own image-generating model.

Called SynthID, the tool is available in beta to a limited number of customers of Vertex AI, Google’s platform for building AI applications and models. SynthID works by embedding a digital watermark directly into an image’s pixels; the watermark is imperceptible to the human eye but detectable by an algorithm. For now, SynthID supports only Imagen, Google’s text-to-image model, which is available exclusively through Vertex AI.
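DeepMind has not published the details of how SynthID embeds its mark, so the snippet below is only a toy illustration of the general idea of a pixel-level watermark that the eye cannot see but software can recover. It uses classic least-significant-bit (LSB) steganography with NumPy, which is not SynthID’s technique; the stand-in image, payload size, and channel choice are assumptions made purely for the example.

```python
# Toy illustration only: SynthID's real embedding method is a learned, proprietary
# technique. This sketch hides bits in the least significant bit of one channel to
# show how a watermark can live in the pixels themselves while staying invisible.
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least significant bits of the red channel."""
    watermarked = image.copy()
    red = watermarked[..., 0].copy().reshape(-1)          # flat copy of the channel
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits   # overwrite the LSBs
    watermarked[..., 0] = red.reshape(image.shape[:2])    # write the channel back
    return watermarked

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bit string back out of the red channel's LSBs."""
    return image[..., 0].reshape(-1)[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in image
payload = rng.integers(0, 2, size=128, dtype=np.uint8)         # 128-bit mark

marked = embed_watermark(img, payload)
assert np.array_equal(extract_watermark(marked, payload.size), payload)
# The largest per-pixel change is 1 out of 255, so the mark is invisible -- but
# unlike SynthID, a plain LSB mark does not survive compression or filtering.
print("max pixel change:", int(np.abs(marked.astype(int) - img.astype(int)).max()))
```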

In a previous announcement, Google had stated its intention to incorporate metadata to indicate visual content generated by AI models. SynthID, however, takes this concept a step further.

In a blog post, DeepMind stresses the importance of being able to identify AI-generated content, given its potential to fuel misuse and misinformation. Knowing when media was generated by AI helps people understand what they are looking at and can slow the spread of misinformation.

According to DeepMind, SynthID is robust even in the face of modifications such as image filtering, color adjustments, and significant compression. The tool relies on two AI models—one for watermarking and another for identification—both trained on a diverse range of images.
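DeepMind has not released SynthID’s architecture or training recipe, but the two-model setup it describes resembles a common pattern in learned watermarking: an embedder and a detector trained jointly, with simulated modifications applied in between so that surviving them becomes part of the training objective. The PyTorch sketch below illustrates that pattern under stated assumptions; the layer sizes, the noise “attack,” and the 32-bit message length are illustrative choices, not SynthID’s design.

```python
# Conceptual two-model watermarking sketch: an embedder adds a tiny learned
# perturbation carrying a message, a detector recovers the message, and a
# differentiable stand-in for filtering/compression sits between them so the
# pair learns marks that survive modification. Not SynthID's actual models.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Adds a small learned perturbation encoding a message into the image."""
    def __init__(self, msg_bits: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_bits, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, image, message):
        b, _, h, w = image.shape
        msg_plane = message[:, :, None, None].expand(b, -1, h, w)  # bits as planes
        return image + 0.01 * self.net(torch.cat([image, msg_plane], dim=1))

class Detector(nn.Module):
    """Predicts the embedded message bits from a (possibly modified) image."""
    def __init__(self, msg_bits: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, msg_bits),
        )
    def forward(self, image):
        return self.net(image)

def simulate_modification(image):
    """Stand-in for filters/compression: noise keeps training differentiable."""
    return image + 0.05 * torch.randn_like(image)

embedder, detector = Embedder(), Detector()
opt = torch.optim.Adam(list(embedder.parameters()) + list(detector.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

image = torch.rand(4, 3, 64, 64)                  # stand-in training batch
message = torch.randint(0, 2, (4, 32)).float()    # random 32-bit watermarks
for _ in range(3):                                # a few illustrative steps
    marked = embedder(image, message)
    logits = detector(simulate_modification(marked))
    loss = loss_fn(logits, message)
    opt.zero_grad(); loss.backward(); opt.step()
print("loss:", float(loss))
```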

SynthID cannot confirm the presence of a watermark with absolute certainty, but it can distinguish images that are likely to contain one from those that are unlikely to.
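In practice, this means the detector reports a graded judgment rather than a hard yes or no. The small sketch below shows one way such an output could be surfaced; the thresholds and band labels are assumptions for illustration, not SynthID’s actual interface.

```python
# Hedged sketch of a graded detection output: a raw detector score is mapped to
# coarse confidence bands instead of a binary answer. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class DetectionResult:
    score: float   # detector confidence in [0, 1]
    label: str     # human-readable confidence band

def classify_watermark_score(score: float) -> DetectionResult:
    """Map a raw detector score to a 'likely / possibly / unlikely' band."""
    if score >= 0.9:      # assumed threshold: strong evidence of a watermark
        return DetectionResult(score, "watermark likely present")
    if score >= 0.5:      # assumed threshold: weak or partial evidence
        return DetectionResult(score, "watermark possibly present")
    return DetectionResult(score, "watermark unlikely")

for s in (0.97, 0.62, 0.12):
    print(s, "->", classify_watermark_score(s).label)
```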

SynthID is not foolproof against extreme image manipulation, but DeepMind positions it as a promising technical approach for the responsible handling of AI-generated content, one that could eventually be adapted to other AI models and to media beyond images, such as audio, video, and text.

Watermarking techniques for generative art are not new. Other companies, like Imatag and Steg.AI, have developed similar watermarking tools. The tech industry is increasingly under pressure to establish ways of indicating AI-generated works. Regulatory bodies like China’s Cyberspace Administration and U.S. senators are advocating for transparency in AI-generated content.

Microsoft, Shutterstock, Midjourney, and OpenAI have also committed to watermarking AI-generated content in various ways.

Despite these efforts, no universally accepted watermarking standard has emerged. SynthID, like other such technologies, is tied to specific image generators and may not be suitable for open-source AI image generators. DeepMind is considering making SynthID available to third parties in the future, but whether they would adopt it remains uncertain.
