Spot The Fake: Google Launches New Watermarking Tech

Martina Bretous

Where were you when you saw that image of Pope Francis in a white puffer jacket and jeweled crucifix, looking like he stepped out of a streetwear runway show?

We’ve seen deepfakes before, but few have had quite this impact. Now, Google has released a tool designed to keep images like this from spreading unchecked.

How does it work?

DeepMind, the AI research lab Google acquired in 2014, recently announced the launch of SynthID, a watermarking tool designed specifically to spot AI-generated images.

“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media and for helping prevent the spread of misinformation,” the DeepMind team said.

Unlike traditional watermarking techniques, which often rely on visible marks or on metadata that can get stripped out, DeepMind embeds a digital watermark directly into the pixels of an image.

So, even if you alter the image – cropping, resizing, filters, brightness adjustments – the watermark remains. The human eye won’t spot it, but detection software can identify it.

Not specific enough? Well, that’s all the team is willing to share about the tech.

“The more you reveal about the way it works, the easier it’ll be for hackers and nefarious entities to get around it,” DeepMind CEO Demis Hassabis told The Verge.
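Since the actual mechanism is under wraps, any code can only gesture at the general idea. Below is a minimal Python sketch of the oldest trick in invisible pixel-level watermarking: least-significant-bit (LSB) embedding. To be clear, this is a classic textbook technique, not SynthID’s method, and unlike SynthID, this toy version would not survive cropping, resizing, or filters.

# Toy illustration of invisible pixel-level watermarking via LSB embedding.
# NOTE: This is NOT SynthID. SynthID's (undisclosed) learned embedding
# survives cropping, resizing, and filters; this classic approach does not.
import numpy as np

def embed_watermark(image: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Hide payload bits in the least significant bit of the first pixels."""
    flat = image.flatten().copy()
    flat[: payload.size] = (flat[: payload.size] & 0xFE) | payload
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bits from the least-significant-bit plane."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # fake 8x8 grayscale image
payload = rng.integers(0, 2, size=16, dtype=np.uint8)     # 16-bit watermark

marked = embed_watermark(img, payload)
assert np.array_equal(extract_watermark(marked, payload.size), payload)
# Each pixel shifts by at most 1 of 255 levels: invisible to the human eye.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1

The gap between a fragile toy like this and a watermark that survives heavy editing is exactly the problem DeepMind’s approach is built to solve.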

Currently in beta, SynthID is available to users of Imagen, Google’s text-to-image model, through Vertex AI, Google Cloud’s machine learning platform. Customers can responsibly create, share, and identify AI-generated images.

Hassabis added that the technology isn’t foolproof against “extreme image manipulations,” but it’s a step in the right direction.

What’s next?

The team at DeepMind is working to expand access to SynthID, making it available to third parties and integrating it into more Google products.

The announcement came shortly after Google and six other top AI players attended a White House summit and pledged to invest in AI safety tools and research for responsible use.

In a statement, the White House requested new watermarking technology as a way for AI companies to earn public trust. And according to a Verge report, the software will likely extend to audio and video content.

The summit continued the government’s effort to combat deepfakes. In 2021, the Senate Homeland Security and Governmental Affairs Committee advanced the Deepfake Task Force Act, which is exactly what it sounds like.

On the light end of the spectrum, you have deepfakes used to style the Pope in the latest fashion trends. On the dark end, they can lead to political instability, fraud, and stock manipulation.

In 2021, Adobe cofounded the nonprofit Coalition for Content Provenance and Authenticity (C2PA), which works to standardize how media content is labeled and to combat misinformation. Its labels act as a seal of approval, showing consumers how an asset was created and whether it has been altered.

Due to the AI boom, C2PA’s membership has grown 56% in the past six months, according to an MIT Technology Review article.

Shutterstock announced in late July that it would integrate C2PA’s technical protocol into its AI systems and creativity tools, including its AI image generator.

The takeaway: With government pressure mounting, AI players big and small will need to prioritize responsible AI. Whether you’re using the tools or building them, there’s more oversight on the horizon.
