Google now adds watermarks to all its AI-generated content

This is another in a year-long series of stories identifying how the burgeoning use of artificial intelligence is impacting our lives — and ways we can work to make those impacts as beneficial as possible.

Chatbots talk to people using artificial intelligence, or AI. They can explain computer code or help students with their homework. Google runs Gemini, one such bot. The company also runs bots that generate music, images and video. Now, Google’s putting an invisible signature — called a watermark — onto everything its bots generate.

A watermark is like an artist’s signature on a painting or a stamp on an official document. In this case, it marks text or other content as something that a Google bot has created. But unlike a signature or a stamp, it’s completely invisible to the eye.

This banknote’s old-school watermark is built into the paper by thinning parts of the material as it’s made. Things can be printed in ink atop the paper, but the light-and-dark patterning seen here won’t change. Holding the paper up to a light reveals the mark.
kool99/iStock/Getty Images Plus

Pushmeet Kohli and a team of researchers at Google DeepMind announced the new watermarking tool October 23 in Nature. They call it SynthID.

These watermarks do not take away from “the quality, accuracy, creativity or speed of the text generation,” Kohli said in a statement. Same goes for images or other media. But watermarking makes it easier for people to tell if some content came from the AI model that powers a specific bot.

Google is the first large tech company to openly watermark AI-generated material. And it made its SynthID code freely available for anyone to access. So other companies that run similar bots would be able to use this new tool.

“I am excited to see Google taking this step for the tech community,” says Furong Huang. She’s a computer scientist at the University of Maryland in College Park.

Junfeng Yang also thinks it “is really significant” that Google’s starting to use SynthID on all its bots. Yang is a computer scientist at Columbia University in New York City. Neither he nor Huang took part in developing this new watermark.

Winning the tournament

Watermarking bot-made content is not a new idea, says Yang. Kohli and his team just came up with a cleverer way to do it. It involves a tournament. To understand it, you need to know a little about how a generative AI model works.

Take a text-generating AI model like the one behind Google’s Gemini. This type of model studies lots of existing text to learn how likely certain words are to appear together. Then, it can write its own text in response to users’ prompts.

To generate that text, the AI model first looks at the words in a prompt. Let’s say the prompt is: “my favorite tropical fruit is …” It then predicts what words are most likely to come next. A regular AI model would simply select a very likely next word, such as “mango” or “banana,” and move on. Other words, like “papaya” and “lychee,” are possible too, but less likely.
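If you know a little Python, here’s a toy sketch of that regular selection step. The words and probabilities are made up for our fruit example; a real model computes these numbers from all the text it studied.

```python
import random

# Made-up next-word probabilities for the prompt
# "my favorite tropical fruit is ..."
next_word_probs = {
    "mango": 0.40,
    "banana": 0.35,
    "papaya": 0.15,
    "lychee": 0.10,
}

# A regular AI model simply samples one next word,
# weighted by how likely each word is.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print("Next word:", random.choices(words, weights=weights, k=1)[0])
```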


SynthID adds an additional step — the tournament — before selecting a word. This pits a bunch of possible next words against each other. The most likely next words get a bunch of spots in the tournament. In our example, “mango” and “banana” would both show up lots of times. “Lychee” and “papaya” would only show up once or twice.

During the tournament, pairs of words compete against each other over multiple rounds. Only the winners advance. It’s just like March Madness in basketball. But here, a series of math calculations gives each word its score. The score is based on a secret watermarking key. Perhaps the scoring makes “mango” beat “banana.” That would give “mango” an edge between these two very likely words. The watermarking key might also give “lychee” a high score. But because “mango” shows up in the tournament more often, it’s still more likely to emerge as the overall winner.

So the tournament setup still favors the selection of very likely words. And that is important. Some other methods of watermarking make unusual words more likely to show up in responses. This can lead to mistakes or low-quality output.

These tournaments happen over and over again, for every word in the generated text.
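Here’s a toy Python sketch of the whole tournament idea. This is not Google’s actual SynthID code: the bracket size and the hash-based scoring function are stand-ins we’ve invented to show how a secret key can steer the pick while still favoring likely words.

```python
import hashlib
import random

def score(word: str, key: str, position: int) -> float:
    """Toy stand-in for the secret scoring: mix the word, the
    watermarking key and the word's position in the text into a
    hash, then turn that hash into a number between 0 and 1."""
    digest = hashlib.sha256(f"{key}|{position}|{word}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def tournament_pick(probs: dict[str, float], key: str,
                    position: int, entries: int = 16) -> str:
    # Fill the bracket: likelier words get more spots, so "mango"
    # and "banana" appear many times, "lychee" once or twice.
    bracket = random.choices(list(probs), weights=list(probs.values()),
                             k=entries)
    # Play the rounds: in each pair, the higher-scoring word
    # advances, until one overall winner remains.
    while len(bracket) > 1:
        bracket = [max(pair, key=lambda w: score(w, key, position))
                   for pair in zip(bracket[::2], bracket[1::2])]
    return bracket[0]

probs = {"mango": 0.40, "banana": 0.35, "papaya": 0.15, "lychee": 0.10}
print(tournament_pick(probs, key="secret-key", position=0))
```

Try different position values: the secret scores change, so the winner changes from step to step. But across many steps, the likelier words still win most often, because they hold most of the bracket’s spots.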

SynthID adds an invisible watermark to AI-generated images, video, audio or text. Google developed the tool and is adding it to content created with its AI models. “SynthID is just one tool we are using to make sure generative AI tools are built with safety in mind,” says a company video.

Later, when someone wants to check text for a watermark, they just need a tool with the watermarking key. This tool does the math to find the scores for each word in the text. If the words score no better than chance would predict, the text likely didn’t go through this math-based tournament.

If the words in a text have very high scores, though, that means these words would have won their tournaments. So the text was likely watermarked.

In the above example, finding “mango” after “my favorite tropical fruit is …” would be evidence of a watermark. Finding “banana” would not be, because this word’s score was low. The longer the text is, the more words the watermarking key can score, and the more obvious the use of a watermark will be.
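Continuing our toy sketch, checking for the watermark might look something like this. Again, the scoring function is our made-up stand-in, not the real SynthID math.

```python
import hashlib

def score(word: str, key: str, position: int) -> float:
    # Same toy scoring used in the generation sketch above.
    digest = hashlib.sha256(f"{key}|{position}|{word}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def average_score(words: list[str], key: str) -> float:
    """Average the secret scores of a text's words. Tournament
    winners tend to score high, so watermarked text averages well
    above the roughly 0.5 that unwatermarked text would get. The
    longer the text, the more reliable that gap becomes."""
    return sum(score(w, key, i) for i, w in enumerate(words)) / len(words)

text = "my favorite tropical fruit is mango".split()
print(f"Average score: {average_score(text, 'secret-key'):.2f}")
```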

A massive live experiment

Google’s DeepMind team thought their tournament approach would lead to high-quality watermarked text. But to make sure, they ran some experiments.

One happened live, as regular people were using Gemini. Their prompts got randomly routed through either the regular bot or one that added watermarks. This happened for 20 million interactions.

Gemini users can rate the bot’s responses with a thumbs-up or thumbs-down. The researchers found that the ratings did not differ in any significant way when watermarking was used.

That was great news. But the researchers didn’t stop there. They did one more experiment. They had a small group of people compare watermarked and non-watermarked responses side by side. These people “rated the quality across several categories,” explains Yang. For instance, they rated how factual, helpful and grammatically correct the text was. Again, there was no significant difference in which they preferred.

One more tool in the toolbox

Detecting AI-generated content is important for many reasons. Teachers might want to know if students are trying to cheat using AI. Social-media sites might want to prevent AI-generated misinformation or spam from spreading.

Gemini’s watermark “is not a complete solution” to these problems, notes Yang. That’s because testing for the watermark can only tell you whether the content was created using one of Google’s bots. “There will be other models that don’t have watermarks,” notes Yang. “How do you recognize them?”

Governments could pass laws to require watermarks on AI-generated content. But even then, “truly malicious people” will find ways around this, says Yang. For example, they might find a way to remove watermarking code.

Yang’s team developed a tool called Raidar that can help detect AI-generated text, even when it hasn’t been watermarked. But “watermarking enables much more reliable detection,” says Huang. She is currently helping to run a competition called Erasing the Invisible. In this contest, teams compete to try removing watermarks from images. The goal is to discover which watermarks hold up best when people try to mess with them.

Kohli and colleagues call SynthID “an important building block for developing more reliable AI-identification tools.”
