
TikTok Rolls Out Labels for AI-Generated Content

AI "can potentially confuse or mislead viewers if they're not aware content was generated or edited with AI," TikTok said

TikTok announced new tools to help creators label content that was generated by artificial intelligence. In addition, the company said on Tuesday (Sept. 19) that it plans to “start testing ways to label AI-generated content automatically.”

“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” the company wrote. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”


As AI technology has become more capable (at generating credible-looking images or mimicking pop stars' voices, for example) and more widely used, regulators have expressed increasing concern about its potential for misuse.

In July, President Biden’s administration announced that seven leading AI companies made voluntary commitments “to help move toward safe, secure, and transparent development of AI technology.” One key point: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”

Voluntary commitments are, of course, voluntary, which is likely why TikTok also announced that it will “begin testing an ‘AI-generated’ label that we eventually plan to apply automatically to content that we detect was edited or created with AI.” Tools to determine whether an image has been crafted by AI already exist, and some are better than others. In June, The New York Times tested five programs, finding that the “services are advancing rapidly, but at times fall short.”

The challenge is that as detection technology improves, so does the tech for evading detection. Cynthia Rudin, a computer science and engineering professor at Duke University, told the paper that “every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator. The generators are designed to be able to fool a detector.”


Similar detection efforts are being discussed in the music industry as it debates how to weigh AI-generated songs relative to tracks that incorporate human input.

“You have technologies out there in the market today that can detect an AI-generated track with 99.9% accuracy, versus a human-created track,” Believe co-founder and CEO Denis Ladegaillerie said in April. “We need to finalize the testing, we need to deploy,” he added, “but these technologies exist.” 

The streaming service Deezer laid out its own plan to “develop tools to detect AI-generated content” in June. “From an economic point of view, what matters most is [regulating] the things that really go viral, and usually those are the AI-generated songs that use fake voices or copied voices without approval,” Deezer CEO Jeronimo Folgueira told Billboard this summer.

Moises, another AI-technology company, dove into the fray as well, announcing its own set of new tools on Aug. 1. “There’s definitely a lot of chatter” about this, Matt Henninger, Moises’ vp of sales and business development, told Billboard. “There’s a lot of testing of different products.”