The Guardian
Dan Milmo, Global technology editor
Wed 3 Dec 2025 07.00 CET
Researchers uncovered 354 AI-focused accounts that had accumulated 4.5bn views in a month
Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, often at the same time of day – an indication of automation – and most of the accounts were launched at the beginning of the year.
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. More than 100m pieces of content are uploaded to the platform every day, indicating that labelled AI material is a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
Of the accounts that posted content most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found the accounts left half of the content they posted unlabelled, and less than 2% of posts carried TikTok's own label for AI content – which the nonprofit warned could increase the material's deceptive potential. Researchers added that the accounts sometimes escaped TikTok's moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts revealed in the study have subsequently been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments with anti-immigrant narratives and material sexualising female bodies, including girls who appeared to be underage. The female body category accounted for half of the top 10 most active accounts, said AI Forensics, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts have been taken down by TikTok after they were referred to the platform by the Guardian.
TikTok said the report’s claims were “unsubstantiated” and that the researchers had singled it out for an issue affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest-growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
The most popular accounts highlighted by AI Forensics in terms of views had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter up people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.
NewsGuard's Reality Check
newsguardrealitycheck.com
Nov 17, 2025
What happened: In an effort to discredit the Ukrainian Armed Forces and undermine their morale at a critical juncture of the Russia-Ukraine war, Kremlin propagandists are weaponizing OpenAI’s new Sora 2 text-to-video tool to create fake, viral videos showing Ukrainian soldiers surrendering in tears.
Context: In a recent report, NewsGuard found that OpenAI’s new video generator tool Sora 2, which creates 10-second videos based on the user’s written prompt, advanced provably false claims on topics in the news 80 percent of the time when prompted to do so, demonstrating how the new and powerful technology could be easily weaponized by foreign malign actors.
A closer look: Indeed, so far in November 2025, NewsGuard has identified seven AI-generated videos presented as footage from the front lines in Pokrovsk, a key eastern Ukrainian city that experts expect to soon fall to Russia.
The videos, which received millions of views on X, TikTok, Facebook, and Telegram, showed scenes of Ukrainian soldiers surrendering en masse and begging Russia for forgiveness.
Here’s one video supposedly showing Ukrainian soldiers surrendering:
And a video purporting to show Ukrainian soldiers begging for forgiveness:
Actually: There is no evidence of mass Ukrainian surrenders in or around Pokrovsk.
The videos contain multiple inconsistencies, including gear and uniforms that do not match those used by the Ukrainian Armed Forces, unnatural faces, and mispronunciations of the names of Ukrainian cities. NewsGuard tested the videos with AI detector Hive, which found with 100 percent certainty that all seven were created with Sora 2. The videos either had the small Sora watermark or a blurry patch in the location where the watermark had been removed. Users shared both types as if they were authentic.
The AI-generated videos were shared by anonymous accounts that NewsGuard has found to regularly spread pro-Kremlin propaganda.
Ukraine’s Center for Countering Disinformation said in a Telegram post that the accounts “show signs of a coordinated network specifically created to promote Kremlin narratives among foreign audiences.”
In response to NewsGuard’s Nov. 12, 2025, emailed request for comment on the videos, OpenAI spokesperson Oscar Haines said “we’ll investigate” and asked for an extension to Nov. 13, 2025, to provide comment, which NewsGuard provided. However, Haines did not respond to follow-up inquiries.
This is not the first time Kremlin propagandists have weaponized OpenAI’s tools for propaganda. In April 2025, NewsGuard found that pro-Kremlin sources used OpenAI’s image generator to create images of action figure dolls depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger.
US-designated terrorist organization ELN oversees a vast digital operation that promotes pro-Kremlin and anti-US content.
The National Liberation Army (ELN), a Colombian armed group that also holds influence in Venezuela, has built a digital strategy that involves branding themselves as media outlets to build credibility, overseeing a diffuse cross-platform operation, and using these wide-ranging digital assets to amplify Russian, Iranian, Venezuelan, and Cuban narratives that attack the interests of the United States, the European Union (EU), and their allies.
In the 1960s, the ELN emerged as a Colombian nationalist armed movement ideologically rooted in Marxism-Leninism, liberation theology, and the Cuban revolution. With an estimated 2,500 to 6,000 fighters, the ELN is Colombia’s oldest and largest active guerrilla group, and its operations extend into Venezuela. The ELN has maintained a strategic online presence for over a decade to advance its propaganda and maintain operational legitimacy.
The organization, which has previously engaged in peace talks with the Colombian state, has carried out criminal activities in Colombia and Venezuela, including killings, kidnappings, extortion, and the recruitment of minors. After successive military and financial crises in the 1990s, the armed group abandoned its historical reluctance to participate in drug trafficking. This diversification into illicit funding means that its armed clashes now also target rival criminal groups, in addition to its primary ideological enemy, the state’s forces.
In the north-eastern Catatumbo area, considered one of the enclaves of international cocaine trafficking, the group has been involved in one of the bloodiest confrontations seen in Colombia in 2025. Since January 15, the violence has left 126 people dead and at least 66,000 displaced, and has further strained the group’s engagement with the latest round of peace talks initiated by the current Colombian government. In that region, the ELN has battled the state and other armed actors, such as paramilitaries and rival guerrilla groups, for control of the area bordering Venezuela – an effort to connect it with the ELN’s other territories of influence in Colombia, such as the north and, at the other extreme, the western regions of Chocó and Antioquia.
The US Department of State reaffirmed the ELN’s designation as a terrorist organization in its March 5, 2025, update of the Foreign Terrorist Organizations (FTOs) list. This classification theoretically prevents the group from operating on major social media platforms, as US social media platforms, such as Meta, YouTube, and X, maintain policies prohibiting terrorist organizations from using their services. However, the DFRLab found that the group’s substantial digital footprint spans over one hundred entities across websites, social media, closed messaging apps, and podcast services.
To attract users across the Global Majority, many technology companies have introduced “lite” versions of their products: applications designed for lower-bandwidth contexts. TikTok is no exception, with TikTok Lite estimated to have more than 1 billion users.
Mozilla and AI Forensics research reveals that TikTok Lite doesn’t just reduce required bandwidth, however. In our opinion, it also reduces trust and safety. In comparing TikTok Lite with the classic TikTok app, we found several discrepancies between trust and safety features that could have potentially dangerous consequences in the context of elections and public health.
Our research revealed that TikTok Lite lacks basic protections afforded to other TikTok users, including content labels for graphic material, AI-generated content, misinformation, and dangerous-acts videos. TikTok Lite users also encounter arbitrarily shortened video descriptions that can easily eliminate crucial context.
Further, TikTok Lite users have fewer proactive controls at their disposal. Unlike traditional TikTok users, they cannot filter offensive keywords or implement screen management practices.
Our findings are concerning, and they reinforce a pattern of double standards. Technology platforms have a history of neglecting users outside of the US and EU, where there is markedly less potential for constraining regulation and enforcement. As part of our research, we discuss the implications of this pattern and offer concrete recommendations for how TikTok Lite can improve.
This report delves into Doppelgänger information operations conducted by Russian actors, focusing on their activities from early June to late July 2024. Our investigation was motivated by the unexpected snap general election in France, which prompted a closer look at Doppelgänger activities during this period.
While recent activities have already been described elsewhere [1, 2], this first dive into the information-operations topic offers a complementary threat-intelligence analyst’s perspective on the matter and brings additional knowledge of the associated infrastructure, tactics, and motivations in Europe and the United States.
How Doppelganger, one of the biggest Russian disinformation campaigns, is using EU companies to keep spreading its propaganda – despite sanctions.
The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk.
Experts are finding thousands of examples of AI-created content every week that could allow terrorist groups and other violent extremists to bypass automated detection systems.
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term for President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.
As American feminists came together in 2017 to protest Donald Trump, Russia’s disinformation machine set about deepening the divides among them.