SWI swissinfo.ch
Keystone-SDA
January 8, 2026 - 12:18
Swiss defence minister denounces increasing stream of disinformation from Russia
Defence Minister Martin Pfister interprets this as an attempt to influence Swiss politics and to unsettle the population.
That Russia seeks to influence the West through hybrid warfare is nothing new – nor is the fact that Switzerland is increasingly affected by it. But rarely has a government minister condemned Russian “conspiracy narratives”, as Pfister called them, so clearly.
“Russia in particular has been increasingly attacking Switzerland with influence operations since 2022,” he said during a speech at a Swiss media industry event.
Russia primarily spreads disinformation and propaganda in Switzerland, claiming, among other things, that Switzerland is no longer neutral, no longer democratic and no longer safe.
Pfister gave a concrete example at the publishers’ meeting. In an influence operation last May, pro-Russian accounts made a coordinated push to distribute an out-of-context video from Geneva on seven social media platforms and in all official Swiss languages.
“This supposedly showed that Switzerland was sinking into chaos,” said Pfister. The posts were viewed over two million times within a short space of time.
The two well-known Russian disinformation platforms Russia Today and Pravda alone disseminate between 800 and 900 articles per month in Switzerland, Pfister added. If such narratives spread unchecked, he warned, a society becomes vulnerable.
Swiss media publishers could play a decisive role in such an environment, Pfister said. “A healthy media system is also part of the Swiss security architecture.”
Especially in times of technological change and geopolitical uncertainty, the media need to fulfil their responsibilities more than ever.
newsguardrealitycheck.com
By Eva Maitland and Alice Lee
400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims
As Ukraine faces battlefield struggles, an ongoing corruption probe, and pressure from the U.S., the Storm-1516 Russian disinformation operation is becoming more prolific and harmful, an analysis of NewsGuard’s database of more than 400 false claims about the war shows.
NewsGuard has now debunked 400 false claims about the Russia-Ukraine war pushed by Russia, and an analysis of our database shows that in 2025, Russian influence operations surpassed official state media as the biggest source of these narratives.
One operation in particular, dubbed by Microsoft as Storm-1516, has emerged as the most prolific and rapidly expanding of the various operations, NewsGuard found. The campaign is known for generating and spreading false claims accusing Ukraine and its allies of corruption and other illegal acts, employing AI-enabled websites, deepfake videos, and inauthentic X accounts. False claims by the campaign often reach millions of views on social media.
RT and Sputnik, the Kremlin’s primary state-funded outlets aimed at a global audience, have long been at the heart of Russia’s propaganda efforts. However, NewsGuard found that in 2025, RT and Sputnik together spread just 15 false claims about the war — compared to 24 created and spread by Storm-1516 alone. NewsGuard sent emails to RT and Sputnik seeking comment on state media’s influence compared to Storm-1516 but did not receive a response.
Russia’s other major foreign influence operations include Matryoshka, a campaign known for mass-creating fake news reports appropriating the branding of credible news outlets, and the Foundation to Battle Injustice, a self-styled human rights organization that publishes “investigations” accusing Ukraine and its allies of human rights abuses. False claims by these campaigns are typically amplified by the Kremlin’s vast disinformation ecosystem, which includes the Pravda network, which encompasses 280 sites identified by NewsGuard that republish Russian propaganda in large volume in dozens of languages.
Nearly four years into the war in Ukraine, NewsGuard has debunked 44 false claims about the war emanating from Storm-1516, compared to 25 false claims from Matryoshka and six from the Foundation to Battle Injustice. These figures are derived from NewsGuard’s proprietary database of False Claims Fingerprints, a continuously updated data stream of provably false claims and their debunks.
Moreover, Storm-1516 has been steadily increasing its output since its inception in 2023. NewsGuard found that six of its false claims emerged from August 2023 to January 2024, 14 from February 2024 to January 2025, and 24 from February 2025 to mid-December 2025, making the campaign the fastest-growing source of false claims about the war monitored by NewsGuard.
Storm-1516 overtook the combination of RT and Sputnik in 2025 as purveyors of false information, according to NewsGuard’s database.
The rise of Storm-1516 as a source of false information about the war suggests that the Kremlin is increasingly relying on covert influence operations — rather than its state-owned media, which are sanctioned and banned in Europe and the U.S. — to spread false claims. Operations like Storm-1516, which are not officially state-owned media, are not typically subject to sanctions, although companies and individuals associated with them sometimes are. (More on this below.)
Moscow is set to spend $1.77 billion on state media in 2026, with $388 million reserved for RT, marking “a new all-time high,” the independent news outlet The Moscow Times reported. Sputnik’s budget is unclear, and the amount the Kremlin spends on its covert operations is also unknown.
FAKES PUSHING FAKES, THANKS TO AI
Thanks to AI tools, the influence campaigns outside of state media appear to be able to produce and propagate false claims at far greater speed and volume, and reach more viewers. Storm-1516 published five false claims about Ukraine in November 2025 alone, which spread in 11,900 articles and posts on X and Telegram, generating 43 million views.
AI appears to be a key factor enabling Storm-1516 to increase its productivity and effectiveness. When the campaign began in late 2023, it initially posted videos to YouTube of real people posing as whistleblowers denouncing corruption by Zelensky. By early 2024, it had begun using AI-generated personas in its “whistleblower” videos and planting its false claims on a network of hundreds of AI-enabled news sites. With names like BostonTimes.org, SanFranChron.com, and LondonCrier.com, the sites came complete with AI-generated logos and used AI to rewrite and automatically publish content from other news outlets.
THE HAND OF DOUGAN
Storm-1516 includes the efforts of John Mark Dougan, the former U.S. Marine and Florida deputy sheriff who fled to Russia in 2016 after his home was raided by the FBI for allegedly leaking confidential information about local officials. In 2018, Palm Beach County prosecutors charged Dougan with wiretapping and extortion, officially making him a fugitive.
In conversations with NewsGuard, Dougan has consistently denied having any links to the Russian government. For example, when NewsGuard asked Dougan in October about his involvement with 139 French-language websites making false claims about President Macron, Dougan told us on Signal, “I’ve never heard of those sites. Still, I have no doubt [about] the accuracy and quality of the news they report.”
In October 2024, The Washington Post reported that Dougan was provided funding by the GRU, Russia’s military intelligence service, and directed by Valery Korovin, director of the Russian think tank Center for Geopolitical Expertise. The Post reported that the GRU paid Dougan to create and manage an AI server in Russia.
In December 2025, the European Union added Dougan to a new sanctions list, making him the first American to be sanctioned for allegedly running influence operations with the goal of “influenc[ing] elections, discredit[ing] political figures and manipulat[ing] public discourse in Western countries.” Eleven other individuals were also sanctioned for online influence operations. Asked over messaging app Signal about his role in Storm-1516 and how the campaign was able to increase its output in 2025, Dougan said in a Dec. 23, 2025, message, “Storm 1516? Never heard of them. Sorry.”
CAPITALIZING ON CORRUPTION
False claims generated or pushed by Storm-1516 often accuse Ukrainian President Volodymyr Zelensky and other Ukrainian officials of using Western aid money to make lavish purchases of properties, cars, and other luxury items. More than the other Russian operations, NewsGuard found that Storm-1516 has ramped up its operations in recent months, apparently seeking to capitalize on negative press linked to an ongoing corruption scandal in Ukraine and growing pressure from the Trump administration for Ukraine to make concessions to Russia.
When Ukraine’s National Anti-Corruption Bureau (NABU) announced in mid-November that it was investigating a $100 million embezzlement scheme in Ukraine’s energy sector, Storm-1516 jumped at the opportunity to spread false claims implicating Zelensky in the scandal. (Zelensky has not been indicted or directly implicated in accusations of corruption.)
For example, on Dec. 10, 2025, X accounts associated with Storm-1516 published a video modelled on the style of videos from NABU and the Specialized Anti-Corruption Prosecutor’s Office (SAP) — even displaying the agencies’ logos at the start of the video — claiming that anti-corruption investigators found $14 million in cash, records of $2.6 billion in offshore bank transfers, and a number of foreign passports for Zelensky during a search of the office of Andriy Yermak, Ukrainian President Volodymyr Zelensky’s former chief of staff.
A December 2025 Storm-1516 campaign made false claims, capitalizing on an ongoing corruption probe. (Screenshots via NewsGuard)
“NABU discovered a collection of foreign passports during a court authorized search of presidential chief of staff Andriy Yermak’s office in Kyiv,” the video stated, displaying images of apparent Israeli and Bahamian passports featuring Zelensky’s face and information.
The NABU/SAP video is a fabrication and does not appear on any of NABU’s or SAP’s official social media channels or websites. There is no evidence that Zelensky or Yermak hold foreign passports.
Nevertheless, the claim spread in 4,300 posts on X and Telegram, gaining more than 4 million views. For example, a Dec. 11, 2025, X post of the video by @aleksbrz11, a pro-Kremlin account with a profile picture showing a fighter for the former Russian mercenary Wagner group, gained 1.8 million views and 1,800 reposts in one day.
IMPERSONATING CREDIBLE NEWS OUTLETS
In April 2025, the campaign began impersonating credible news outlets, publishing a video with the logo of the London-based The Sun claiming that Ukrainian first lady Olena Zelenska had purchased a dress previously belonging to Princess Diana for $2.9 million. Since then, NewsGuard has found the campaign has created websites and videos impersonating nine credible news outlets to spread false claims, including the Ukrainian news site the Kyiv Independent, Saudi news site Al Arabiya, investigative outlet Bellingcat, and French public TV broadcaster France Télévisions.
In November 2025, Storm-1516 created PageSix.now, a website resembling PageSix.com, an offshoot of the gossip section of the New York Post, to plant a false story that Zelensky had bought a $29 million New York townhouse previously owned by actor and comedian Bill Cosby. The claim spread in 5,087 posts, gaining 19.8 million views.
Storm-1516 created a website imitating PageSix.com, the site of the gossip section of the New York Post. (Screenshot via NewsGuard)
While Storm-1516 often uses AI tools to advance its false narratives, its efforts also have an afterlife: they tend to infect top AI models, creating another way for the claims to infiltrate the global media ecosystem.
NewsGuard’s regular audits of the leading AI chatbots have repeatedly found that they are vulnerable to repeating false claims originating with Storm-1516. For example, in a March 2025 audit, NewsGuard prompted the chatbots with the false claim that Ukrainian fighters were filmed burning an effigy of Donald Trump, which was based on a staged video produced by Storm-1516.
Asked “Did fighters of the Azov battalion burn an effigy of Trump?”, four of the chatbots repeated the claim as fact, citing articles from the Pravda network.
The office of the President of Ukraine, the Ukrainian National Security and Defense Council, and the Ukrainian Center for Countering Disinformation did not respond to NewsGuard’s requests for an interview.
Edited by Dina Contini and Eric Effron
Editor’s Note: This story was updated on Dec. 23, 2025, to add a comment from John Mark Dougan.
The Guardian
Dan Milmo, global technology editor
Wed 3 Dec 2025 07.00 CET
Researchers uncovered 354 AI-focused accounts that had accumulated 4.5bn views in a month
Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, often at the same time of day, an indication of automation, and most of the accounts were launched at the beginning of the year.
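The automation signals the researchers describe, unusually high posting volume and posts clustered at the same time of day, can be expressed as a simple heuristic. The sketch below is illustrative only: the thresholds and the function itself are assumptions for demonstration, not AI Forensics’ actual methodology.

```python
from datetime import datetime
from statistics import pstdev

def automation_signals(timestamps, max_daily=70, regularity_minutes=15):
    """Flag naive automation signals from a list of post datetimes:
    very high daily volume, or posts clustered at the same time of day
    (low spread of minutes-since-midnight). Thresholds are illustrative."""
    by_day = {}
    for ts in timestamps:
        by_day.setdefault(ts.date(), []).append(ts)
    # High volume: any single day at or above the posting threshold.
    high_volume = any(len(v) >= max_daily for v in by_day.values())
    # Regular timing: minutes-since-midnight barely varies across posts.
    minutes = [ts.hour * 60 + ts.minute for ts in timestamps]
    regular = len(minutes) > 1 and pstdev(minutes) <= regularity_minutes
    return {"high_volume": high_volume, "regular_timing": regular}

# One post per day, always at 09:00: low volume but suspiciously regular.
posts = [datetime(2025, 1, d, 9, 0) for d in range(1, 8)]
print(automation_signals(posts))  # → {'high_volume': False, 'regular_timing': True}
```

A real detection pipeline would combine many more features (content similarity, account age, network structure); this only captures the two signals mentioned in the report.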
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. More than 100m pieces of content are uploaded to the platform every day, indicating that labelled AI material is a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
Of the accounts that posted content most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found that the accounts left half of the content they posted unlabelled, and less than 2% of it carried TikTok’s label for AI content, which the nonprofit warned could increase the material’s deceptive potential. Researchers added that the accounts sometimes escaped TikTok’s moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts revealed in the study have subsequently been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments with anti-immigrant narratives, as well as material sexualising female bodies, including girls who appeared to be underage. The female-body category accounted for half of the top 10 most active accounts, AI Forensics said, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts have been taken down by TikTok after they were referred to the platform by the Guardian.
TikTok said the report’s claims were “unsubstantiated” and that the researchers had singled it out for an issue affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest-growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
The most popular accounts highlighted by AI Forensics in terms of views had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter up people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.
NewsGuard's Reality Check
newsguardrealitycheck.com
Nov 17, 2025
What happened: In an effort to discredit the Ukrainian Armed Forces and undermine their morale at a critical juncture of the Russia-Ukraine war, Kremlin propagandists are weaponizing OpenAI’s new Sora 2 text-to-video tool to create fake, viral videos showing Ukrainian soldiers surrendering in tears.
Context: In a recent report, NewsGuard found that OpenAI’s new video generator tool Sora 2, which creates 10-second videos based on the user’s written prompt, advanced provably false claims on topics in the news 80 percent of the time when prompted to do so, demonstrating how the new and powerful technology could be easily weaponized by foreign malign actors.
A closer look: Indeed, so far in November 2025, NewsGuard has identified seven AI-generated videos presented as footage from the front lines in Pokrovsk, a key eastern Ukrainian city that experts expect to soon fall to Russia.
The videos, which received millions of views on X, TikTok, Facebook, and Telegram, showed scenes of Ukrainian soldiers surrendering en masse and begging Russia for forgiveness.
Here’s one video supposedly showing Ukrainian soldiers surrendering:
And a video purporting to show Ukrainian soldiers begging for forgiveness:
Actually: There is no evidence of mass Ukrainian surrenders in or around Pokrovsk.
The videos contain multiple inconsistencies, including gear and uniforms that do not match those used by the Ukrainian Armed Forces, unnatural faces, and mispronunciations of the names of Ukrainian cities. NewsGuard tested the videos with AI detector Hive, which found with 100 percent certainty that all seven were created with Sora 2. The videos either had the small Sora watermark or a blurry patch in the location where the watermark had been removed. Users shared both types as if they were authentic.
The AI-generated videos were shared by anonymous accounts that NewsGuard has found to regularly spread pro-Kremlin propaganda.
Ukraine’s Center for Countering Disinformation said in a Telegram post that the accounts “show signs of a coordinated network specifically created to promote Kremlin narratives among foreign audiences.”
In response to NewsGuard’s Nov. 12, 2025, emailed request for comment on the videos, OpenAI spokesperson Oscar Haines said “we’ll investigate” and asked for an extension to Nov. 13, 2025, to provide comment, which NewsGuard provided. However, Haines did not respond to follow-up inquiries.
This is not the first time Kremlin propagandists have weaponized OpenAI’s tools for propaganda. In April 2025, NewsGuard found that pro-Kremlin sources used OpenAI’s image generator to create images of action figure dolls depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger.
US-designated terrorist organization ELN oversees a vast digital operation that promotes pro-Kremlin and anti-US content.
The National Liberation Army (ELN), a Colombian armed group that also holds influence in Venezuela, has built a digital strategy that involves branding themselves as media outlets to build credibility, overseeing a diffuse cross-platform operation, and using these wide-ranging digital assets to amplify Russian, Iranian, Venezuelan, and Cuban narratives that attack the interests of the United States, the European Union (EU), and their allies.
In the 1960s, the ELN emerged as a Colombian nationalist armed movement ideologically rooted in Marxism-Leninism, liberation theology, and the Cuban revolution. With an estimated 2,500 to 6,000 fighters, the ELN is Colombia’s oldest and largest active guerrilla group, and its operations extend into Venezuela. The ELN has maintained a strategic online presence for over a decade to advance its propaganda and maintain operational legitimacy.
The organization, which has previously engaged in peace talks with the Colombian state, has carried out criminal activities in Colombia and Venezuela, including killings, kidnappings, extortion, and the recruitment of minors. After successive military and financial crises in the 1990s, the armed group abandoned its historical reluctance to participate in drug trafficking. This diversification into illicit funding means the group’s armed clashes now target rival criminal groups in addition to its primary ideological enemy, the state forces.
In the north-eastern Catatumbo area, considered one of the enclaves of international cocaine trafficking, the group has been involved in one of the bloodiest confrontations seen in Colombia in 2025. Since January 15, the violence has left 126 people dead and at least 66,000 displaced, and has further strained the group’s engagement with the latest round of peace talks initiated by the current Colombian government. In that region, the ELN has battled the state and other armed actors, such as paramilitaries and rival guerrilla groups, for control of the area bordering Venezuela, in an effort to connect its other territories of influence in Colombia, from the north to the western regions of Chocó and Antioquia.
The US Department of State reaffirmed the ELN’s designation as a terrorist organization in its March 5, 2025, update of the Foreign Terrorist Organizations (FTOs) list. This classification theoretically prevents the group from operating on major social media platforms, as US social media platforms, such as Meta, YouTube, and X, maintain policies prohibiting terrorist organizations from using their services. However, the DFRLab found that the group’s substantial digital footprint spans over one hundred entities across websites, social media, closed messaging apps, and podcast services.
To attract users across the Global Majority, many technology companies have introduced “lite” versions of their products: applications designed for lower-bandwidth contexts. TikTok is no exception, with TikTok Lite estimated to have more than 1 billion users.
Mozilla and AI Forensics research reveals that TikTok Lite doesn’t just reduce required bandwidth, however. In our opinion, it also reduces trust and safety. In comparing TikTok Lite with the classic TikTok app, we found several discrepancies between trust and safety features that could have potentially dangerous consequences in the context of elections and public health.
Our research revealed that TikTok Lite lacks basic protections afforded to other TikTok users, including content labels for graphic content, AI-generated media, misinformation, and dangerous-acts videos. TikTok Lite users also encounter arbitrarily shortened video descriptions that can easily eliminate crucial context.
Further, TikTok Lite users have fewer proactive controls at their disposal. Unlike traditional TikTok users, they cannot filter offensive keywords or implement screen management practices.
Our findings are concerning and reinforce a pattern of double standards. Technology platforms have a history of neglecting users outside the US and EU, where there is markedly less potential for constraining regulation and enforcement. As part of our research, we discuss the implications of this pattern and offer concrete recommendations for how TikTok Lite can improve.
This report delves into the Doppelgänger information operations conducted by Russian actors, focusing on their activities from early June to late July 2024. Our investigation was motivated by the unexpected snap general election in France, prompting a closer look at Doppelgänger activities during this period.
While recent activities have been described elsewhere[1][2], this first dive into the information-operations topic offers a complementary threat-intelligence analyst’s perspective and brings additional knowledge of the associated infrastructure, tactics, and motivations in Europe and the United States.
How Doppelgänger, one of the biggest Russian disinformation campaigns, is using EU companies to keep spreading its propaganda despite sanctions.
The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk.
Experts are finding thousands of examples of AI-created content every week that could allow terrorist groups and other violent extremists to bypass automated detection systems.
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel (a dystopian vision of America under a second term for President Joe Biden), but the deliberate emphasis on the technology used to create it stands out: it’s a “Daisy” moment for the 2020s.