NewsGuard's Reality Check
newsguardrealitycheck.com
Nov 17, 2025
What happened: In an effort to discredit the Ukrainian Armed Forces and undermine their morale at a critical juncture of the Russia-Ukraine war, Kremlin propagandists are weaponizing OpenAI’s new Sora 2 text-to-video tool to create fake, viral videos showing Ukrainian soldiers surrendering in tears.
Context: In a recent report, NewsGuard found that OpenAI’s new video generator Sora 2, which creates 10-second videos from a user’s written prompt, advanced provably false claims on topics in the news 80 percent of the time when prompted to do so. The finding demonstrates how easily the powerful new technology can be weaponized by foreign malign actors.
A closer look: Indeed, so far in November 2025, NewsGuard has identified seven AI-generated videos presented as footage from the front lines in Pokrovsk, a key eastern Ukrainian city that experts expect to soon fall to Russia.
The videos, which received millions of views on X, TikTok, Facebook, and Telegram, showed scenes of Ukrainian soldiers surrendering en masse and begging Russia for forgiveness.
Here’s one video supposedly showing Ukrainian soldiers surrendering:
And a video purporting to show Ukrainian soldiers begging for forgiveness:
Actually: There is no evidence of mass Ukrainian surrenders in or around Pokrovsk.
The videos contain multiple inconsistencies, including gear and uniforms that do not match those used by the Ukrainian Armed Forces, unnatural faces, and mispronunciations of the names of Ukrainian cities.

NewsGuard tested the videos with the AI detector Hive, which found with 100 percent certainty that all seven were created with Sora 2. The videos either carried the small Sora watermark or showed a blurry patch where the watermark had been removed. Users shared both types as if they were authentic.
The AI-generated videos were shared by anonymous accounts that NewsGuard has found to regularly spread pro-Kremlin propaganda.
Ukraine’s Center for Countering Disinformation said in a Telegram post that the accounts “show signs of a coordinated network specifically created to promote Kremlin narratives among foreign audiences.”
In response to NewsGuard’s Nov. 12, 2025, emailed request for comment on the videos, OpenAI spokesperson Oscar Haines said “we’ll investigate” and asked for an extension to Nov. 13, 2025, to provide comment, which NewsGuard provided. However, Haines did not respond to follow-up inquiries.
This is not the first time Kremlin propagandists have weaponized OpenAI’s tools for propaganda. In April 2025, NewsGuard found that pro-Kremlin sources used OpenAI’s image generator to create images of action figure dolls depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger.
The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice.
The Danish government said on Thursday it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe.
Having secured broad cross-party agreement, the department of culture plans to submit a proposal to amend the current law for consultation before the summer recess and then submit the amendment in the autumn.
It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.
The Danish culture minister, Jakob Engel-Schmidt, said he hoped the bill before parliament would send an “unequivocal message” that everybody had the right to the way they looked and sounded.
He told the Guardian: “In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI.”
He added: “Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I’m not willing to accept that.”
The move, which is believed to have the backing of nine in 10 MPs, comes amid rapidly developing AI technology that has made it easier than ever to create a convincing fake image, video or sound to mimic the features of another person.
The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent.