cyble.com
December 8, 2025
React2Shell (CVE-2025-55182) was exploited within minutes by China-nexus groups, exposing critical weaknesses in React Server Components.
The vulnerability disclosure cycle has entered a new era, one where the gap between publication and weaponization is measured in minutes, not days. AWS and other researchers have confirmed that China-nexus threat actors began actively exploiting a critical React Server Components flaw, React2Shell, within hours of its public release.
The vulnerability, tracked as CVE-2025-55182, impacts React Server Components across React 19.x and Next.js 15.x/16.x deployments using the App Router and carries a CVSS 10.0 severity rating, enabling unauthenticated remote code execution (RCE).
CISA immediately added the flaw to its Known Exploited Vulnerabilities catalog, stating:
“CISA has added one new vulnerability to its Known Exploited Vulnerabilities (KEV) Catalog, based on evidence of active exploitation.”
The Researcher’s PoCs and the Mechanism of Exploitation
Lachlan Davidson, who is credited with discovering the flaw, published the original PoCs on GitHub, explaining:
“As public PoCs are circulating and Google’s Scanner uses a variation of my original submitted PoC, it’s finally a responsible time to share my original PoCs for React2Shell.”
Davidson released three PoCs, 00-very-first-rce-poc, 01-submitted-poc.js, and 02-meow-rce-poc, and summarized the attack chain:
“$@x gives you access to a Chunk”
“We plant its then on our own object”
“The JS runtime automatically unravels nested promises”
“We now re-enter the parser, but with control of a malicious fake Chunk object”
“Planting things on _response lets us access a lot of gadgets”
“RCE”
He also noted that “the publicly recreated PoC… did otherwise use the same _formData gadget that mine did”, though the then-based chaining primitive from his PoCs was not universally adopted.
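The pivot in that chain is ordinary JavaScript behavior: any object exposing a then method (a “thenable”) is treated like a promise and unwrapped automatically by the runtime. A minimal, harmless TypeScript sketch of that behavior, deliberately not exploit code:

// Not exploit code - only the language behavior the attack chain leans on.
// Any object with a then() method is unwrapped automatically when awaited, so a
// deserializer that awaits attacker-shaped objects ends up invoking an
// attacker-chosen then() implementation.
const fakeChunk = {
  then(resolve: (value: unknown) => void) {
    console.log("runtime invoked our then() automatically");
    resolve("attacker-controlled value");
  },
};

async function demo(): Promise<void> {
  const value = await fakeChunk; // triggers fakeChunk.then() under the hood
  console.log(value); // "attacker-controlled value"
}

demo();

In React2Shell, a crafted payload smuggles such a thenable into the Flight parser’s Chunk handling, which is why control of a fake Chunk object translates into re-entering the parser with attacker-owned state.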
Rapid Weaponization by China-Nexus Groups
AWS detected exploitation beginning within hours of public disclosure on December 3, based on telemetry from its MadPot honeypot infrastructure. The actors included:
Earth Lamia, known for targeting financial, logistics, and government sectors across Latin America, MENA, and Southeast Asia.
Jackpot Panda, primarily focused on East and Southeast Asian organizations aligned with domestic security interests.
AWS stated, “China continues to be the most prolific source of state-sponsored cyber threat activity, with threat actors routinely operationalizing public exploits within hours or days of disclosure.”
Attackers overwhelmingly prioritized speed over precision, firing flawed and incomplete public PoCs at large swaths of the internet in a high-volume scanning wave. Many PoCs made unrealistic assumptions, such as exposed fs, vm, or child_process modules that never appear in real deployments.
Yet this volume-based strategy still identifies edge-case vulnerable configurations.
Technical Analysis: React2Shell in the RSC Flight Protocol
CRIL (Cyble Research and Intelligence Labs) found that at its core, CVE-2025-55182 (React2Shell) is an unsafe deserialization flaw in the React Server Components Flight protocol. It affects:
react-server-dom-webpack
react-server-dom-parcel
react-server-dom-turbopack
These packages are vulnerable across React versions 19.0.0–19.2.0, with fixes shipped in 19.0.1, 19.1.2, and 19.2.1.
Next.js is additionally vulnerable under CVE-2025-66478, affecting all versions from 14.3.0-canary.77 onward, all unpatched 15.x builds, and all 16.x releases before 16.0.7.
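As a quick triage aid, a script can compare the declared react version in a package.json against these ranges. This is an illustrative sketch only: the range expression below is our reading of the summary above and should be verified against the official advisories, the version actually resolved in your lockfile matters more than the declared range, and the Next.js ranges under CVE-2025-66478 are not covered here.

// Illustrative sketch only; verify the ranges against the official advisories.
import { readFileSync } from "node:fs";
import semver from "semver"; // npm install semver

// Vulnerable React ranges as summarized above: 19.0.0-19.2.0, fixed in 19.0.1, 19.1.2, 19.2.1.
const VULNERABLE = ">=19.0.0 <19.0.1 || >=19.1.0 <19.1.2 || >=19.2.0 <19.2.1";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const spec: string | undefined = { ...pkg.dependencies, ...pkg.devDependencies }["react"];

if (spec) {
  // minVersion() resolves only the lower bound of the declared range; check the
  // lockfile or node_modules for the version that is actually installed.
  const version = semver.minVersion(spec)?.version;
  if (version && semver.satisfies(version, VULNERABLE)) {
    console.log(`react@${version} falls inside a vulnerable range - upgrade and redeploy`);
  }
}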
Attack telemetry showed:
Automated scanners with user-agent randomization
Parallel exploitation of CVE-2025-1338
Immediate PoC adoption regardless of accuracy
Manual exploitation attempts, including whoami, id, and /etc/passwd reads
File write attempts such as /tmp/pwned.txt
A concentrated cluster originating from 183[.]6.80.214 executed 116 requests over 52 minutes, demonstrating active operator involvement.
Cloudflare’s Emergency Downtime While Mitigating React2Shell
The severity of React2Shell (CVE-2025-55182) was spotlighted when Cloudflare intentionally took down part of its own network to apply emergency defenses. The outage affected 28% of Cloudflare-served HTTP traffic early Friday.
Cloudflare CTO Dane Knecht clarified that the disruption “was not caused, directly or indirectly, by a cyberattack… Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability disclosed this week in React Server Components.”
This incident unfolded as researchers observed attackers hammering the vulnerability, alongside waves of legitimate and fraudulent proofs of concept circulating online.
Global Warnings Ring In
The Australian Cyber Security Centre (ACSC) issued a public notice, stating, “This alert is relevant to all Australian businesses and organizations… ASD’s ACSC is aware of a critical vulnerability in React Server Components… Organizations should review their networks for vulnerable instances of these packages and upgrade to fixed versions.”
Organizations must assume that scanning for React2Shell is continuous and widespread. The ACSC outlined immediate steps for mitigation:
Update all React/Next.js deployments: Verify versions against vulnerable ranges and upgrade to patched releases.
Enable AWS WAF interim protection rules: These block known exploit sequences during patching windows.
Review logs for exploitation indicators: Look for malformed RSC payloads, next-action or rsc-actionid headers, and repeated sequential failures; a log-scanning sketch follows this list.
Inspect backend systems for post-exploitation behavior: Unexpected execution, unauthorized file writes, or suspicious commands.
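A minimal sketch of that log review, illustrative only: it assumes JSON-lines access logs with headers, status, remoteAddr, and path fields, so adjust the field names to whatever your own logging pipeline emits.

// Illustrative only - the log schema (headers, status, remoteAddr, path) is an
// assumption, not a standard format.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const SUSPICIOUS_HEADERS = ["next-action", "rsc-actionid"];

async function scan(logPath: string): Promise<void> {
  const rl = createInterface({ input: createReadStream(logPath) });
  for await (const line of rl) {
    let entry: any;
    try { entry = JSON.parse(line); } catch { continue; } // skip non-JSON lines
    const headerNames = Object.keys(entry.headers ?? {}).map((h) => h.toLowerCase());
    if (SUSPICIOUS_HEADERS.some((h) => headerNames.includes(h))) {
      console.log("RSC action header observed:", entry.remoteAddr, entry.path, entry.status);
    }
  }
}

scan(process.argv[2] ?? "access.log");

Repeated 4xx/5xx responses from the same source alongside these headers are worth a closer look.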
Conclusion
The exploitation of React2Shell (CVE-2025-55182) shows how quickly high-severity vulnerabilities in critical and widely adopted components can be weaponized. China-nexus groups and opportunistic actors began targeting the flaw within minutes of disclosure, using shared infrastructure and public PoCs, accurate or not, to launch high-volume attacks. Organizations using React or Next.js App Router must patch immediately and monitor for iterative, operator-driven activity.
Given this tempo, organizations need intelligence and automation that operate in real time. Cyble, ranked #1 globally in Cyber Threat Intelligence Technologies by Gartner Peer Insights, provides AI-native security capabilities through platforms such as Cyble Vision and Blaze AI. These systems identify threats early, correlate IOCs across environments, and automate response actions.
Schedule a personalized demo to evaluate how AI-native threat intelligence can strengthen your security posture against vulnerabilities like React2Shell.
Indicators of Compromise
206[.]237.3.150
45[.]77.33.136
143[.]198.92.82
183[.]6.80.214
MITRE ATT&CK Techniques
Tactic Technique ID Technique Name
Initial Access T1190 Exploit Public-Facing Application
Privilege Escalation T1068 Exploitation for Privilege Escalation
bleepingcomputer.com
By Lawrence Abrams
December 6, 2025
Over 77,000 Internet-exposed IP addresses are vulnerable to the critical React2Shell remote code execution flaw (CVE-2025-55182), with researchers now confirming that attackers have already compromised over 30 organizations across multiple sectors.
React2Shell is an unauthenticated remote code execution vulnerability that can be exploited via a single HTTP request and affects all frameworks that implement React Server Components, including Next.js, which uses the same deserialization logic.
React disclosed the vulnerability on December 3, explaining that unsafe deserialization of client-controlled data inside React Server Components enables attackers to trigger remote, unauthenticated execution of arbitrary commands.
Developers are required to update React to the latest version, rebuild their applications, and then redeploy to fix the vulnerability.
On December 4, security researcher Maple3142 published a working proof-of-concept demonstrating remote command execution against unpatched servers. Soon after, scanning for the flaw accelerated as attackers and researchers began using the public exploit with automated tools.
Over 77,000 vulnerable IP addresses
Shadowserver Internet watchdog group now reports that it has detected 77,664 IP addresses vulnerable to the React2Shell flaw, with approximately 23,700 in the United States.
The researchers determined that IP addresses were vulnerable using a detection technique developed by Searchlight Cyber/Assetnote, where an HTTP request was sent to servers to exploit the flaw, and a specific response was checked to confirm whether a device was vulnerable.
GreyNoise also recorded 181 distinct IP addresses attempting to exploit the flaw over the past 24 hours, with most of the traffic appearing automated. The researchers say the scans are primarily originating from the Netherlands, China, the United States, Hong Kong, and a small number of other countries.
Palo Alto Networks reports that more than 30 organizations have already been compromised through the React2Shell flaw, with attackers exploiting the vulnerability to run commands, conduct reconnaissance, and attempt to steal AWS configuration and credential files.
These compromises include intrusions linked to known state-associated Chinese threat actors.
Widespread exploitation of React2Shell
Since its disclosure, researchers and threat intelligence companies have observed widespread exploitation of the CVE-2025-55182 flaw.
GreyNoise reports that attackers frequently begin with PowerShell commands that perform a basic math function to confirm the device is vulnerable to the remote code execution flaw.
These tests return predictable results while leaving minimal signs of exploitation:
powershell -c "4013841979"
powershell -c "4032043488"
Once remote code execution was confirmed, attackers were seen executing base64-encoded PowerShell commands that download additional scripts directly into memory.
powershell -enc <base64>
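For triage, an -enc payload pulled from logs can be decoded offline; PowerShell’s -EncodedCommand argument is Base64 of UTF-16LE text, so a small helper (a sketch, not a detection tool) is enough:

// Decode a PowerShell -EncodedCommand payload recovered from logs for offline review.
// -enc payloads are Base64-encoded UTF-16LE text.
import { Buffer } from "node:buffer";

function decodePowerShellEnc(b64: string): string {
  return Buffer.from(b64, "base64").toString("utf16le");
}

console.log(decodePowerShellEnc(process.argv[2]));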
One observed command executes a second-stage PowerShell script from the external site (23[.]235[.]188[.]3), which is used to disable AMSI to bypass endpoint security and deploy additional payloads.
According to VirusTotal, the PowerShell script observed by GreyNoise installs a Cobalt Strike beacon on the targeted device, giving threat actors a foothold on the network.
Amazon AWS threat intelligence teams also saw rapid exploitation hours after the disclosure of the React CVE-2025-55182 flaw, with infrastructure associated with China-linked APT hacking groups known as Earth Lamia and Jackpot Panda.
In this exploitation, the threat actors perform reconnaissance on vulnerable servers by using commands such as whoami and id, attempting to write files, and reading /etc/passwd.
Palo Alto Networks also observed similar exploitation, attributing some of it to UNC5174, a Chinese state-sponsored threat actor believed to be tied to the Chinese Ministry of State Security.
"Unit 42 observed threat activity we assess with high confidence is consistent with CL-STA-1015 (aka UNC5174), a group suspected to be an initial access broker with ties to the Chinese Ministry of State Security," Justin Moore, Senior Manager at Palo Alto Networks Unit 42, told BleepingComputer via email.
"In this activity, we observed the deployment of Snowlight and Vshell malware, both highly consistent with Unit 42 knowledge of CL-STA-1015 (also known as UNC5174)."
The malware deployed in these attacks includes:
Snowlight: A malware dropper that allows remote attackers to drop additional payloads on breached devices.
Vshell: A backdoor commonly used by Chinese hacking groups for remote access, post-exploitation activity, and to move laterally through a compromised network.
The rush to patch
Due to the severity of the React flaw, companies worldwide have rushed to install the patch and apply mitigations.
Yesterday, Cloudflare rolled out emergency detections and mitigations for the React flaw in its Web Application Firewall (WAF) due to its widespread exploitation and severity.
However, the update inadvertently caused an outage affecting numerous websites before the rules were corrected.
CISA has also added CVE-2025-55182 to the Known Exploited Vulnerabilities (KEV) catalog, requiring federal agencies to apply patches by December 26, 2025, under Binding Operational Directive 22-01.
Organizations using React Server Components or frameworks built on top of them are advised to apply updates immediately, rebuild and redeploy their applications, and review logs for signs of PowerShell or shell command execution.
| The Guardian - theguardian.com
Tess McClure
Tue 2 Dec 2025 03.02 CET
For days before the explosions began, the business park had been emptying out. When the bombs went off, they took down empty office blocks and demolished echoing, multi-cuisine food halls. Dynamite toppled a four-storey hospital, silent karaoke complexes, deserted gyms and dorm rooms.
So came the end of KK Park, one of south-east Asia’s most infamous “scam centres”, press releases from Myanmar’s junta declared. The facility had held tens of thousands of people, forced to relentlessly defraud people around the world. Now, it was being levelled piece by piece.
But the park’s operators were long gone: apparently tipped off that a crackdown was coming, they were busily setting up shop elsewhere. More than 1,000 labourers had managed to flee across the border, and some 2,000 others had been detained. But up to 20,000 labourers, likely trafficked and brutalised, had disappeared. Away from the junta’s cameras, scam centres like KK park have continued to thrive.
So monolithic has the multi-billion dollar global scam industry become that experts say we are entering the era of the “scam state”. Like the narco-state, the term refers to countries where an illicit industry has dug its tentacles deep into legitimate institutions, reshaping the economy, corrupting governments and establishing state reliance on an illegal network.
The raids on KK Park were the latest in a series of highly publicised crackdowns on scam centres across south-east Asia. But regional analysts say these are largely performative or target middling players, amounting to “political theatre” by officials who are under international pressure to crack down on them but have little interest in eliminating a wildly profitable sector.
“It’s a way of playing Whack-a-Mole, where you don’t want to hit a mole,” says Jacob Sims, visiting fellow at Harvard University’s Asia Centre and expert on transnational and cybercrime in the Mekong.
In the past five years scamming, says Sims, has mutated from “small online fraud rings into an industrial-scale political economy”.
“In terms of gross GDP, it’s the dominant economic engine for the entire Mekong sub-region,” he says, “And that means that it’s one of the dominant – if not the dominant – political engine.”
Government spokespeople in Myanmar, Cambodia and Laos did not respond to questions from the Guardian, but Myanmar’s military has previously said it is “working to completely eradicate scam activities from their roots”. The Cambodian government has also described allegations it is home to one of “the world’s largest cybercrime networks supported by the powerful” as “baseless” and “irresponsible”.
Morphing in less than a decade from a world of misspelled emails and implausible Nigerian princes, the industry has become a vast, sophisticated system, raking in tens of billions from victims around the world.
At its heart are “pig-butchering” scams – where a relationship is cultivated online before the scammer pushes their victim to part with their money, often via an “investment” in cryptocurrency. Scammers have harnessed increasingly sophisticated technology to fool targets: using generative AI to translate and drive conversations, deepfake technology to conduct video calls, and mirrored websites to mimic real investment exchanges. One survey found victims were conned for an average of $155,000 (£117,400) each. Most reported losing more than half their net worth.
Those huge potential profits have driven the industrialisation of the scam industry. Estimates of the industry’s global size now range from $70bn into the hundreds of billions – a scale that would put it on a par with the global illicit drug trade. The centres are typically run by transnational criminal networks, often originating from China, but their ground zero has been south-east Asia.
By late 2024, cyber scamming operations in Mekong countries were generating an estimated $44bn (£33.4bn) a year, equivalent to about 40% of the combined formal economy. That figure is considered conservative, and on the rise. “This is a massive growth area,” says Jason Tower, from the Global Initiative against Transnational Organised Crime. “This has become a global illicit market only since 2021 – and we’re now talking about a $70bn-plus-per-year illicit market. If you go back to 2020, it was nowhere near that size.”
In Cambodia, one company alleged by the US government to run scam compounds across the country had $15bn of cryptocurrency targeted in a Department of Justice (DOJ) seizure last month – funds equal to almost half of Cambodia’s economy.
With such huge potential profits, infrastructure has rapidly been built to facilitate it. The hubs thrive in conflict zones and along lawless and poorly regulated border areas. In Laos, officials have told local media around 400 are operating in the Golden Triangle special economic zone. Cyber Scam Monitor – a collective that monitors scamming Telegram channels, police reports, media and satellite data to identify scam compounds – has located 253 suspected sites across Cambodia. Many are enormous, and operating in public view.
The scale of the compounds is itself an indication of how much the states hosting them have been compromised, experts claim.
“These are massive pieces of infrastructure, set up very publicly. You can go to borders and observe them. You can even walk into some of them,” says Tower. “The fact this is happening in a very public way shows just the extreme level of impunity – and the extent to which states are not only tolerating this, but actually, these criminal actors are becoming state embedded.”
Thailand’s deputy finance minister resigned this October following allegations of links to scam operations in Cambodia, which he denies. Chen Zhi, who was recently hit by joint UK and US sanctions for allegedly masterminding the Prince Group scam network, was an adviser to Cambodia’s prime minister. The Prince Group said it “categorically rejects” claims the company or its chairman have engaged in any unlawful activity. In Myanmar, scam centres have become a key financial flow for armed groups. In the Philippines, ex-mayor Alice Guo, who ran a massive scam centre while in office, has just been sentenced to life in prison.
Across south-east Asia, scam masterminds are “operating at a very high level: they’re obtaining diplomatic credentials, they’re becoming advisers … It is massive in terms of the level of state involvement and co-optation,” Tower says.
“It’s quite unprecedented that you have an illicit market of this nature, that is causing global harm, where there’s blatant impunity, and it’s happening in this public way.”
wired.com
Andy Greenberg
Dec 4, 2025 12:00 PM
Privacy stalwart Nicholas Merrill spent a decade fighting an FBI surveillance order. Now he wants to sell you phone service—without knowing almost anything about you.
Nicholas Merrill has spent his career fighting government surveillance. But he would really rather you didn’t call what he’s selling now a “burner phone.”
Yes, he dreams of a future where anyone in the US can get a working smartphone—complete with cellular coverage and data—without revealing their identity, even to the phone company. But to call such anonymous phones “burners” suggests that they’re for something illegal, shady, or at least subversive. The term calls to mind drug dealers or deep-throat confidential sources in parking garages.
With his new startup, Merrill says he instead wants to offer cellular service for your existing phone that makes near-total mobile privacy the permanent, boring default of daily life in the US. “We're not looking to cater to people doing bad things,” says Merrill. “We're trying to help people feel more comfortable living their normal lives, where they're not doing anything wrong, and not feel watched and exploited by giant surveillance and data mining operations. I think it’s not controversial to say the vast majority of people want that.”
That’s the thinking behind Phreeli, the phone carrier startup Merrill launched today, designed to be the most privacy-focused cellular provider available to Americans. Phreeli, as in, “speak freely,” aims to give its user a different sort of privacy from the kind that can be had with end-to-end encrypted texting and calling tools like Signal or WhatsApp. Those apps hide the content of conversations, or even, in Signal’s case, metadata like the identities of who is talking to whom. Phreeli instead wants to offer actual anonymity. It can’t help government agencies or data brokers obtain users’ identifying information because it has almost none to share. The only piece of information the company records about its users when they sign up for a Phreeli phone number is, in fact, a mere ZIP code. That’s the minimum personal data Merrill has determined his company is legally required to keep about its customers for tax purposes.
By asking users for almost no identifiable information, Merrill wants to protect them from one of the most intractable privacy problems in modern technology: Despite whatever surveillance-resistant communications apps you might use, phone carriers will always know which of their customers’ phones are connecting to which cell towers and when. Carriers have frequently handed that information over to data brokers willing to pay for it, or any FBI or ICE agent that demands it with a court order.
Merrill has some firsthand experience with those demands. Starting in 2004, he fought a landmark, decade-plus legal battle against the FBI and the Department of Justice. As the owner of an internet service provider in the post-9/11 era, Merrill had received a secret order from the bureau to hand over data on a particular user—and he refused. After that, he spent another 15 years building and managing the Calyx Institute, a nonprofit that offers privacy tools like a snooping-resistant version of Android and a free VPN that collects no logs of its users’ activities. “Nick is somebody who is extremely principled and willing to take a stand for his principles,” says Cindy Cohn, who as executive director of the Electronic Frontier Foundation has led the group’s own decades-long fight against government surveillance. “He's careful and thoughtful, but also, at a certain level, kind of fearless.”
More recently, Merrill began to realize he had a chance to achieve a win against surveillance at a more fundamental level: by becoming the phone company. “I started to realize that if I controlled the mobile provider, there would be even more opportunities to create privacy for people,” Merrill says. “If we were able to set up our own network of cell towers globally, we can set the privacy policies of what those towers see and collect.”
Building or buying cell towers across the US for billions of dollars, of course, was not within the budget of Merrill’s dozen-person startup. So he’s created the next best thing: a so-called mobile virtual network operator, or MVNO, a kind of virtual phone carrier that pays one of the big, established ones—in Phreeli’s case, T-Mobile—to use its infrastructure.
The result is something like a cellular prophylactic. The towers are T-Mobile’s, but the contracts with users—and the decisions about what private data to require from them—are Phreeli’s. “You can't control the towers. But what can you do?” he says. “You can separate the personally identifiable information of a person from their activities on the phone system.”
Signing up a customer for phone service without knowing their name is, surprisingly, legal in all 50 states, Merrill says. Anonymously accepting money from users—with payment options other than envelopes of cash—presents more technical challenges. To that end, Phreeli has implemented a new encryption system it calls Double-Blind Armadillo, based on cutting-edge cryptographic protocols known as zero-knowledge proofs. Through a kind of mathematical sleight of hand, those crypto functions are capable of tasks like confirming that a certain phone has had its monthly service paid for, but without keeping any record that links a specific credit card number to that phone. Phreeli users can also pay their bills (or rather, prepay them, since Phreeli has no way to track down anonymous users who owe them money) with tough-to-trace cryptocurrency like Zcash or Monero.
| The Jerusalem Post
jpost.com
By JERUSALEM POST STAFF
NOVEMBER 26, 2025 21:02
A new directive would restrict IDF-issued devices to iPhones for lieutenant colonels, reducing the risk of intrusions for senior officers.
The Israel Defense Forces will tighten rules on mobile devices for senior officers and prohibit Android phones on IDF-issued lines, Army Radio reported on Wednesday.
Under the expected order, commanders from the rank of lieutenant colonel and above will be permitted to use only Apple iPhones for official communications. The step is aimed at reducing the risk of intrusions on senior officers’ handsets, according to the report.
Under the plan, the IDF would standardize operating systems at senior echelons to simplify security controls and updates. The IDF has not publicly detailed timelines or exceptions, and there was no immediate comment on whether the policy will cover personal devices used for work.
Why the IDF is acting now
Israeli security officials have long warned that hostile actors use social platforms and messaging apps to target soldiers’ phones and track troop movements. The IDF previously cautioned that Hamas used WhatsApp to solicit information from troops on the Gaza border, urging soldiers to report suspicious messages to commanders.
Military intelligence has also exposed repeated “honeypot” schemes in which operatives posed as women online to lure personnel into installing malware, most notably in Operation HeartBreaker. Analysts noted that such campaigns sought access to contacts, photos, and real-time location data on soldiers’ devices.
IDF staged scenarios mimicking Hezbollah-linked 'honeypots'
The new step follows earlier efforts to harden mobile use across the force, including training and internal drills designed to raise officers’ awareness of social-engineering tactics. In recent years, the IDF even staged scenarios mimicking Hezbollah-linked “honeypots” to stress-test units’ digital discipline.
Army Radio said the directive is expected to be issued in the coming days, with implementation applying to officers from lieutenant colonel up to the general staff. The reported move aligns with a broader push to curb inadvertent exposure from social media and ubiquitous messaging apps that can reveal patterns of life.
In 2019, the IDF warned troops that Hamas was using WhatsApp to gather data on IDF movement near Gaza and instructed soldiers to flag suspicious contacts to their chains of command.
blog.pypi.org
Mike Fiedler
PyPI Admin, Safety & Security Engineer (PSF)
Shai-Hulud is a great worm, not yet a snake. Attack on npm ecosystem may have implications for PyPI.
PyPI and Shai-Hulud: Staying Secure Amid Emerging Threats
An attack on the npm ecosystem continues to evolve, exploiting compromised accounts to publish malicious packages. This campaign, dubbed Shai-Hulud, has targeted large volumes of packages in the JavaScript ecosystem, exfiltrating credentials to further propagate itself.
PyPI has not been exploited; however, some PyPI credentials were found exposed in compromised repositories. We've revoked these tokens as a precaution, and there's no evidence they have been used maliciously. This post raises awareness about the attack and encourages proactive steps to secure your accounts, especially if you're using build platforms to publish packages to PyPI.
How does this relate to PyPI?
This week, a security researcher disclosed long-lived PyPI credentials exposed as part of the Shai-Hulud campaign. The credentials were found in GitHub repositories (stored as repository secrets) and were still valid. We saw a similar attack involving insecure workflow settings in the Ultralytics compromise in 2024.
While the campaign primarily targets npm, some projects use monorepo setups, publishing both JavaScript packages to npmjs.com and Python packages to PyPI from the same repository. When attackers compromise these repositories, they can extract credentials for multiple platforms.
We investigated the reported credentials and found they were associated with accounts that hadn't published recently. We've revoked these credentials and reached out to affected users to advise them to rotate any remaining tokens.
What can I do to protect my PyPI account?
Here are security practices to protect your PyPI account:
Use Trusted Publishing: If you are using a build platform to publish packages to PyPI, consider using a Trusted Publisher. This eliminates the need to manage long-lived authentication tokens, reducing the risk of credential exposure. Trusted Publishing uses short-lived, scoped tokens for each build, minimizing the impact of any potential compromise. This approach has risen in popularity, with other registries like Crates.io, RubyGems, and npmjs.com adopting similar models.
When using GitHub Actions, consider layering in additional security measures, like requiring human approval via GitHub Environments before publishing. This blog post from pyOpenSci has detailed guidance on adding manual review steps to GitHub Actions workflows.
Audit your workflows for misconfiguration: Review your GitHub Actions workflows for any potential security issues. Tools like zizmor and CodeQL can help identify vulnerabilities in your CI/CD pipelines. Adopt automated scanning in the repository to catch future issues.
Review your account activity: Regularly check your PyPI account activity for any unauthorized actions. If you notice any suspicious activity, report it to the PyPI security team immediately.
Taking any of these steps helps mitigate the risk of compromise and keeps packages secure.
themoscowtimes.com
Dec. 2, 2025
Hundreds of Porsche vehicles across Russia have been rendered undriveable after a failure in their factory-installed satellite security system, according to reports from owners and dealerships.
Drivers in Moscow, Krasnodar and other cities began reporting sudden engine shutdowns and fuel-delivery blockages last week, effectively immobilizing their vehicles.
Rolf, Russia’s largest dealership group, said service requests spiked on Friday as cars lost connection to their onboard alarm modules, which are linked via satellite.
The outage affects all Porsche models and engine types, and any vehicle could potentially lock itself automatically, a Rolf representative told the RBC news website.
“It’s possible this was done deliberately,” the representative was quoted as saying, though no evidence has emerged to support that claim.
Owners’ groups say the problem appears tied to the Vehicle Tracking System, or VTS, which is an onboard security module.
The Russian Porsche Macan Club said some drivers had restored function by disabling or rebooting the VTS, while others reported success after disconnecting their car batteries for up to 10 hours, according to the Telegram channel Mash.
Rolf said specialists were still investigating the root cause of the problem. Porsche’s office in Russia and its global headquarters in Germany have not yet commented on the system failure.
Porsche halted deliveries and suspended its commercial operations in Russia after the full-scale invasion of Ukraine in February 2022. However, the company still retains ownership of three subsidiaries in the country, which it has so far been unable to sell.
The Guardian
Dan Milmo, Global technology editor.
Wed 3 Dec 2025 07.00 CET
Researchers uncovered 354 AI-focused accounts that had accumulated 4.5bn views in a month
Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, or at the same time of day, an indication of an automated account, and most of the accounts were launched at the beginning of the year.
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. More than 100m pieces of content are uploaded to the platform every day, indicating that labelled AI material is a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
Of the accounts that posted content most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found the accounts did not label half of the content they posted and less than 2% carried the TikTok label for AI content – which the nonprofit warned could increase the material’s deceptive potential. Researchers added that the accounts sometimes escape TikTok’s moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts revealed in the study have subsequently been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments with anti-immigrant narratives and material sexualising female bodies, including girls who appeared to be underage. The female body category accounted for half of the top 10 most active accounts, said AI Forensics, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts have been taken down by TikTok after they were referred to the platform by the Guardian.
TikTok said the report’s claims were “unsubstantiated” and the researchers had singled it out for an issue that was affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
The most popular accounts highlighted by AI Forensics in terms of views had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter up people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.
| TechCrunch
Zack Whittaker
10:55 AM PST · December 3, 2025
Marquis said ransomware hackers stole reams of banking customer data, containing personal information and financial records, as well as Social Security numbers, belonging to hundreds of thousands of people. The number of affected people is expected to rise.
Fintech company Marquis is notifying dozens of U.S. banks and credit unions that they had customer data stolen in a cyberattack earlier this year.
Details of the cyberattack emerged this week after Marquis filed data breach notices with several U.S. states confirming its August 14 incident as a ransomware attack.
Texas-based Marquis is a marketing and compliance provider that allows banks and other financial institutions to collect and visualize all of their customer data in one place. The company counts more than 700 banking and credit union customers on its website. As such, Marquis has access to and stores large amounts of data belonging to consumer banking customers across the United States.
At least 400,000 people are so far confirmed affected by the data breach, according to legally required disclosures filed in the states of Iowa, Maine, Texas, Massachusetts, and New Hampshire that TechCrunch has reviewed.
Texas has the largest number of state residents so far who had data stolen in the breach, affecting at least 354,000 people.
Marquis said in its notice with Maine’s attorney general that banking customers with the Maine State Credit Union accounted for the majority of its data breach notifications, or around one-in-nine people who are known to be affected throughout the state.
The number of individuals affected by the breach is expected to rise as more data breach notifications roll in from other states.
Marquis said the hackers stole customer names, dates of birth, postal addresses, and financial information, such as bank account, debit, and credit card numbers. Marquis said the hackers also stole customers’ Social Security numbers.
According to its most recent notices, Marquis blamed the ransomware attack on hackers who exploited a vulnerability in its SonicWall firewall. The vulnerability was considered a zero-day, meaning the flaw was not known to SonicWall or its customers before it was maliciously exploited by hackers.
Marquis did not attribute the ransomware attack to a particular group, but the Akira ransomware gang was reportedly behind the mass-hacks targeting SonicWall customers at the time.
TechCrunch asked Marquis if it is aware of the total number of people affected by the breach, and if Marquis received any communication from the hackers or if the company paid a ransom, but we did not hear back by the time of publication.
sicuranext.com
Claudio Bono
01 Dec 2025
Earlier this year, our CTI team set out to build something we'd been thinking about for a while: a phishing intelligence pipeline that could actually keep up with the threat. We combined feeds from hundreds of independent sources with our own real-time hunt for suspicious SSL/TLS certificates. The goal was simple: get better visibility into what attackers are actually doing, not what they were doing six months ago.
Last quarter's numbers hit harder than we expected: 42,000+ validated URLs and domains, all actively serving phishing kits, command-and-control infrastructure, or payload delivery.
This isn't your grandfather's phishing problem. We're not talking about misspelled PayPal domains and broken English. What we're seeing is organized, efficient, and frankly, impressive in all the wrong ways. This research breaks down the infrastructure, TTPs, and operational patterns behind modern phishing—and what it means for anyone trying to defend against it.
Finding #1: All Roads Lead to Cloudflare
Here's the headline: 68% of all phishing infrastructure we tracked lives on Cloudflare.
Provider Domains % of Total
Cloudflare 17,202 68.0%
GCP 3,414 13.5%
AWS 2,185 8.6%
Azure 1,355 5.4%
This isn't random. Cloudflare's free tier is a gift to threat actors—zero upfront cost, world-class DDoS protection (yes, really), and proxy services that completely mask origin servers. Good luck tracking down the actual host when everything's bouncing through Cloudflare's edge network.
We're seeing thousands of malicious domains clustered on AS13335 alone. That's Cloudflare's primary ASN, and it's become the de facto home base for phishing operations worldwide.
The CDN Divide: Two Strategies, One Ecosystem
When we looked at the 12,635 unique IPs hosting these IOCs, a clear pattern emerged. The threat landscape has forked:
51.54% direct hosting – Think disposable infrastructure. Spin it up fast, burn it down faster. Perfect for smishing blasts and hit-and-run campaigns.
48.46% CDN/proxy-protected: The long game. These setups are built to survive, leveraging CDNs (92% Cloudflare, naturally) for origin obfuscation and anti-takedown resilience.
Here's the problem: your IP-based blocking protection? It works on roughly half the threat landscape. The other half just laughs at you from behind Cloudflare's proxy. You need URL filtering, domain heuristics, and TLS fingerprinting now. IP blocks alone are a coin flip.
And before anyone says "these domains must be unstable", we saw a 96.16% mean DNS resolution rate. These operators run infrastructure like a Fortune 500 company. High availability, minimal downtime, proper DevOps hygiene. It's professional-grade crime.
Finding #2: Abusing Trust at Scale
Forget .xyz and .tk domains. Attackers have moved upmarket.
TLD Count Why They Use It
.com 11,324 Universal legitimacy
.dev 7,389 Targets developers
.app 2,992 Mobile/SaaS impersonation
.io 2,425 Tech sector credibility
.cc 1,745 Cheap, minimal oversight
The surge in .dev and .app domains tells you everything. Attackers aren't just going after your CFO anymore: they're targeting developers. Fake GitHub OAuth flows, spoofed Vercel deployment pages, bogus npm package sites. They're hunting credentials from the people who actually understand security, betting (correctly) that a something.dev domain gets less scrutiny than something-phishing.tk.
Free Hosting: The Perfect Cover
Now pair this with free hosting platforms, and you get a disaster: 72% of domains in our dataset used obfuscation via legitimate services.
Vercel: 1,942 domains
GitHub Pages: 1,540 domains
GoDaddy Sites: 734 domains
Webflow: 669 domains
Try explaining to your CISO why you need to block github.io or vercel.app. You can't. Your developers need those. Your business uses those. Attackers know this, and they're weaponizing it. Domain reputation systems collapse when every phishing page sits under a trusted parent domain.
Finding #3: PhaaS and the Industrialization of Crime
We need to stop calling these "phishing kits." That undersells what we're dealing with.
What we're seeing is Phishing-as-a-Service (PhaaS): full-stack criminal SaaS platforms. Services like Caffeine - now offline - and W3LL offer subscription-based access to complete attack infrastructure: hosting, templates, exfiltration pipelines, even customer support. They've turned phishing into a commodity anyone can buy.
The real nightmare feature? MFA bypass. Kits like EvilProxy and Tycoon 2FA don't bother stealing passwords anymore. They operate as adversary-in-the-middle (AitM) proxies, sitting between the victim and the legitimate service. User authenticates, kit intercepts, passes creds through to the real site, then steals the resulting session cookie. No password needed. No MFA challenge. Just instant account access.
These platforms also ship with serious evasion tech:
Geofencing to block security researchers by IP range
User-Agent Based Cloaking that targets devices by browser user agent: often the final landing page is only visible in mobile browsers
DevTools detection (open F12 and the page immediately stops working)
Cloudflare CAPTCHA to filter out automated scanners
Over the past four months, we identified 20 distinct phishing clusters based on shared infrastructure fingerprints: same rotated IPs, same registrars, identical evasion patterns and obfuscation methods. This isn't a bunch of script kiddies copying code. It's coordinated, engineered operations with centralized data management and exfiltration workflows.
Almost 60% of the observed IOCs are assessed as linked to PhaaS. This reflects a global tendency to separate those who build and manage the actual infrastructure from those (often non-technical users) who rent it, hoping to make a significant profit by reselling stolen data.
Finding #4: Meta in the Crosshairs
If there's one target dominating the landscape, it's Meta. 10,267 mentions: 42% of all brand impersonation we tracked.
Brand Mentions Attack Type
Meta 10,267 Facebook/Instagram/WhatsApp creds
Amazon 2,617 Payment data, account takeover
Netflix 2,450 Subscription scams
PayPal 1,993 Financial fraud, redirects
Stripe 1,571 Merchant account compromise
Why Meta? Three billion users. Multiple attack surfaces. Credential reuse across platforms. It's target-rich and full of high-value accounts. The focus on Stripe and PayPal shows attackers aren't just after creds anymore: they're after money. Direct financial fraud, merchant compromise, payment interception.
What This Means for Defense
The era of "just block the domain" is over. We're up against industrialized, adaptive, professionally-run adversaries. Deterministic detection is dead. You can't regex your way out of this anymore; defenses need to evolve:
CDN-aware detection – IP blocking is 50% effective at best
Behavioral analysis – Focus on session anomalies, not just domains
TLS fingerprinting – Track certificate patterns and issuance velocity (a certificate-transparency hunting sketch follows this list)
Hunt for PhaaS indicators – Cluster campaigns by shared infrastructure
User education that doesn't suck – Stop teaching people about domain typosquatting or http vs https concepts: teach them what real scenarios look like in practice.
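As one concrete angle on the TLS fingerprinting item above, certificate transparency logs can be watched for fresh brand-lookalike issuance. A rough sketch, illustrative only and not the pipeline described in this post; it uses crt.sh's unofficial JSON output, whose fields (name_value, not_before) may change without notice:

// Illustrative sketch, not the pipeline described in this post.
const BRANDS = ["paypal", "stripe", "netflix"];
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

async function hunt(brand: string): Promise<void> {
  const res = await fetch(`https://crt.sh/?q=%25${brand}%25&output=json`);
  if (!res.ok) return;
  const certs: Array<{ name_value: string; not_before: string }> = await res.json();
  for (const cert of certs) {
    if (Date.now() - new Date(cert.not_before).getTime() > WEEK_MS) continue; // recent issuance only
    for (const name of cert.name_value.split("\n")) {
      // Crude heuristic: brand keyword inside a domain the brand does not own.
      if (name.includes(brand) && !name.endsWith(`${brand}.com`)) {
        console.log("review:", name);
      }
    }
  }
}

(async () => {
  for (const brand of BRANDS) await hunt(brand);
})();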
This isn't FUD. This is what 42,000 live phishing sites look like when you actually go hunting for them. The threat is real, it's organized, and it's not slowing down.
What Comes Next: Diving Deep into the Criminal Engine
In our next in-depth analysis, we will reveal the real infrastructure that powers this industrialization. We will guide you step by step through a modern and complex PhaaS platform, demonstrating exactly how the TTPs described in this article function in a real operational environment.
| The Record from Recorded Future News
therecord.media
Jonathan Greig
December 1st, 2025
A recent cyberattack on South Korea’s largest cryptocurrency exchange was allegedly conducted by a North Korean government-backed hacking group.
Yonhap News Agency reported on Friday that South Korean government officials are involved in the investigation surrounding $30 million worth of cryptocurrency that was stolen from Upbit on Wednesday evening.
On Friday, South Korean officials told the news outlet that North Korea’s Lazarus hacking group was likely involved in the theft based on the tactics used to break into the cryptocurrency platform and the methods deployed to launder the stolen funds.
Investigators believe the hackers impersonated administrators at Upbit before transferring about $30 million.
In a statement, the company called the theft an “abnormal withdrawal” and said it is in the process of investigating the attack.
Oh Kyung-seok, CEO of parent company Dunamu, added that the platform has suspended deposits and withdrawals.
All losses will be covered by Upbit. The attack came one day after South Korean internet giant Naver purchased Dunamu for $10 billion.
“After detecting the abnormal withdrawal, Upbit immediately conducted an emergency security review of the relevant network and wallet systems,” the CEO said. “To prevent further abnormal transfers, all assets have been transferred to a secure cold wallet.”
Upbit tracked some of the stolen funds to another wallet on Thursday and is trying to freeze some of the assets so they cannot be moved further.
Investigators noted that the attack bears the hallmarks of a previous incident in 2019 when about $40 million was stolen from Upbit. That attack was also attributed to Lazarus — one of the most prolific state-backed hacking groups.
Lazarus is allegedly organized within the North Korean Reconnaissance General Bureau and has stolen billions of dollars' worth of cryptocurrency over the last nine years, with blockchain monitoring firm Chainalysis saying hacking groups connected to North Korea’s government stole $1.3 billion worth of cryptocurrency across 47 incidents in 2024.
The group is accused of stealing $1.5 billion from Dubai-based crypto platform Bybit in February. The United Nations said last year that it is tracking dozens of incidents over a five-year period that have netted North Korea $3 billion.
bleepingcomputer.com
By Bill Toulas
December 1, 2025
The popular open-source SmartTube YouTube client for Android TV was compromised after an attacker gained access to the developer's signing keys, leading to a malicious update being pushed to users.
The compromise became known when multiple users reported that Play Protect, Android's built-in antivirus module, blocked SmartTube on their devices and warned them of a risk.
The developer of SmartTube, Yuriy Yuliskov, admitted that his digital keys were compromised late last week, leading to the injection of malware into the app.
Yuliskov revoked the old signature and said he would soon publish a new version with a separate app ID, urging users to move to that one instead.
SmartTube is one of the most widely downloaded third-party YouTube clients for Android TVs, Fire TV sticks, Android TV boxes, and similar devices.
Its popularity stems from the fact that it is free, can block ads, and performs well on underpowered devices.
A user who reverse-engineered the compromised SmartTube version 30.51 found that it includes a hidden native library named libalphasdk.so [VirusTotal]. This library does not exist in the public source code, indicating it was injected into the release builds.
"Possibly a malware. This file is not part of my project or any SDK I use. Its presence in the APK is unexpected and suspicious. I recommend caution until its origin is verified," cautioned Yuliskov on a GitHub thread.
The library runs silently in the background without user interaction, fingerprints the host device, registers it with a remote backend, and periodically sends metrics and retrieves configuration via an encrypted communications channel.
All this happens without any visible indication to the user. While there's no evidence of malicious activity such as account theft or participation in DDoS botnets, the library could enable such activity at any time.
Although the developer announced on Telegram the release of safe beta and stable test builds, they have not reached the project's official GitHub repository yet.
Also, the developer has not provided full details of what exactly happened, which has created trust issues in the community.
Yuliskov promised to address all concerns once the final release of the new app is pushed to the F-Droid store.
Until the developer transparently discloses all points publicly in a detailed post-mortem, users are recommended to stay on older, known-to-be-safe builds, avoid logging in with premium accounts, and turn off auto-updates.
Impacted users are also recommended to reset their Google Account passwords, check their account console for unauthorized access, and remove services they don't recognize.
At this time, it is unclear exactly when the compromise occurred or which versions of SmartTube are safe to use. One user reported that Play Protect doesn't flag version 30.19, so it appears safe.
BleepingComputer has contacted Yuliskov to determine which versions of the SmartTube app were compromised, and he responded with the following:
"Some of the older builds that appeared on GitHub were unintentionally compromised due to malware present on my development machine at the time they were created. As soon as I noticed the issue in late November, I immediately wiped the system and cleaned the environment, including the GitHub repository."
"I became aware of the malware issue around version 30.47, but as users reported lately it started around version 30.43. So, for my understanding the compromised versions are: 30.43-30.47."
"After cleaning the environment, a couple of builds were released using the previous key (prepared on the clean system), but from version 30.55 onward I switched to a new key for full security. The differing hashes for 30.47 Stable v7a are likely the result of attempts to restore that build after cleaning the infected system."
Update 12/2 - Added developer comment and information.
nextron-systems.com - Nextron Systems
by Marius Benthin, Nov 28, 2025
Over the last weeks we’ve been running a new internal artifact-scanning service across several large ecosystems. It’s still growing feature-wise, LLM scoring and a few other bits are being added, but the core pipeline is already pulling huge amounts of stuff every week – Docker Hub images, PyPI packages, NPM modules, Chrome extensions, VS Code extensions. Everything gets thrown through our signature set that’s built to flag obfuscated JavaScript, encoded payloads, suspicious command stubs, reverse shells, and the usual “why is this here” indicators.
The only reason this works at the scale we need is THOR Thunderstorm running in Docker. That backend handles the heavy lifting for millions of files, so the pipeline just feeds artifacts into it at a steady rate. Same component is available to customers; if someone wants to plug this kind of scanning into their own CI or ingestion workflow, Thunderstorm can be used exactly the way we use it internally.
We review millions of files; most of the noise is the classic JS-obfuscation stuff that maintainers use to “protect” code; ok… but buried in the noise you find the things that shouldn’t be there at all. And one of those popped up this week.
Our artifact scanning approach
We published an article this year about blind spots in security tooling and why malicious artifacts keep slipping through the standard AV checks. That’s the gap this whole setup is meant to cover. AV engines choke on obfuscated scripts, and LLMs fall over as soon as you throw them industrial-scale volume. Thunderstorm sits in the middle – signature coverage that hits encoded payloads, weird script constructs, stagers, reverse shells, etc., plus the ability to scale horizontally in containers.
The workflow is simple:
pull artifacts from Docker Hub, PyPI, NPM, the VS Code Marketplace, Chrome Web Store;
unpack them into individual files;
feed them into Thunderstorm;
store all hits;
manually review anything above a certain score.
We run these scans continuously. The goal is to surface the obviously malicious uploads quickly and not get buried in the endless “maybe suspicious” noise.
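As a rough illustration of the "feed them into Thunderstorm" step, a submission might look like the sketch below; this is not our actual pipeline code, and the host, port, and /api/checkAsync path are assumptions to confirm against the Thunderstorm documentation for your deployment.

// Rough sketch of submitting one unpacked file to a THOR Thunderstorm instance.
// Host, port, and endpoint path are assumptions - verify them against the docs.
import { readFileSync } from "node:fs";
import { basename } from "node:path";

async function submit(filePath: string): Promise<void> {
  const form = new FormData();
  form.append("file", new Blob([readFileSync(filePath)]), basename(filePath));
  const res = await fetch("http://thunderstorm.internal:8080/api/checkAsync", {
    method: "POST",
    body: form,
  });
  console.log(filePath, "->", res.status, await res.text());
}

submit(process.argv[2]);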
The finding: malicious VS Code extension with Rust implants
While reviewing flagged VS Code extensions, Marius stumbled over an extension named “Icon Theme: Material”, published under the account “IconKiefApp”. It mimics the legitimate and extremely popular Material Icon Theme extension by Philipp Kief. Same name pattern, same visuals, but not the same author.
The fake extension had more than 16,000 installs already.
Inside the package we found two Rust implants: one Mach-O, one Windows PE. The paths looked like this:
icon-theme-materiall.5.29.1/extension/dist/extension/desktop/
The Mach-O binary contains a user-path string identical in style to the GlassWorm samples reported recently by Koi (VT sample link below). The PE implant shows the same structure. Both binaries are definitely not part of any real icon-theme extension.
The malicious extension:
https://marketplace.visualstudio.com/items?itemName=Iconkieftwo.icon-theme-materiall
The legitimate one:
https://marketplace.visualstudio.com/items?itemName=PKief.material-icon-theme
Related GlassWorm sample:
https://www.virustotal.com/gui/file/eafeccc6925130db1ebc5150b8922bf3371ab94dbbc2d600d9cf7cd6849b056e
IOCs
VS Code Extension
0878f3c59755ffaf0b639c1b2f6e8fed552724a50eb2878c3ba21cf8eb4e2ab6
icon-theme-materiall.5.29.1.zip
Rust Implants
6ebeb188f3cc3b647c4460c0b8e41b75d057747c662f4cd7912d77deaccfd2f2
(os.node) PE
fb07743d139f72fca4616b01308f1f705f02fda72988027bc68e9316655eadda
(darwin.node) MACHO
Signatures
YARA rules that triggered on the samples:
SUSP_Implant_Indicators_Jul24_1
SUSP_HKTL_Gen_Pattern_Feb25_2
Status
We already reported the malicious extension to Microsoft. The previous version, 5.29.0, didn’t contain any implants. The publisher then pushed a new update, version 5.29.1, on 28 November 2025 at 11:34, and that one does include the two Rust implants.
As of now (28 November, 14:00 CET), the malicious 5.29.1 release is still online. We expect Microsoft to remove the extension from the Marketplace. We’ll share more details once we’ve fully unpacked both binaries and mapped the overlaps with the GlassWorm activity.
Closing
This is exactly the kind of thing the artifact-scanner was built for. Package ecosystems attract opportunistic uploads; VS Code extensions are no different. We’ll keep scanning the big ecosystems and publish findings when they’re clearly malicious. If you maintain an extension or a package registry and want to compare detections with us, feel free to reach out; we’re adding more sources week by week.
Update 29.11.2025
Since we published the initial post, a full technical analysis of the Rust implants contained in the malicious extension has been completed. The detailed breakdown is now available in our follow-up article: “Analysis of the Rust implants found in the malicious VS Code extension”.
That post describes how the implants operate on Windows and macOS, their command-and-control mechanism via a Solana-based wallet, the encrypted-payload delivery, and fallback techniques including a hidden Google Calendar-based channel.
Readers who want full technical context, IOCs and deeper insight are encouraged to review the new analysis.
Post-mortem of Shai-Hulud attack on November 24th, 2025
Oliver Browne
Nov 26, 2025
PostHog news - posthog.com
At 4:11 AM UTC on November 24th, a number of our SDKs and other packages were compromised by a malicious self-replicating worm, Shai-Hulud 2.0. New versions were published to npm containing a preinstall script (see the illustration after this list) that:
Scanned the environment the install script was running in for credentials of any kind using Trufflehog, an open-source security tool that searches codebases, Git histories, and other data sources for secrets.
Exfiltrated those credentials by creating a new public repository on GitHub and pushing the credentials to it.
Used any npm credentials found to publish malicious packages to npm, propagating the breach.
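For context on how the worm gets to run at all: npm executes lifecycle scripts such as preinstall automatically during npm install. Below is a minimal illustration of the pattern - the manifest is a mock-up, not one of the actual compromised packages; only the script file name matches what was reportedly used.

{
  "name": "example-compromised-package",
  "version": "0.0.0",
  "scripts": {
    "preinstall": "node setup_bun.js"
  }
}

In the reported compromised releases, that hook installed Bun and then executed the much larger bun_environment.js payload, which is why disabling lifecycle scripts (for example npm install --ignore-scripts, or pnpm 10's default behaviour) blocks this class of attack outright.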
By 9:30 AM UTC, we had identified the malicious packages, deleted them, and revoked the tokens used to publish them. We also began the process of rolling all potentially compromised credentials pre-emptively, although we had not at the time established how our own npm credentials had been compromised (we have now, details below).
The attack only affected our Javascript SDKs published in npm. The most relevant compromised packages and versions were:
posthog-node 4.18.1, 5.13.3 and 5.11.3
posthog-js 1.297.3
posthog-react-native 4.11.1
posthog-docusaurus 2.0.6
posthog-react-native-session-replay 1.2.2
@posthog/agent 1.24.1
@posthog/ai 7.1.2
@posthog/cli 0.5.15
What should you do?
If you are using the script version of PostHog, you were not affected, since the worm spread via the preinstall step when installing your dependencies on your development/CI/production machines.
If you are using one of our Javascript SDKs, our recommendations are to:
Look for the malicious files locally, in your home folder, or your document roots:
Terminal
find . -name "setup_bun.js" \
-o -name "bun_environment.js" \
-o -name "cloud.json" \
-o -name "contents.json" \
-o -name "environment.json" \
-o -name "truffleSecrets.json"
Check npm logs for suspicious entries:
Terminal
grep -R "shai" ~/.npm/_logs
grep -R "preinstall" ~/.npm/_logs
Delete any cached dependencies:
Terminal
rm -rf node_modules
npm cache clean --force
pnpm cache delete
Pin any dependencies to a known-good version (in our case, all of the latest published versions were published after we identified the attack and are known-good), and then reinstall your dependencies.
We also suggest making use of the minimumReleaseAge setting available in both yarn and pnpm. Setting it to a high enough value (such as 3 days) ensures you won't install freshly published malicious versions before researchers, package managers, and library maintainers have had a chance to remove them.
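As a sketch of what that looks like in pnpm (the setting is specified in minutes in recent pnpm 10 releases; check the docs of the exact pnpm or yarn version you use, as the option's name and location may differ):

# pnpm-workspace.yaml - refuse to install versions published less than 3 days ago
minimumReleaseAge: 4320   # minutes (3 days)

yarn exposes an equivalent delay setting; the principle is the same - newly published versions are ignored until they have aged past the threshold.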
How did it happen?
PostHog's own package publishing credentials were not compromised by the worm described above. We were targeted directly, as were a number of other major vendors, to act as a "patient zero" for this attack.
The first step the attacker took was to steal the Github Personal Access Token of one of our bots, and then use it to steal the rest of the Github secrets available in our CI runners, which included our npm publishing token. These steps were carried out days before the attack on the 24th of November.
At 5:40 PM on November 18th, the now-deleted user brwjbowkevj opened a pull request against our posthog repository, including this commit. The PR changed the code of a script executed by a workflow we were running against external contributions, modifying it to send the secrets available during that script's execution to a webhook controlled by the attacker. Those secrets included the Github Personal Access Token of one of our bots, which had broad repo write permissions across our organization. The PR itself was deleted along with the fork it came from when the user was deleted, but the commit was not.
The PR was opened, the workflow run, and the PR closed within the space of 1 minute (screenshots include timestamps in UTC+2, the author's timezone):
initial PR logs
At 3:28 PM UTC on November 23rd, the attacker used these credentials to delete a workflow run. We believe this was a test, to see if the stolen credentials were still valid (it was successful).
At 3:43 PM, the attacker used these credentials again to create another commit, masquerading, by chance, as this report's author (we believe the attacker picked a branch at random, and the author simply happened to be its last legitimate contributor; the author holds no special privileges on his GitHub account).
This commit was pushed directly as a detached commit, not as part of a pull request. In it, the attacker directly modified an arbitrarily chosen workflow (our Lint PR workflow) to exfiltrate all of our Github secrets. Unlike the earlier PR attack, which could only modify the script called from the workflow and so could only exfiltrate our bot PAT, this commit was made with the ultra-permissive PAT's full write access to our repository, which meant the attacker could run arbitrary code in the scope of our Github Actions runners.
With that done, the attacker was able to run their modified workflow, and did so at 3:45 PM UTC:
Follow up commit workflow runs
The principal associated with these workflow actions is posthog-bot, our Github bot user, whose PAT was stolen in the initial PR. We were only able to identify this specific commit as the pivot after the fact, using Github audit logs, because the attacker deleted the workflow run once it had completed.
At this point, the attacker had our npm publishing token, and 12 hours later, at 4:11 AM UTC the following morning, published the malicious packages to npm, starting the worm.
As noted, PostHog was not the only vendor used as an initial vector for this broader attack. We expect other vendors will be able to identify similar attack patterns in their own audit logs.
Why did it happen?
PostHog is proudly open-source, and that means a lot of our repositories frequently receive external contributions (thank you).
For external contributions, we want to automatically assign reviewers depending on which parts of our codebase the contribution changed. GitHub's CODEOWNERS file is typically used for this, but we want the review to be a "soft" requirement, rather than blocking the PR for internal contributors who might be working on code they don't own.
We had a workflow, auto-assign-reviewers.yaml, which was supposed to do this, but it never really worked for external contributions since it required manual approval, defeating the purpose of automatically tagging the right people without manual interference.
One of our engineers figured out this was because it triggered on: pull_request, which means external contributions (which come from forks, rather than branches in the repo like internal contributions) would not have the workflow run automatically. The fix was to change the trigger to on: pull_request_target, which runs the workflow as it is defined in the PR's target repo/branch, and is therefore considered safe to auto-run.
Our engineer opened a PR to make this change, and also make some fixes to the script, including checking out the current branch, rather than the PR base branch, so that the diffing would work properly. This change seemed safe, as our understanding of on: pull_request_target was, roughly, "ok, this runs the code as it is in master/the target repo".
This was a dangerous misconception, for a few reasons:
on: pull_request_target only ensures the workflow definition is taken from the PR target; it says nothing about which code that workflow runs - that's controlled by the checkout step.
This particular workflow executed code from within the repo - a script called assign-reviewers.js, which was initially developed for internal (and crucially, trusted) auto-assignment, but was now being used for external assignment too.
The workflow was modified to manually check out the git commit of the PR head, rather than the PR base, so that the diffing would work correctly for external contributions - but this meant the code being run was controlled by the PR author.
These pieces together meant it was possible for a pull request which modified assign-reviewers.js to run arbitrary code, within the context of a trusted CI run, and therefore steal our bot token.
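To make that combination concrete, here is a deliberately simplified reconstruction of the vulnerable pattern - illustrative only, not our actual auto-assign-reviewers.yaml, and BOT_PAT is a placeholder secret name. The trigger is privileged, but the checkout pulls the fork's code, so the script being executed belongs to the PR author while the repository's secrets are in scope.

# Illustrative reconstruction of the vulnerable pattern - not the real workflow
name: Auto-assign reviewers
on: pull_request_target          # privileged trigger: runs with base-repo secrets, auto-runs for forks
jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # checks out the PR author's commit instead of the trusted base branch...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then runs a script from that untrusted checkout, with secrets available
      - run: node .github/scripts/assign-reviewers.js
        env:
          GITHUB_TOKEN: ${{ secrets.BOT_PAT }}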
Why did this workflow change get merged? Honestly, security is unintuitive.
The engineer making the change thought pull_request_target ensured that the version of assign-reviewers.js being executed, a script stored in .github/scripts in the repository, would be the one on master, rather than the one in the PR.
The engineer reviewing the PR thought the same.
None of us noticed the security hole between the PR being merged and the attack (the PR making this change was merged on the 11th of September). This workflow change was even flagged by one of our static analysis tools before merge, but we explicitly dismissed the alert because we mistakenly believed our usage was safe.
Workflow rules, triggers and execution contexts are hard to reason about - so hard that Github is actively making changes to make them simpler and closer to the mental model described above, although in our case those changes would not have protected us against the initial attack.
Notably, we identified copycat attacks on the following day attempting to leverage the same vulnerability, and while we prevented those, we had to take frustratingly manual and uncertain measures to do so. The changes Github is making to the behaviour of pull_request_target would have prevented those copycats automatically for us.
How are we preventing it from happening again?
This is the largest and most impactful security incident we've ever had. We feel terrible about it, and we're doing everything we can to prevent something like this from happening again.
I won't enumerate all the process and posture changes we're implementing here, beyond saying:
We've significantly tightened our package release workflows (moving to the trusted publisher model; see the sketch after this list).
Increased the scrutiny any PR modifying a workflow file gets (requiring a specific review from someone on our security team).
Switched to pnpm 10 (to disable preinstall/postinstall scripts and use minimumReleaseAge).
Re-worked our Github secrets management to make our response to incidents like this faster and more robust.
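On the first point, the idea behind trusted publishing is that the CI job authenticates to npm with a short-lived OIDC token minted by Github for that specific workflow run, so there is no long-lived npm token sitting in repository secrets to steal. A rough sketch of what such a release job can look like (illustrative only; the exact npm CLI requirements and registry configuration are described in npm's trusted-publishing docs):

# Illustrative release job using OIDC-based trusted publishing - no NPM_TOKEN secret needed
name: Release
on:
  push:
    tags: ['v*']
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write       # lets the job mint an OIDC token that npm can verify
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish --provenance --access public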
PostHog is, in many of our engineers' minds, first and foremost a data company. We've grown a lot in the last few years, and in that time our focus has always been on data security - ensuring the data you send us is safe, that our cloud environments are secure, and that we never expose personal information. This kind of attack, being leveraged as an initial vector for an ecosystem-wide worm, simply wasn't something we'd prepared for.
At a higher level, we've started to take broad security a lot more seriously, even prior to this incident. In July, we hired Tom P, who's been fully dedicated to improving our overall security posture. Both our incident response and the analysis in this post-mortem simply wouldn't have been possible without the tools and practices he's put in place, and while there's a huge amount still to do, we feel good about the progress we're making. We have to do better here, and we feel confident we will.
Given the prominence of this attack and our desire to take this work seriously, we wanted to use this as a chance to say that if you'd like to work in our security team, and write post-mortems like these (or, better still, write analysis like this about attacks you stopped), we're always looking for new talent. Email tom.p at posthog dot com, or apply directly on our careers page.
Illegal cryptocurrency mixing service ‘Cryptomixer’ taken down | Europol
europol.europa.eu
From 24 to 28 November 2025, Europol supported an action week conducted by law enforcement authorities from Switzerland and Germany in Zurich, Switzerland. The operation focused on taking down the illegal cryptocurrency mixing service ‘Cryptomixer’, which is suspected of facilitating cybercrime and money laundering.
Seizure banner: OP Olympia - this domain has been seized
Three servers were seized in Switzerland, along with the cryptomixer.io domain. The operation resulted in the confiscation of over 12 terabytes of data and more than EUR 25 million worth of the cryptocurrency Bitcoin. After the illegal service was taken over and shut down, law enforcement placed a seizure banner on the website.
A service to obfuscate the origin of criminal funds
Cryptomixer was a hybrid mixing service accessible via both the clear web and the dark web. It facilitated the obfuscation of criminal funds for ransomware groups, underground economy forums and dark web markets. Its software blocked the traceability of funds on the blockchain, making it the platform of choice for cybercriminals seeking to launder illegal proceeds from a variety of criminal activities, such as drug trafficking, weapons trafficking, ransomware attacks, and payment card fraud. Since its creation in 2016, over EUR 1.3 billion in Bitcoin has been mixed through the service.
Deposited funds from various users were pooled for a long and randomised period before being redistributed to destination addresses, again at random times. As many digital currencies provide a public ledger of all transactions, mixing services make it difficult to trace specific coins, thus concealing the origin of cryptocurrency.
Mixing services such as Cryptomixer offer their clients anonymity and are often used before criminals redirect their laundered assets to cryptocurrency exchanges. This allows ‘cleaned’ cryptocurrency to be exchanged for other cryptocurrencies or for fiat currency through cash machines or bank accounts.
Europol’s support
Europol facilitated the exchange of information in the framework of the Joint Cybercrime Action Taskforce (J-CAT), which is hosted at Europol’s headquarters in The Hague, the Netherlands. One of Europol’s priorities is to act as a broker of law enforcement knowledge, providing a hub through which Member States can connect and benefit from one another’s and Europol’s expertise.
Throughout the operation, the agency provided crucial support, including coordinating the involved partners and hosting operational meetings. On the action day, Europol’s cybercrime experts provided on-the-spot support and forensic assistance.
In March 2023, Europol had already supported the takedown of the largest mixing service at that time, ‘Chipmixer’.
Participating countries:
Germany: Federal Criminal Police Office (Bundeskriminalamt); Prosecutor General’s Office Frankfurt am Main, Cyber Crime Centre (Generalstaatsanwaltschaft Frankfurt am Main, Zentralstelle zur Bekämpfung der Internet- und Computerkriminalität)
Switzerland: Zurich City Police (Stadtpolizei Zürich); Zurich Cantonal Police (Kantonspolizei Zürich); Public Prosecutor’s Office Zurich (Staatsanwaltschaft Zürich)
cybersecuritynews.com
By Guru Baran - November 29, 2025
CISA has officially updated its Known Exploited Vulnerabilities (KEV) catalog to include a critical flaw affecting OpenPLC ScadaBR, confirming that threat actors are actively weaponizing the vulnerability in the wild.
The security defect, identified as CVE-2021-26829, is a Cross-Site Scripting (XSS) vulnerability rooted in the system_settings.shtm component of ScadaBR. While the vulnerability was first disclosed several years ago, its addition to the KEV catalog on November 28, 2025, signals a concerning resurgence in exploitation activity targeting industrial control environments.
The vulnerability allows a remote attacker to inject arbitrary web script or HTML via the system settings interface. When an administrator or an authenticated user navigates to the compromised page, the malicious script executes within their browser session.
Categorized under CWE-79 (Improper Neutralization of Input During Web Page Generation), this flaw poses significant risks to Operational Technology (OT) networks.
Successful exploitation could allow attackers to hijack user sessions, steal credentials, or modify critical configuration settings within the SCADA system. Given that OpenPLC is widely used for industrial automation research and implementation, the attack surface is notable.
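ScadaBR itself is a Java web application, so the snippet below is not its actual code; it is only a generic illustration of the CWE-79 pattern, written in Node/Express terms: a settings value echoed into HTML without encoding executes as script, while encoding on output renders it as inert text.

// Generic CWE-79 illustration - not ScadaBR code
const express = require('express');
const app = express();

let systemName = '<script>alert(1)</script>';   // imagine this was saved via a settings form

// Vulnerable: stored value interpolated into HTML verbatim, so the script executes
app.get('/settings-vulnerable', (req, res) => {
  res.send(`<h1>System: ${systemName}</h1>`);
});

// Fixed: HTML-encode on output so the browser treats the value as text, not markup
const escapeHtml = (s) => s.replace(/[&<>"']/g, (c) =>
  ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]));
app.get('/settings-fixed', (req, res) => {
  res.send(`<h1>System: ${escapeHtml(systemName)}</h1>`);
});

app.listen(3000);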
CISA indicated that this vulnerability could impact open-source components, third-party libraries, or proprietary implementations used by various products, making it challenging to fully define the scope of the threat.
Under Binding Operational Directive (BOD) 22-01, CISA has established a strict remediation timeline for Federal Civilian Executive Branch (FCEB) agencies. These agencies are required to secure their networks against CVE-2021-26829 by December 19, 2025.
While CISA has not currently linked this specific exploit to known ransomware campaigns, the agency warns that unpatched SCADA systems remain high-value targets for sophisticated threat actors.
Mitigations
Security teams and network administrators are urged to prioritize the following actions:
Apply Mitigations: Implement vendor-supplied patches or configuration changes immediately.
Review Third-Party Usage: Determine if the vulnerable ScadaBR component is embedded in other tools within the network.
Discontinue Use: If mitigations are unavailable or cannot be applied, CISA advises discontinuing the use of the product to prevent compromise.
Organizations are encouraged to review the GitHub pull request for the fix (Scada-LTS/Scada-LTS) for code-level details.
securityweek.com
By Ionut Arghire | November 24, 2025 (7:14 AM ET)
Spanish flag carrier Iberia is notifying customers that their personal information was compromised after one of its suppliers was hacked.
In Spanish-language emails sent on Sunday, a copy of which threat intelligence provider Hackmanac shared on social media, the company said that names, email addresses, and frequent flyer numbers were stolen in the attack.
According to Iberia, no passwords or full credit card data was compromised in the attack, and the incident was addressed immediately after discovery.
The airline said it also improved customer account protections by requiring a verification code to be provided when attempting to change the email address associated with the account.
Iberia said it has notified law enforcement of the incident and that it has been investigating it together with its suppliers.
The company did not say when the data breach occurred and did not name the third-party supplier that was compromised. It is unclear if the incident is linked to recently disclosed hacking campaigns involving Salesforce and Oracle EBS customers.
It should also be noted that Iberia sent out notifications roughly one week after a threat actor boasted on a hacking forum about having stolen roughly 77 gigabytes of data from the airline’s systems.
The hacker claimed to have stolen ISO 27001 and ITAR-classified information, technical aircraft documentation, engine data, and various other internal documents.
Asking $150,000 for the data, the threat actor was marketing it as suitable for corporate espionage, extortion, or resale to governments.
Founded in 1927, Iberia merged with British Airways in 2011, forming International Airlines Group (IAG), which also owns Aer Lingus, BMI, and Vueling. Iberia currently has an all-Airbus fleet, operating on routes to 130 destinations worldwide.
interestingengineering.com
By Bojan Stojkovski
Nov 23, 2025 02:26 PM EST
A new simulation by Chinese defense researchers suggests that jamming Starlink coverage over an area the size of Taiwan is technically possible.
Instead of focusing on whether Starlink can be jammed in theory, Chinese military planners are increasingly concerned with how such a feat could be attempted in a real conflict over Taiwan. The challenge is staggering: Taiwan and its allies could rely on a constellation of more than 10,000 satellites that hop frequencies, reroute traffic and resist interference in real time.
However, a recent simulation study by Chinese researchers delivers the most detailed public attempt yet to model a potential countermeasure.
Published on November 5 in the peer-reviewed journal Systems Engineering and Electronics, the paper concludes that disrupting Starlink across an area comparable to Taiwan is technically achievable – but only with a massive electronic warfare (EW) force.
Dynamic Starlink network poses major hurdle for EW
Rather than treating Starlink as a static system, Chinese researchers emphasize that its constantly shifting geometry is the real obstacle. In their peer-reviewed study, the team from Zhejiang University and the Beijing Institute of Technology notes that the constellation’s orbital planes are continuously changing, with satellites moving in and out of view at all times.
This dynamic behavior creates extreme uncertainty for any military attempting to monitor, track or interfere with Starlink’s downlink signals, the South China Morning Post reports. Unlike older satellite networks that depend on a few big geostationary satellites parked over the equator, Starlink behaves nothing like a fixed target.
Traditional systems can be jammed by simply overpowering the signal from the ground, but Starlink changes the equation. Its satellites are low-orbit, fast-moving and deployed by the thousands. A single user terminal never stays linked to just one satellite – it rapidly switches between several, forming a constantly shifting mesh in the sky. As the researchers explain, even if one link is successfully jammed, the connection simply jumps to another within seconds, making interference far harder to sustain.
Distributed jamming swarms seen as the sole viable method
Yang’s research team explains that the only realistic countermeasure would be a fully distributed jamming strategy. Instead of using a few powerful ground stations, an attacker would need hundreds – or even thousands – of small, synchronized jammers deployed in the air on drones, balloons or aircraft. Together, these platforms would form a wide electromagnetic barrier over the combat zone.
The simulation tested realistic jamming by having each airborne jammer broadcast noise at different power levels. Researchers compared wide‑beam antennas that cover more area with less energy to narrow‑beam antennas that are stronger but require precise aiming. For every point on the ground, the model calculated whether a Starlink terminal could still maintain a usable signal.
The Chinese researchers calculated that fully suppressing Starlink over Taiwan, roughly 13,900 square miles, would require at least 935 synchronized jamming platforms, not including backups for failures, terrain interference, or future Starlink upgrades. Using cheaper 23 dBW power sources with spacing of about 3 miles would push the requirement to around 2,000 airborne units, though the team stressed the results remain preliminary since key Starlink anti‑jamming details are still confidential.
privatim
privatim.ch
Monday, 24 November 2025
Cloud-based software has never been more attractive. Infrastructures that are potentially accessible to all Internet users (so-called “public clouds”) allow computing and storage capacity to be allocated dynamically according to customer needs. This economy of scale grows with the size of the cloud provider's infrastructure, which is generally international (for example, “hyperscalers” such as Microsoft, Google or Amazon).
In addition to private individuals and companies, more and more public bodies are turning to “Software-as-a-Service” (SaaS) applications from these providers. Providers, in turn, are increasingly pushing their customers toward the cloud.
However, public bodies bear a particular responsibility for the data of their citizens. They may outsource the processing of this data, but they must ensure that data protection and information security are upheld. Before outsourcing personal data to cloud computing services, authorities must therefore analyse the specific risks in each case, regardless of the sensitivity of the data, and reduce them to an acceptable level through appropriate measures (see privatim's cloud fact sheet).
For the following reasons, privatim considers that the outsourcing by public bodies of personal data that is sensitive or subject to a legal duty of secrecy to SaaS solutions from large international providers (notably M365) is not permissible in most cases:
Most SaaS solutions do not yet offer true end-to-end encryption, which would prevent the provider from accessing the data in the clear.
Companies operating globally offer too little transparency for Swiss authorities to verify compliance with contractual obligations regarding data protection and security. This applies to the implementation of technical measures and to change and release management, as well as to the hiring and oversight of employees and subcontractors, who sometimes form long chains of external service providers. In addition, software providers can periodically and unilaterally change their contractual terms.
Using SaaS applications therefore comes with a considerable loss of control. The public body cannot influence the likelihood of an infringement of fundamental rights; it can only reduce the severity of potential violations by not disclosing sensitive data outside its own sphere of control.
As for data subject to a legal duty of secrecy, there is sometimes considerable legal uncertainty about the extent to which it may be transferred to cloud computing services. Not every third party can be engaged as an auxiliary simply because the criminal-law provisions on professional and official secrecy also bind the auxiliaries of secret holders to silence.
Under the CLOUD Act adopted in 2018, US providers can be compelled to hand over their customers' data to US authorities without following the rules of international mutual legal assistance, even if that data is stored in Swiss data centres.
Conclusion: public bodies may use international SaaS solutions for personal data that is sensitive or subject to a legal duty of secrecy only if the data is encrypted by the responsible body itself and the cloud computing provider has no access to the key.
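As a purely illustrative sketch of what that conclusion's requirement means in practice - encryption performed by the responsible public body, with the key never leaving its control - here is a minimal Node.js example (a real deployment would additionally need key management, rotation, and handling of searchable fields and metadata):

// Minimal sketch: encrypt a record locally before it is sent to any SaaS provider.
// The key stays with the public body; the provider only ever receives ciphertext.
const crypto = require('crypto');

function encryptRecord(plaintext, key) {
  const iv = crypto.randomBytes(12);                               // unique nonce per record
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();                                 // integrity/authenticity tag
  return Buffer.concat([iv, tag, ciphertext]).toString('base64');  // only this leaves the body's control
}

const key = crypto.randomBytes(32);  // in practice: generated and held in the body's own HSM/KMS
console.log(encryptRecord('case file 42 - data subject to official secrecy', key));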
mixpanel.com
SMS security incident
In the interest of transparency and our desire to share with our community, this blog post contains key information about a recent security incident that impacted a limited number of our customers. On November 8th, 2025, Mixpanel detected a smishing campaign and promptly executed our incident response processes. We took comprehensive steps to contain and eradicate the unauthorized access and to secure impacted user accounts. We engaged external cybersecurity partners to remediate and respond to the incident.
We proactively communicated with all impacted customers. If you have not heard from us directly, you were not impacted. We continue to prioritize security as a core tenet of our company, products and services. We are committed to supporting our customers and to communicating transparently about this incident.
What we did in response