cybernews.com
Paulina Okunytė - Journalist
Published: 29 September 2025
Last updated: 29 September 2025
An EU privacy watchdog has filed a complaint against an AI company for selling creepy “reputation reports” built from anyone's sensitive information scraped online.
Noyb, a non-profit organization that enforces data protection and privacy rights in Europe, has filed a complaint against a Lithuania-based AI company.
According to the complaint, the company has been scraping social media data and compiling reports that include personality traits, conversation tips, photos taken from internet sources, religious beliefs, alcohol consumption, toxic behaviour, and negative press, and that flag people for “dangerous political content” or “sexual nudity.”
Whitebridge AI markets its “reputation reports” as a way to “find everything about you online.”
The company’s ads seem to target the people it profiles, using slogans like “this is kinda scary” and “check your own data.” However, anyone willing to pay for a report could get information about a profiled person without informing them.
“Whitebridge AI just has a very shady business model aimed at scaring people into paying for their own, unlawfully collected data. Under EU law, people have the right to access their own data for free,” said Lisa Steinfeld, data protection lawyer at noyb.
When complainants represented by the NGO asked to see their reports, they got nowhere until noyb bought the reports themselves.
According to the noyb representatives, who downloaded the reports, the outputs are largely of low quality and seem to be randomly generated AI texts based on “unlawfully scraped online data.”
Some of the complainants’ reports contained false warnings for “sexual nudity” and “dangerous political content,” which are considered specially protected sensitive data under Article 9 of the GDPR.
In its privacy notice, Whitebridge claims that scraping user data is legal thanks to its “freedom to conduct a business.”
The company claims to only process data from “publicly available sources.”
According to the noyb representative, most of this data is taken from social network pages that are not indexed or found on search engines. The law states that entering information on a social networking application does not constitute making it “manifestly public.”
Under GDPR, any individual can request information about their data and ask for removal. Both complainants that noyb represents filed an access request under Article 15 GDPR, but didn’t receive the desired response from Whitebridge.ai.
When the complainants asked for corrections, Whitebridge demanded a qualified electronic signature. Such a requirement is not found anywhere in EU law, states noyb.
The watchdog demands that Whitebridge comply with the complainants’ access requests and fix the false data in the reports on them.
“We also request the company to comply with its information obligations, to stop all illegal processing, and to notify the complainants of the outcome of a rectification process. Last but not least, we suggest that the authority impose a fine to prevent similar violations in the future,” wrote noyb in the statement.
Cybernews reached out to Whitebridge.ai for comment but had not received a response by the time of publication. We will update the article when we receive one.
fortune.com
By Amanda Gerut
News Editor, West Coast
October 4, 2025 at 5:33 AM EDT
Using AI to create fake identities, they get remote jobs, then hide in plain sight—in Slack, on Zooms, and in corporate infrastructure.
But at a cybersecurity conference in Las Vegas this August, an analyst wearing a black hoodie and dark glasses who goes by “SttyK” broke some disappointing news to a packed crowd of researchers, executives, and government employees: That trick no longer works. “Do not [ask why] Kim Jong-un is so fat,” SttyK warned in all-caps on a presentation slide. “They all notice what you guys have noticed and improved their opsec [operation security].”
It might sound far-fetched—like the plot of a Cold War–era spy movie—but the scheme is all too real, according to the FBI and other agencies, as well as the UN, cybersecurity investigators, and nonprofits: Thousands of North Korean men trained in information technology are stealing identities, falsifying their résumés, and deceiving their way into highly paid remote tech jobs in the U.S. and other wealthy countries, using artificial intelligence to fabricate work and veil their faces and identities.
In violation of international sanctions, the scam has pried open a gusher of cash for Kim’s government, which confiscates most of the IT workers’ salaries. The FBI estimates that the program has funneled anywhere from hundreds of millions to $1 billion to the authoritarian regime in the past five years, funding Kim’s ambition of building the Democratic People’s Republic of Korea (DPRK) into a nuclear-armed force.
The afflicted include hundreds of Fortune 500 businesses, aerospace manufacturers, and U.S. financial institutions ranging from banks to tiny crypto startups, says the FBI. The North Korean workers also take on freelance gigs and subcontracting: They have posed as HVAC specialists, engineers, and architects, spinning up blueprints and municipal approvals with the help of AI.
Companies across Europe, as well as Saudi Arabia and Australia, have also been targeted. Government officials and cybersecurity investigators from the U.S., Japan, and South Korea met in Tokyo in late August to forge stronger collaborative ties to counter the incursions.
The scheme is one of the most spectacular international fraud enterprises in history, and it creates layer upon layer of risks for companies that fall for it. First, there’s the corporate security danger posed by agents of a foreign government being within a company’s internal systems.
Then there’s the legal risk that comes with violating sanctions against North Korea, even if unintentionally. U.S. and international sanctions are intended to isolate and punish the bellicose rogue state, and violations can jeopardize national security for the U.S. and its allies, according to the FBI. “This is a code red,” said U.S. Attorney for D.C. Jeanine Pirro at a press conference in July. “Your tech sectors are being infiltrated by North Korea. And when big companies are lax and they’re not doing their due diligence, they are putting America’s security at risk.”
Companies also must confront the distressing possibility that an employee—perhaps even one making a six-figure salary—could be laboring under conditions that one South Korea–based NGO has called “comparable to modern slavery.”
That’s because the North Korean men (and they are all men) who are perpetrating these deceptions are also, in a sense, victims of the brutal regime: They are separated from their families and trafficked to offshore sites to do the remote IT work, and they face the prospect of beatings, imprisonment, threats to their loved ones, and other human rights violations if they fail to make enough money for the North Korean government.
“The Call is Coming from Inside the House”
This covert weaponization of the tech-dependent global economy has ensnared every industry and company size. But it has proved incredibly difficult to find and prosecute members of this shadow workforce among the U.S.’s 6 million tech and IT employees. Those tracking the scheme say that agents hide in plain sight in the IT and tech departments of American companies: writing and testing code, discussing bugs, updating deliverables, and even joining video scrums and chatting via Slack. Over the past 12 months, the scheme has proliferated further, with a 220% worldwide increase in intrusions into companies, according to cybersecurity firm CrowdStrike.
Here’s how the international scam often works: North Korean workers, many living in four- or five-man clusters in China or Russia, use AI to create unique personas based on real, verified identities to evade background checks and other standard security measures. Sometimes they buy these identities from Americans, and other times they steal them outright. They craft detailed LinkedIn profiles, topped with a headshot—usually manipulated—with work histories and technical certifications.
“If this happened to these big banks, to these Fortune 500 companies, it can or is happening at your company.”
U.S. Attorney for D.C. Jeanine Pirro
Paid co-conspirators in the U.S. and elsewhere physically hold on to the fraudulent workers’ company laptops and turn them on each morning so that the agents can remotely access them from other locations. The FBI has raided dozens of these sites, known as “laptop farms,” across the U.S., said CrowdStrike’s counter-adversary VP Adam Meyers. And now they’re popping up overseas. “We’ve seen the operations all over,” said Meyers, “ranging from Western Europe all across to Romania and Poland.”
The broad and decentralized program, with work camps largely based in countries where there is little international cooperation among law enforcement, has so far been a frustrating game of Whac-a-Mole for law enforcement agencies, which have arrested only lower-level accomplices. “Both the Chinese and Russian governments are aware these IT workers are actively defrauding and victimizing Americans,” an FBI spokesman told Fortune. “The Chinese and Russian governments are not enforcing sanctions against these individuals operating in their country.”
Reputational risk from the intrusions has kept targeted companies largely silent so far, although federal agencies including the Department of Justice, FBI, and State Department have jointly issued dozens of public warnings to executives without naming the specific companies that have been impacted. One exception is the sneaker and apparel giant Nike, which identified itself as a victim of the scheme after discovering it had hired a North Korean operative who worked for the company in 2021 and 2022. Nike did not respond to multiple requests for comment.
“There are probably, today, somewhere between 1,000 and 10,000 fake employees working for companies around the world,” said Roger Grimes, an expert in the North Korean IT worker scheme with cybersecurity firm KnowBe4. “Most of the companies don’t talk about it when it happens—but they reach out secretly.” Grimes estimates he has spoken with executives from 50 to 75 companies that have unknowingly hired North Koreans. Even his own company is not immune: KnowBe4 last year disclosed that it unwittingly hired a North Korean worker who doctored a photo with AI and used a stolen identity.
A panel of experts convened by the UN to assess compliance with sanctions against North Korea estimates that the IT worker scheme generates between $250 million and $600 million in revenue annually from workers who transfer their earnings to the regime. The panel reported last year that IT workers in the scheme are expected to earn at least $100,000 annually. The highest earners make between $15,000 and $60,000 a month and are allowed to keep 30% of their salaries. The lowest can only keep 10%.
Businesses that hire these workers—even unintentionally—are violating regulatory and financial sanctions, which creates legal liability if U.S. law enforcement ever opts to charge companies. “The call is coming from inside the house,” said Pirro at the July press conference. “If this happened to these big banks, to these Fortune 500, brand-name, quintessential American companies, it can or is happening at your company. Corporations failing to verify virtual employees pose a security risk for all.”
She continued, speaking directly to American companies: “You are the first line of defense against the North Korean threat.”
The Motivation and the Impact
The growing awareness of the North Korean IT worker scheme has raised alarms in recent years, but its roots go back decades. A DPRK nuclear test in 2006 led to the UN’s Security Council imposing comprehensive sanctions that year, and then expanding those sanctions in 2017 to prohibit trade and ban companies from employing North Korean workers.
President Donald Trump signed into law further U.S. sanctions on North Korea during his first term. The law, the “Countering America’s Adversaries Through Sanctions Act,” establishes a presumption that any goods made anywhere in the world by North Korean workers are the products of “forced labor” and are forbidden from entering the U.S.
Starved of cash by international sanctions, the regime began sending agents overseas to earn money in various industries, including construction, fishing, and cigarette smuggling. They eventually moved into the lucrative field of tech. Then, when businesses turned to remote work during the pandemic, the IT scheme took off, explained cybersecurity firm DTEX Systems lead investigator Michael “Barni” Barnhart.
The IT operation functions separately from North Korea’s army of malicious hackers, who focus on ransomware and crypto heists, although cybersecurity experts believe the two teams are yoked closely enough to share intelligence and work in tandem.
Grimes is often surprised by the audacity of the IT deceptions, he said. In one instance, he told Fortune, a company thought it had hired three people, but they were actually just a single North Korean man managing three personas. He had successfully used the same photo to apply to multiple jobs but altered it to make each image slightly different—long hair, short hair, and three different names. “Once you see it, it’s so obvious what they’ve done,” said Grimes. “It takes a lot of…I’m trying to think of a better term than ‘balls,’ but it takes a lot of balls to use the same picture.”
For recruiters, inconsistencies—like candidates who claim to hail from Texas, but speak with Korean accents and seem to know nothing about their home state—are sometimes chalked up initially to cultural differences, Grimes said. But once companies are alerted to the conspiracy, it quickly becomes clear who the fraudulent hires are.
As the scheme has become more publicly known over the past couple of years, the FBI described to Fortune an escalating desperation among the workers and a shift in tactics: There have been more attempts to steal intellectual property and data when workers are discovered and fired.
Investigators recently identified a new evolution in the operational structure, which further conceals the North Korean IT workers. They’re subcontracting out more of the actual labor to developers based in India and Pakistan, investigator Evan Gordenker of incident response firm Palo Alto Networks explained. This creates what Gordenker described as a “Matryoshka doll” effect—a proxy between the North Koreans and the company paying them, and another layer of subterfuge that makes it even harder to find the culprits.
“What they’ve found is that it’s actually fairly cheap to find someone of a similar-ish skill set in Pakistan and India,” said Gordenker. It’s an alarming sign of the criminal enterprise’s success, he added: The North Korean fraudsters are so overwhelmed with work that they need to pass some of it off.
The Recruitment of American Accomplices
One ex-North Korean IT worker who communicated via email with Fortune escaped after years inside the scheme. He lives under the alias Kim Ji-min to prevent retaliation against his family still in North Korea.
His method was to use Facebook, LinkedIn, and Upwork to pose as someone looking to hire help for a software project, he explained in an email interview facilitated and translated by PSCORE, a South Korea–based NGO that has worked with thousands of North Korean refugees. When engineers and developers responded to his listings, Kim would steal their identities and use them to apply for tech jobs. He was hired to work on e-commerce websites and in software development for a health care app, he said, though he declined to name the companies he worked for: “They had no idea we were from North Korea.”
IT workers also hang out on Discord and Reddit to create relationships with freelancers and those looking to make extra cash, particularly in the “r/overemployed” subreddit, said Gordenker. The pitch is typically simple but effective, he said: “It’s usually like, ‘I’m a Japanese developer. I’m looking to get established in the United States, and I’m looking for someone to serve as the face of my company in that country. Would you be willing to, for 200 bucks a week?’” From there, the IT workers ask the person to upload photos of their ID. Sometimes it takes only five minutes. “Some people are sort of like, ‘Oh, $200 bucks a week? Yeah. Sign me up, absolutely,’” said Gordenker. “It’s stunningly easy.”
A Maryland man, Minh Phuong Ngoc Vong, pleaded guilty in April to charges that he allowed North Korean workers to use his identity to get 13 different jobs. Court records show that he offered up his driver’s license and personal details after being approached on a video game.
The recruitment tactics can be predatory: The scheme often targets people who are down on their luck, promising them easy money for picking up a laptop or submitting to a urinalysis to pass a drug test. “They will recruit people from recovering gambling addict forums and things like that where people have debt,” Gordenker said. “They need the money badly, and that creates leverage.”
Security investigator Aidan Raney, who posed as a willing American accomplice to the scheme, learned other operational details. The agents who recruited Raney spiced up his résumé with fabricated roles at companies, and turned his headshot into a black-and-white photo so it would look different from his real LinkedIn headshot. Raney corresponded with three or four workers who all called themselves “Ben,” and the Bens submitted his details to recruiters to land him the job interviews.
“They handle essentially all the work,” said Raney, founder and CEO of security firm Farnsworth Intelligence. “What they were trying to do was use my real identity to bypass background checks and things like that, and they wanted it to be extremely close to my real-life identity.”
Sometimes the work of the American accomplice is more involved: An operation in the suburbs of Phoenix facilitated by one woman, Christina Chapman, helped North Koreans fraudulently obtain jobs at 311 companies and earned the workers $17.1 million in salaries and bonuses, according to the Department of Justice’s 2024 indictment of Chapman. The operation was the biggest laptop farm busted so far, by revenue. North Koreans used 68 stolen identities to get work, and Chapman helped them dial in remotely for interviews and calls. Chapman’s cut totaled about $177,000, prosecutors said. After pleading guilty, she was sentenced to 8.5 years in prison for her role and ordered to forfeit her earnings and pay fines worth more than she ever made in the scheme.
Nike was one of the companies that hired an IT worker in Chapman’s network, according to a victim impact statement the company filed before her sentencing. Nike paid about $75,000 to the unnamed worker over the course of five months, the letter states. “The defendant’s decision to obtain employment through Nike, via identity theft, and subsequently launder earnings to foreign state actors, was not only a violation of law—it was a betrayal of trust,” Chris Gharst, Nike’s director of global investigations, wrote to the judge. “The incident required us to expend valuable time and resources on internal investigations.”
Criminals or victims?
Law enforcement agencies and cybersecurity investigators have tracked participants in the North Korean IT worker scheme, but so far only low-level accomplices have been arrested and charged in the U.S. The workers use artificial intelligence and stolen or purchased IDs to craft fake résumés and LinkedIn pages to apply for remote jobs. Some of their names are believed to be aliases.
AI has breathed even more life into the operation. An August 2025 report from Anthropic revealed that North Korean agents had leveraged its Claude AI assistant to prep for interviews and get jobs in development and programming. “The most striking finding is the actors’ complete dependency on AI to function in technical roles,” the report states. “These operators do not appear to be able to write code, debug programs, or even communicate professionally without Claude’s assistance.”
The scam is alarming for the companies targeted, but the North Korean laborers themselves are much worse off, according to PSCORE secretary-general Bada Nam. Failure to meet monthly earnings quotas results in degradation, beatings, or worse—being forced back to North Korea where the workers and their families face prison, labor camps, and abuse. The consistent access to food outside of famine-ravaged North Korea might be more desirable than in-country work assignments, but the intense competition and humiliation workers face if they don’t excel has driven some to suicide, Nam said. “Because of this system, [we] view these workers not simply as perpetrators of fraud or deception, but also as victims of forced labor and human rights violations,” said Nam. “Their situation is comparable to modern slavery. Just as global consumers have become more attentive to supply chains in order to avoid supporting child labor, we believe a similar awareness is needed regarding North Korean IT workers.”
Those pursuing and trying to expose the scale and impact of this grift include the Las Vegas conference speaker SttyK, who is in his twenties and based in Japan. He is part of a secretive network of investigators who track North Korean operatives, producing research that’s used by large cybersecurity firms. The community has learned a lot from files and manuals mistakenly uploaded without password protection to the open cloud-based tech platform GitHub, which explain how to fraudulently get a remote tech job. SttyK and his research partners have also been aided by at least one secret informant involved in the scheme.
The GitHub trove shows that there are some cultural clues to watch for, SttyK told Fortune: The North Koreans prefer British to American English in translations; they use excessive amounts of exclamation marks and heart emojis in emails; and they really love the animated comedy franchise Minions, often using images from the films as their avatars. The IT workers use Slack to communicate among themselves, and SttyK showed a message from a North Korean boss reminding teams to work at least 14 hours a day. They log in six days a week, and on their day off, the workers play volleyball, diligently recording the winners and losers in spreadsheets, the GitHub files revealed.
There are no hard-and-fast rules to the scheme, said Grimes, and the quality of the work varies significantly: Some North Koreans achieve standout job performance, leveraging it so they can recommend friends or even themselves under another identity for new roles. Others only want to get their first few paychecks before they get fired for doing poor work or not showing up. “There isn’t one way of doing things,” said Grimes. “Different teams farm out the work in different ways.”
The Perpetrators as Victims Themselves
Ironically, perhaps, the harshness of the system may actually make the agents attractive hires for U.S. companies: These are tech workers who don’t complain, take personal days, or ask for mental health breaks. Indeed, beneath the sprawling scheme lies an uncomfortable truth: The modern economy prizes efficiency, productivity, and results. And North Korean IT workers are leaning in on those tenets.
In job interviews the North Koreans give the impression they love work and don’t mind 12-hour days, Grimes said. Executives at victimized companies have sometimes said the North Koreans were their best employees. This unflagging work ethic dovetails with preconceptions about Asian immigrants’ industriousness, and often outweighs the red flags that should raise alarms. “People tell themselves all sorts of stories” to rationalize inconsistencies, said Grimes. “It’s interesting human behavior.”
Mick Baccio, president of the cybersecurity nonprofit Thrunt, went a step further, suggesting that the North Koreans infiltrating American organizations may exploit employers’ inability to distinguish between different Asian ethnic groups. “Many companies have a very Western, U.S.-centric view on the problem,” he said. “I’m half Thai and it’s hard for some people to distinguish that…It’s not malicious.”
On the North Korean side, the longtime success of the scheme relies upon complete fidelity to leadership that the regime programs into citizens from a young age, said Hyun-Seung Lee, a defector who escaped North Korea 10 years ago and knew some of the IT workers in an earlier iteration of the scheme. Lee said that asking candidates to insult Kim may actually still work to expose some agents. Even now, after all these years, Lee finds he still has an emotional reaction to hearing such a thing, he said—and IT workers could be similarly affected.
“They believe that it is their fate, their responsibility, to be loyal to the regime,” said Lee. “And they’re trying to survive.”
A hub for fraud in Arizona
Christina Chapman pleaded guilty to charges related to her role in running a “laptop farm” for the North Korean scheme in the suburbs of Phoenix. Here’s what it looked like, according to the Department of Justice indictment.
68: Stolen identities
311: Companies scammed
$17.1 million: Salaries and bonuses transmitted to North Korea
$177,000: Chapman’s earnings for her part in the scheme
This article appears in the October/November 2025 issue of Fortune with the headline “Espionage enters the chat.”
status.salesforce.com ID# 20000224
Published 5:58 pm CEST, Oct 02 2025 · Last updated 5:58 pm CEST, Oct 02 2025
Security Advisory: Ongoing Response to Social Engineering Threats
We are aware of recent extortion attempts by threat actors, which we have investigated in partnership with external experts and authorities. Our findings indicate these attempts relate to past or unsubstantiated incidents, and we remain engaged with affected customers to provide support. At this time, there is no indication that the Salesforce platform has been compromised, nor is this activity related to any known vulnerability in our technology.
We understand how concerning these situations can be. Protecting customer environments and data remains our top priority, and our security teams are fully engaged to provide guidance and support. As we continue to monitor the situation, we encourage customers to remain vigilant against phishing and social engineering attempts, which remain common tactics for threat actors.
For detailed guidance, please review our blog post on protecting against social engineering (https://www.salesforce.com/blog/protect-against-social-engineering) and reach out through the Salesforce Help portal if you need support.
thedrive.com Byron Hurd
Published Oct 2, 2025 10:07 AM
And to be clear, it's not just $1—it's $1 divided into four $0.25 credits.
A while back, I got an email letting me know I was eligible to be part of a class-action lawsuit against ParkMobile—one of the many self-service mobile parking apps now available just about anywhere municipal parking is worth monetizing. Seems that it did a very fashionable thing and allegedly let a lot of somebodies get access to protected customer data. As with most things like this, I thought nothing of it. Class-action payouts are often paltry at best, and insulting at worst. After following the required steps to become part of the class, I shoved the email into a folder somewhere in the dark recesses of Gmail and promptly forgot about it until this week, when I received an email notifying me of the settlement…and my $1.00 payout.
It’s important to note up front that I don’t use this particular app a lot—we’re talking about a single-digit number of transactions here. If I were a frequent flyer, so to speak, I likely wouldn’t have been so dismissive of the suit to begin with. And apparently, users who elected to take the cash payment option were eligible for up to $25—a potentially life-changing amount for the many 1920s street urchins still taking up many of America’s parking spaces.
Seriously, though—a dollar? And not even a check for a dollar, but a credit. How is this worth anybody’s time? As usual, the answer is in the fine print. See for yourself:
You’re reading that correctly. Not only is it a one-dollar credit, but I can only claim it in 25-cent increments by using ParkMobile’s services four times—something I’d probably have to go out of my way to do even once. In other words, to mitigate my inconvenience, for which ParkMobile claims no responsibility and was not found liable, the company is giving itself four more opportunities to earn my business.
Like a pat on the back, four times! Boy, do I feel compensated.
theins.ru
The Insider
2 October 2025 23:03
The hacker collective Black Mirror has released the first portion of an archive of documents from the Russian state defense corporation Rostec. The tranche contains more than 300 items. The materials detail Russia’s military and technical cooperation with foreign clients, pricing for military items, and logistics schemes aimed at evading sanctions. The published documents also include internal correspondence, presentations on overseas helicopter service centers, and agreements with international partners.
The files show that Russian companies have faced difficulties receiving payments for contracts with Algeria, Egypt, China, and India. Russian banks have been unable to issue guarantees or conduct transactions through the SWIFT system, forcing them to search for alternative settlement schemes in yuan, rubles, and euros.
The archive also contains information about an international network of service centers for Russian helicopter equipment. The documents describe existing and planned maintenance facilities in the UAE, Afghanistan, Vietnam, Bulgaria, Kazakhstan, and other countries. Particular attention is paid to the creation of an international regional logistics hub in Dubai, near Al Maktoum Airport, designed as a central node for supplying spare parts and components.
Among the materials is a letter from the Rostec holding company Concern Radio-Electronic Technologies (CRET) on pricing for military products in export contracts. The document proposes a simplified formula for setting wholesale prices, profit margins, transport expenses, and currency risks. It also discusses possible legal changes to allow more flexible use of revenues from military-technical cooperation.
The hackers said this is only the first portion of the Rostec archive, which they are releasing in what they called “fuck off exposure” mode. Black Mirror claims the documents include a list of “reliable trading partners” in several countries. These are said to have been approved by Russia’s Defense Ministry, the FSB, and the Foreign Intelligence Service (SVR) with the aim of reducing the risk of aviation and technical equipment being redirected to Ukraine through third countries.
In August, Telegram blocked Black Mirror’s channel. Attempts to access it displayed a notice that cited doxxing, defamation, and extortion as the reasons behind the ban. The Insider is not aware of the channel extorting money from anyone.
bbc.com
Josh Martin, business reporter
The carmaker says some of its customers' data has been stolen in a cyber-attack that targeted a third-party provider.
Renault UK has confirmed that some of its customers' data has been stolen in a cyber-attack that targeted a third-party data processing provider.
No customer financial data, such as passwords or bank account details, had been obtained, Renault said, but other personal data had been accessed and the carmaker warned customers to be vigilant.
The French-owned carmaker would not specify how many people could be affected "for ongoing security reasons" but said it did not anticipate any wider implications for the company, as none of Renault's own systems had been hacked.
It comes after rival Jaguar Land Rover and brewing giant Asahi both had production halted by cyber-attacks on their systems.
Renault UK said affected people would be notified and that victims of the hack may include a wider pool of people who had entered competitions or shared data with the car company, without purchasing a vehicle.
The carmaker said the data that had been accessed by the cyber-attack included some or all of: customer names, addresses, dates of birth, gender, phone number, vehicle identification numbers and vehicle registration details.
A Renault spokesperson said: "The third-party provider has confirmed this is an isolated incident which has been contained, and we are working with it to ensure that all appropriate actions are being taken. We have notified all relevant authorities.
"We are in the process of contacting all affected customers, advising them of the cyber-attack and reminding them to be cautious of any unsolicited requests for personal information," they added.
Jaguar Land Rover was recently forced to stop production and take a £1.5bn loan underwritten by the government after being targeted by hackers at the end of August.
Earlier this year, M&S and the Co-Op were both hit by cybersecurity breaches that disrupted supply chains and customer orders, and accessed the data of shoppers.
GMO Flatt Security Research - flatt.tech
Posted on October 3, 2025
Introduction
Hello, I’m RyotaK (@ryotkak), a security engineer at GMO Flatt Security Inc.
In May 2025, I participated in the Meta Bug Bounty Researcher Conference 2025. During this event, I discovered a vulnerability (CVE-2025-59489) in the Unity Runtime that affects games and applications built on Unity 2017.1 and later.
In this article, I will explain the technical aspects of this vulnerability and its impact.
This vulnerability was disclosed to Unity following responsible disclosure practices.
Unity has since released patches for Unity 2019.1 and later, as well as a Unity Binary Patch tool to address the issue, and I strongly encourage developers to download the updated versions of Unity, recompile affected games or applications, and republish as soon as possible.
For the official security advisory, please refer to Unity’s advisory here: https://unity.com/security/sept-2025-01
We appreciate Unity’s commitment to addressing this issue promptly and their ongoing efforts to enhance the security of their platform.
Security vulnerabilities are an inherent challenge in software development, and by working together as a community, we can continue to make software systems safer for everyone.
TL;DR
A vulnerability was identified in the Unity Runtime’s intent handling process for Unity games and applications.
This vulnerability allows malicious intents to control command line arguments passed to Unity applications, enabling attackers to load arbitrary shared libraries (.so files) and execute malicious code, depending on the platform.
In its default configuration, this vulnerability allowed malicious applications installed on the same device to hijack permissions granted to Unity applications.
In specific cases, the vulnerability could be exploited remotely to execute arbitrary code, although I didn’t investigate third-party Unity applications to find an app with the functionality required to enable this exploit.
Unity has addressed this issue and has updated all affected Unity versions starting with 2019.1. Developers are strongly encouraged to download them, recompile their games and applications, and republish to ensure their projects remain secure.
About Unity
Unity is a popular game engine used to develop games and applications for various platforms, including Android.
According to Unity’s website, 70% of top mobile games are built with Unity. This includes popular games like Among Us and Pokémon GO, along with many other applications that use Unity for development.
Technical Details
Note: During the analysis, I used Android 16.0 on the Android Emulator of Android Studio. The behavior and impact of this vulnerability may differ on older Android versions.
Unity’s Intent Handler
To support debugging Unity applications on Android devices, Unity automatically adds a handler for the intent containing the unity extra to the UnityPlayerActivity. This activity serves as the default entry point for applications and is exported to other applications.
https://docs.unity3d.com/6000.0/Documentation/Manual/android-custom-activity-command-line.html
adb shell am start -n "com.Company.MyGame/com.unity3d.player.UnityPlayerActivity" -e unity "-systemallocator"
As documented above, the unity extra is parsed as command line arguments for Unity.
While Android’s permission model manages feature access by granting permissions to applications, it does not restrict which intents can be sent to an application.
This means any application can send the unity extra to a Unity application, allowing attackers to control the command line arguments passed to that application.
xrsdk-pre-init-library Command Line Argument
After loading the Unity Runtime binary into Ghidra, I discovered the following command line argument:
initLibPath = FUN_00272540(uVar5, "xrsdk-pre-init-library");
The value of this command line argument is later passed to dlopen, causing the path specified in xrsdk-pre-init-library to be loaded as a native library.
lVar2 = dlopen(initLibPath, 2);
This behavior allows attackers to execute arbitrary code within the context of a Unity application, leveraging its permissions, by launching it with the -xrsdk-pre-init-library argument.
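As a rough illustration of the attack primitive (this is not Unity's actual parser; the helper name is mine), the flow from an attacker-controlled extra string to a dlopen-able path can be sketched as:

```python
import shlex

def parse_unity_extra(extra: str) -> dict:
    """Roughly mimic splitting the 'unity' intent extra into command-line
    arguments and pairing each flag with the value that follows it."""
    tokens = shlex.split(extra)
    args, i = {}, 0
    while i < len(tokens):
        if tokens[i].startswith("-"):
            key = tokens[i].lstrip("-")
            # A flag like xrsdk-pre-init-library consumes the next token as its value
            if i + 1 < len(tokens) and not tokens[i + 1].startswith("-"):
                args[key] = tokens[i + 1]
                i += 2
                continue
            args[key] = True
        i += 1
    return args

# Any co-installed app can supply this string via the intent's 'unity' extra
args = parse_unity_extra("-xrsdk-pre-init-library /data/local/tmp/malicious.so")
print(args["xrsdk-pre-init-library"])  # the path the runtime would hand to dlopen
```

The point of the sketch is that nothing in this pipeline validates the path: whatever the sending application places in the extra ends up as the argument to dlopen.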
Attack Scenarios
Local Attack
Any malicious application installed on the same device can exploit this vulnerability by:
Extracting the native library with the android:extractNativeLibs attribute set to true in the AndroidManifest.xml
Launching the Unity application with the -xrsdk-pre-init-library argument pointing to the malicious library
The Unity application would then load and execute the malicious code with its own permissions
Remote Exploitation via Browser
In specific cases, this vulnerability could potentially be exploited remotely, although certain conditions must be met.
For example, if an application exports UnityPlayerActivity or UnityPlayerGameActivity with the android.intent.category.BROWSABLE category (allowing browser launches), websites can specify extras passed to the activity using intent URLs:
intent:#Intent;package=com.example.unitygame;scheme=custom-scheme;S.unity=-xrsdk-pre-init-library%20/data/local/tmp/malicious.so;end;
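To see where the %20 in that URL comes from, here is a small sketch (the package name and scheme are the article's placeholders) that assembles the same intent: URL from its parts:

```python
from urllib.parse import quote

def build_intent_url(package: str, scheme: str, unity_extra: str) -> str:
    # String extras travel as S.<name>=<url-encoded value> in intent: URLs
    return (f"intent:#Intent;package={package};scheme={scheme};"
            f"S.unity={quote(unity_extra)};end;")

url = build_intent_url("com.example.unitygame", "custom-scheme",
                       "-xrsdk-pre-init-library /data/local/tmp/malicious.so")
print(url)
# intent:#Intent;package=com.example.unitygame;scheme=custom-scheme;S.unity=-xrsdk-pre-init-library%20/data/local/tmp/malicious.so;end;
```

A browser that resolves such a URL delivers the decoded string to the activity as an ordinary extra, just as if a local app had sent it.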
At first glance, it might appear that malicious websites could exploit this vulnerability by forcing browsers to download .so files and load them via the xrsdk-pre-init-library argument.
SELinux Restrictions
However, Android’s strict SELinux policy prevents dlopen from opening files in the downloads directory, which mitigates almost all remote exploitation scenarios.
library "/sdcard/Download/libtest.so" ("/storage/emulated/0/Download/libtest.so") needed or dlopened by "/data/app/~~24UwD8jnw7asNjRwx1MOBg==/com.DefaultCompany.com.unity.template.mobile2D-E043IptGJDwcTqq56BocIA==/lib/arm64/libunity.so" is not accessible for the namespace: [name="clns-9", ld_library_paths="", default_library_paths="/data/app/~~24UwD8jnw7asNjRwx1MOBg==/com.DefaultCompany.com.unity.template.mobile2D-E043IptGJDwcTqq56BocIA==/lib/arm64:/data/app/~~24UwD8jnw7asNjRwx1MOBg==/com.DefaultCompany.com.unity.template.mobile2D-E043IptGJDwcTqq56BocIA==/base.apk!/lib/arm64-v8a", permitted_paths="/data:/mnt/expand:/data/data/com.DefaultCompany.com.unity.template.mobile2D"]
That being said, since the /data/ directory is included in permitted_paths, if the target application writes files to its private storage, it can be used to bypass this restriction.
Furthermore, dlopen doesn’t require the .so file extension. If attackers can control the content of a file in an application’s private storage, they can exploit this vulnerability by creating a file containing a malicious native library binary. This is a common pattern when applications cache data.
For example, another vulnerability in Messenger was exploited using the application’s cache: https://www.hexacon.fr/slides/Calvanno-Defense_through_Offense_Building_a_1-click_Exploit_Targeting_Messenger_for_Android.pdf
Requirements for Remote Exploitation
To exploit this vulnerability remotely, the following conditions must be met:
The application exports UnityPlayerActivity or UnityPlayerGameActivity with the android.intent.category.BROWSABLE category
The application writes files with attacker-controlled content to its private storage (e.g., through caching)
Even without these conditions, local exploitation remains possible for any Unity application.
Conclusion
In this article, I explained a vulnerability in Unity Runtime that allows arbitrary code execution in almost all Unity applications on Android.
I hope this article helps you understand that vulnerabilities can exist in the frameworks and libraries you depend on, and you should always be mindful of the security implications of the features you use.
The newly formed cybercrime alliance, “Scattered LAPSUS$ Hunters,” has launched a new website detailing its claims of a massive data breach affecting Salesforce and its extensive customer base. This development is the latest move by the group, a notorious collaboration between members of the established threat actor crews ShinyHunters, Scattered Spider, and LAPSUS$. On their new site, the group is extorting Salesforce directly, threatening to leak nearly one billion records with a ransom deadline of October 10, 2025.
This situation stems from a widespread and coordinated campaign that targeted Salesforce customers throughout mid-2025. According to security researchers, the attacks did not exploit a vulnerability in Salesforce’s core platform. Instead, the threat actors, particularly those from the Scattered Spider group, employed sophisticated social engineering tactics.
The primary method involved voice phishing (vishing), where attackers impersonated corporate IT or help desk staff in phone calls to employees of target companies. These employees were then manipulated into authorizing malicious third-party applications within their company’s Salesforce environment. This action granted the attackers persistent access tokens (OAuth), allowing them to bypass multi-factor authentication and exfiltrate vast amounts of data. The alliance has now consolidated the data from these numerous breaches for this large-scale extortion attempt against Salesforce itself.
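The reason a stolen OAuth token sidesteps multi-factor authentication is that subsequent API calls present only the token, never a password or second factor. A minimal sketch of this mechanic (the API version, token value, and query below are illustrative placeholders of mine, not details from the reporting):

```python
def build_query_request(instance_url: str, access_token: str, soql: str) -> dict:
    """Build a Salesforce-style REST query request: the bearer token alone
    authenticates it, so MFA is never re-prompted once a token is issued."""
    return {
        "method": "GET",
        "url": f"{instance_url}/services/data/v58.0/query",
        "params": {"q": soql},
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

req = build_query_request("https://example.my.salesforce.com",
                          "EXAMPLE_TOKEN",  # token issued to the rogue connected app
                          "SELECT Name, Email FROM Contact")
print(req["headers"]["Authorization"])  # Bearer EXAMPLE_TOKEN
```

This is why revoking the malicious connected app's tokens, not resetting passwords, is the remediation step that actually cuts off access.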
The website lists dozens of high-profile Salesforce customers allegedly compromised in the campaign. The list of alleged victims posted by the group includes:
Toyota Motor Corporation (🇯🇵): A multinational automotive manufacturer.
FedEx (🇺🇸): A global courier delivery services company.
Disney/Hulu (🇺🇸): A multinational mass media and entertainment conglomerate.
Republic Services (🇺🇸): An American waste disposal company.
UPS (🇺🇸): A multinational shipping, receiving, and supply chain management company.
Aeroméxico (🇲🇽): The flag carrier airline of Mexico.
Home Depot (🇺🇸): The largest home improvement retailer in the United States.
Marriott (🇺🇸): A multinational company that operates, franchises, and licenses lodging.
Vietnam Airlines (🇻🇳): The flag carrier of Vietnam.
Walgreens (🇺🇸): An American company that operates the second-largest pharmacy store chain in the United States.
Stellantis (🇳🇱): A multinational automotive manufacturing corporation.
McDonald’s (🇺🇸): A multinational fast food chain.
KFC (🇺🇸): A fast food restaurant chain that specializes in fried chicken.
ASICS (🇯🇵): A Japanese multinational corporation which produces sportswear.
GAP, INC. (🇺🇸): A worldwide clothing and accessories retailer.
HMH (hmhco.com) (🇺🇸): A publisher of textbooks, instructional technology materials, and assessments.
Fujifilm (🇯🇵): A multinational photography and imaging company.
Instructure.com – Canvas (🇺🇸): An educational technology company.
Albertsons (Jewel Osco, etc) (🇺🇸): An American grocery company.
Engie Resources (Plymouth) (🇺🇸): A retail electricity provider.
Kering (🇫🇷): A global luxury group that manages brands like Gucci, Balenciaga, and Brioni.
HBO Max (🇺🇸): A subscription video on-demand service.
Instacart (🇺🇸): A grocery delivery and pick-up service.
Petco (🇺🇸): An American pet retailer.
Puma (🇩🇪): A German multinational corporation that designs and manufactures athletic footwear and apparel.
Cartier (🇫🇷): A French luxury goods conglomerate.
Adidas (🇩🇪): A multinational corporation that designs and manufactures shoes, clothing, and accessories.
TripleA (aaa.com) (🇺🇸): A federation of motor clubs throughout North America.
Qantas Airways (🇦🇺): The flag carrier of Australia.
CarMax (🇺🇸): A used vehicle retailer.
Saks Fifth (🇺🇸): An American luxury department store chain.
1-800Accountant (🇺🇸): A nationwide accounting firm.
Air France & KLM (🇫🇷/🇳🇱): A major European airline partnership.
Google Adsense (🇺🇸): A program run by Google through which website publishers serve advertisements.
Cisco (🇺🇸): A multinational digital communications technology conglomerate.
Pandora.net (🇩🇰): A Danish jewelry manufacturer and retailer.
TransUnion (🇺🇸): An American consumer credit reporting agency.
Chanel (🇫🇷): A French luxury fashion house.
IKEA (🇸🇪): A Swedish-founded multinational group that designs and sells ready-to-assemble furniture.
According to the actor, the breach involves nearly 1 billion records from Salesforce and its clients. The allegedly compromised data includes:
Sensitive Personally Identifiable Information (PII)
Strategic business records that could impact market position
Data from over 100 other demand instances hosted on Salesforce infrastructure
The Register
Mon 29 Sep 2025 // 08:01 UTC
by Danny Bradbury
Feature: Guess how much of our direct transatlantic data capacity runs through two cables in Bude?
The first transatlantic cable, laid in 1858, delivered a little over 700 messages before promptly dying a few weeks later. 167 years on, the undersea cables connecting the UK to the outside world process £220 billion in daily financial transactions. Now, the UK Parliament's Joint Committee on National Security Strategy (JCNSS) has told the government that it has to do a better job of protecting them.
The Committee's report, released on September 19, calls the government "too timid" in its approach to protecting the cables that snake from the UK to various destinations around the world. It warns that "security vulnerabilities abound" in the UK's undersea cable infrastructure, when even a simple anchor-drag can cause major damage.
There are 64 cables connecting the UK to the outside world, according to the report, carrying most of the country's internet traffic. Satellites can't shoulder the data volumes involved, are too expensive, and only account for around 5 percent of traffic globally.
These cables are invaluable to the UK economy, but they're also difficult to protect. They are heavily shielded in shallow waters close to their landing points, because accidental damage from fishing operations and other vessels is common there. On average, around 200 cables suffer faults each year. But further out, the shielding is less robust; instead, the companies that lay the cables rely on the depth of the sea to do the job (you'll be pleased to hear that sharks don't generally munch on them).
The report praises a strong cable infrastructure, and admits that in some areas at least we have the redundancy in the cable infrastructure to handle disruptions. For example, it notes that 75 percent of UK transatlantic traffic routes through two cables that come ashore in Bude, Cornwall. That seems like quite the vulnerability, but it acknowledges that we have plenty of infrastructure to route around if anything happened to them. There is "no imminent threat to the UK's national connectivity," it soothes.
But it simultaneously cautions against adopting what it describes as "business-as-usual" views in the industry. The government "focuses too much on having 'lots of cables' and pays insufficient attention to the system's actual ability to absorb unexpected shocks," it frets. It warns that "the impacts on connectivity would be much more serious," if onward connections to Europe suffered as part of a coordinated attack.
"While our national connectivity does not face immediate danger, we must prepare for the possibility that our cables can be threatened in the event of a security crisis," it says.
Reds on the sea bed
Who is the most likely to mount such an attack, if anyone? Russia seems front and center, according to experts. It has reportedly been studying the topic for years. Keir Giles, director at The Centre for International Cyber Conflict and senior consulting fellow of the Russia and Eurasia Programme at Chatham House, argues that Russia has a long history of information warfare that stepped up after it annexed Crimea in 2014.
"The thinking part of the Russian military suddenly decided 'actually, this information isolation is the way to go, because it appears to win wars for us without having to fight them'," Giles says, adding that this approach is often combined with choke holds on land-based information sources. Cutting off the population in the target area from any source of information other than what the Russian troops feed them achieves results at low cost.
In a 2021 paper he co-wrote for the NATO Cooperative Cyber Defence Centre of Excellence, he pointed to the Glavnoye upravleniye glubokovodnykh issledovaniy (Main Directorate for Deep-Water Research, or GUGI), a secretive Russian agency responsible for analyzing undersea cables for intelligence or disruption. According to the JCNSS report, this organization operates the Losharik, a titanium-hulled submarine capable of targeting cables at extreme depth.
Shenanigans under the sea
You don't need a fancy submarine to snag a cable, as long as you're prepared to do it in plain sight closer to the coast. The JCNSS report points to several incidents around the UK and the Baltics. November last year saw two incidents. In the first, Chinese-flagged cargo vessel Yi Peng 3 dragged its anchor for 300km and cut two cables between Sweden and Lithuania. That same month, the UK and Irish navies shadowed Yantar, a Russian research ship loitering around UK cable infrastructure in the Irish sea.
The following month saw Cook Islands-flagged ship Eagle S damage one power cable and three data cables linking Finland and Estonia. This May, unaffiliated vessel Jaguar approached an undersea cable off Estonia and was escorted out of the country's waters.
The real problem with brute-force physical damage from vessels is that it's difficult to prove that it's intentional. On one hand, it's perfect for an aggressor's plausible deniability, and could also be a way to test the boundaries of what NATO is willing to tolerate. On the other, it could really be nothing.
"Attribution of sabotage to critical undersea infrastructure is difficult to prove, a situation significantly complicated by the prevalence of under-regulated and illegal shipping activities, sometimes referred to as the shadow fleet," a spokesperson for NATO told us.
"I'd push back on an assertion of a coordinated campaign," says Alan Mauldin, research director at TeleGeography, an analyst company that examines undersea cable infrastructure. He questions assumptions that the Baltic cable damage was anything other than a SNAFU.
The Washington Post also reported comment from officials on both sides of the Atlantic that the Baltic anchor-dragging was probably accidental. Giles scoffs at that. "Somebody had been working very hard to persuade countries across Europe that this sudden spate of cables being broken in the Baltic Sea, one after another, was all an accident, and they were trying to say that it's possible for ships to drag their anchors without noticing," he says.
One would hope that international governance frameworks could help. The UN Convention on the Law of the Sea [PDF] has a provision against messing with undersea cables, but many states haven't enacted the agreement. In any case, plausible deniability makes things more difficult.
"The main challenge in making meaningful governance reforms to secure submarine cables is figuring out what these could be. Making fishing or anchoring accidents illegal would be disproportionate," says Anniki Mikelsaar, doctoral researcher at Oxford University's Oxford Internet Institute. "As there might be some regulatory friction, regional frameworks could be a meaningful avenue to increase submarine cable security."
The difficulty in pinning down intent hasn't stopped NATO from stepping in. In January it launched Baltic Sentry, an initiative to protect undersea infrastructure in the region. That effort includes frigates, patrol aircraft, and naval drones to keep an eye on what happens both above and below the waves.
Preparing for the worst
Regardless of whether vessels are doing this deliberately or by accident, we have to be prepared for it, especially as cable installation shows no sign of slowing. Increasing bandwidth needs will boost global cable kilometers by 48 percent between now and 2040, says TeleGeography, which adds that annual repairs will rise 36 percent over the same period.
"Many cable maintenance ships are reaching the end of their design life cycle, so more investment into upgrading the fleets is needed. This is important to make repairs faster," says Mikelsaar.
There are 62 vessels capable of cable maintenance today, and TeleGeography predicts that'll be enough for the next 15 years. However, it takes time to build these vessels and train the operators, meaning that we'll need to start delivering new vessels soon.
The problem for the UK is that it doesn't own any of that repair capacity, says the JCNSS report. It can take a long time for a ship to travel to a cable and repair it, and each ship can only work on one cable at a time. The Committee advises that the UK acquire sovereign repair capacity, prescribing a repair ship by 2030.
"This could be leased to industry on favorable terms during peacetime and made available for Government use in a crisis," it says, adding that the Navy should establish a set of reservists that will be trained and ready to operate the vessel.
Sir Chris Bryant MP, the Minister for Data Protection and Telecoms, told the Committee that it was being apocalyptic and "over-egging the pudding" by examining the possibility of a coordinated attack. "We disagree," the Committee said in the report, arguing that the security situation in the next decade is uncertain.
"Focusing on fishing accidents and low-level sabotage is no longer good enough," the report adds. "The UK faces a strategic vulnerability in the event of hostilities. Publicly signaling tougher defensive preparations is vital, and may reduce the likelihood of adversaries mounting a sabotage effort in the first place."
To that end, it has made a battery of recommendations. These include building the risk of a coordinated campaign against undersea infrastructure into its risk scenarios, and protecting the stations - often in remote coastal locations - where the cables come onto land.
The report also recommends that the Department for Science, Innovation and Technology (DSIT) ensures all lead departments have detailed sector-by-sector technical impact studies addressing widespread cable outages.
"Government works around the clock to ensure our subsea cable infrastructure is resilient and can withstand hostile and non-hostile threats," DSIT told El Reg, adding that when breaks happen, the UK has some of the fastest cable repair times in the world and there's usually no noticeable disruption.
"Working with NATO and Joint Expeditionary Force allies, we're also ensuring hostile actors cannot operate undetected near UK or NATO waters," it added. "We're deploying new technologies, coordinating patrols, and leading initiatives like Nordic Warden alongside NATO's Baltic Sentry mission to track and counter undersea threats."
Nevertheless, some seem worried. Vili Lehdonvirta, head of the Digital Economic Security Lab (DIESL) and professor of Technology Policy at Aalto University, has noticed increased interest from governments and private sector organizations alike in how much their daily operations depend on overseas connectivity. He says that this likely plays into increased calls for digital sovereignty.
"The rapid increase in data localization laws around the world is partly explained by this desire for increased resilience," he says. "But situating data and workloads physically close as opposed to where it is economically efficient to run them (eg. because of cheaper electricity) comes with an economic cost."
So the good news is that we know exactly how vulnerable our undersea cables are. The bad news is that so does everyone else with a dodgy cargo ship and a good poker face. Sleep tight.
today.ucsd.edu UC San Diego
September 17, 2025
Story by:
Ioana Patringenaru - ipatrin@ucsd.edu
Study involving 19,500 UC San Diego Health employees evaluated the effectiveness of two different types of cybersecurity training
Cybersecurity training programs as implemented today by most large companies do little to reduce the risk that employees will fall for phishing scams–the practice of sending malicious emails posing as legitimate to get victims to share personal information, such as their social security numbers.
That’s the conclusion of a study evaluating the effectiveness of two different types of cybersecurity training during an eight-month, randomized controlled experiment. The experiment involved 10 different phishing email campaigns developed by the research team and sent to more than 19,500 employees at UC San Diego Health.
The team presented their research at the Black Hat conference, held Aug. 2 to 7 in Las Vegas. The team originally shared their work at the 46th IEEE Symposium on Security and Privacy in May in San Francisco.
Researchers found that there was no significant relationship between whether users had recently completed an annual, mandated cybersecurity training and the likelihood of falling for phishing emails. The team also examined the efficacy of embedded phishing training – the practice of sharing anti-phishing information after a user engages with a phishing email sent by their organization as a test. For this type of training, researchers found that the difference in failure rates between employees who had completed the training and those who did not was extremely low.
“Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks,” the researchers write.
Why is it important to combat phishing?
Whether phishing training is effective is an important question. In spite of 20 years of research and development into malicious email filtering techniques, a 2023 IBM study identifies phishing as the single largest source of successful cybersecurity breaches–16% overall, researchers write.
This threat is particularly challenging in the healthcare sector, where targeted data breaches have reached record highs. In 2023 alone, the U.S. Department of Health and Human Services (HHS) reported over 725 large data breach events, covering over 133 million health records, and 460 associated ransomware incidents.
As a result, it has become standard in many sectors to mandate both formal security training annually and to engage in unscheduled phishing exercises, in which employees are sent simulated phishing emails and then provided “embedded” training if they mistakenly click on the email’s links.
Researchers were trying to understand which of these types of training are most effective. It turns out, as currently administered, that none of them are.
Why are cybersecurity trainings not effective?
One reason the trainings are not effective is that the majority of people do not engage with the embedded training materials, said Grant Ho, study co-author and a faculty member at the University of Chicago, who did some of this work as a postdoctoral researcher at UC San Diego. Overall, 75% of users engaged with the embedded training materials for a minute or less. One-third immediately closed the embedded training page without engaging with the material at all.
“This does lend some suggestion that these trainings, in their current form, are not effective,” said Ariana Mirian, another paper co-author, who did the work as a Ph.D. student in the research group of UC San Diego computer science professors Stefan Savage and Geoff Voelker.
A study of 19,500 employees over eight months
To date, this is the largest study of the effectiveness of anti-phishing training, covering 19,500 employees at UC San Diego Health. In addition, it’s one of only two studies that used a randomized controlled trial method to determine whether employees would receive training, and what kind of phishing emails–or lures–they would receive.
After sending 10 different types of phishing emails over the course of eight months, the researchers found that embedded phishing training only reduced the likelihood of clicking on a phishing link by 2%. This is particularly striking given the expense in time and effort that these trainings require, the researchers note.
Researchers also found that more employees fell for the phishing emails as time went on. In the first month of the study, only 10% of employees clicked on a phishing link. By the eighth month, more than half had clicked on at least one phishing link.
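Those two figures are roughly consistent with a simple model in which each of the roughly monthly campaigns independently catches about 10% of employees (a back-of-the-envelope assumption of mine, not the study's analysis):

```python
p_per_campaign = 0.10   # share clicking in the first month's campaign
campaigns = 8           # roughly one campaign per month over the study

# Probability of clicking at least once across all campaigns,
# assuming independent exposures with a constant click rate
at_least_once = 1 - (1 - p_per_campaign) ** campaigns
print(f"{at_least_once:.1%}")  # 57.0% -- i.e., "more than half"
```

In other words, the cumulative result doesn't require employees to get worse over time; repeated exposure alone is enough to push the at-least-once rate past 50%.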
In addition, researchers found that some phishing emails were considerably more effective than others. For example, only 1.82% of recipients clicked on a phishing link to update their Outlook password. But 30.8% clicked on a link that purported to be an update to UC San Diego Health’s vacation policy.
Given the results of the study, researchers recommend that organizations refocus their efforts to combat phishing on technical countermeasures. Specifically, two measures would offer a better return on investment: hardware-based two-factor authentication, as well as password managers that only work on the correct domains, the researchers write.
This work was supported in part by funding from the University of California Office of the President “Be Smart About Safety” program–an effort focused on identifying best practices for reducing the frequency and severity of systemwide insurance losses. It was also supported in part by U.S. National Science Foundation grant CNS-2152644, the UCSD CSE Postdoctoral Fellows program, the Irwin Mark and Joan Klein Jacobs Chair in Information and Computer Science, the CSE Professorship in Internet Privacy and/or Internet Data Security, a generous gift from Google, and operational support from the UCSD Center for Networked Systems.
Korea JoongAng Daily
Wednesday, October 1, 2025
BY JEONG JAE-HONG [yoon.soyeon@joongang.co.kr]
A fire at the National Information Resources Service (NIRS)'s Daejeon headquarters destroyed the government’s G-Drive cloud storage system, erasing work files saved individually by some 750,000 civil servants, the Ministry of the Interior and Safety said Wednesday.
The fire broke out in the server room on the fifth floor of the center, damaging 96 information systems designated as critical to central government operations, including the G-Drive platform. The G-Drive has been in use since 2018, requiring government officials to store all work documents in the cloud instead of on personal computers. It provided around 30 gigabytes of storage per person.
However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
The scale of damage varies by agency. The Ministry of Personnel Management, which had mandated that all documents be stored exclusively on G-Drive, was hit hardest. The Office for Government Policy Coordination, which used the platform less extensively, suffered comparatively less damage.
The Personnel Ministry stated that all departments are expected to experience work disruptions. It is currently working to reconstruct lost data from any files saved locally on personal computers within the past month, along with emails, official documents, and printed records.
The Interior Ministry noted that official documents created through formal reporting or approval processes were also stored in the government’s Onnara system and may be recoverable once that system is restored.
“Final reports and official records submitted to the government are also stored in Onnara, so this is not a total loss,” said a director of public services at the Interior Ministry.
The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups. This vulnerability ultimately left it unprotected.
Criticism continues to build regarding the government's data management protocols.
Ars Technica, Dan Goodin – 30 September 2025, 22:25
The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.
In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (Trusted Execution Environments)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use them. Intel calls its protection SGX; AMD calls its SEV-SNP.
Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.
Attacking deterministic encryption
Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.
Deterministic encryption is well-suited for certain uses, such as full disk encryption, where the data being protected never changes once the thing being protected (in this case, the drive) falls into an attacker’s hands. The same encryption is suboptimal for protecting data flowing between a CPU and a memory chip because adversaries can observe the ciphertext each time the plaintext changes, opening the system to replay attacks and other well-known exploit techniques. Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.
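The distinction can be sketched with a toy model. This is not the actual SGX or SEV-SNP cipher; it is a hypothetical XOR-keystream construction for illustration only. A deterministic scheme keyed by nothing but the key, the address, and the plaintext repeats itself exactly, while a probabilistic scheme that mixes in a random nonce never does:

```python
import hashlib
import os

KEY = b"toy-memory-encryption-key"

def _keystream(tweak: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256(key || tweak), truncated to n bytes (n <= 32).
    return hashlib.sha256(KEY + tweak).digest()[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_deterministic(plaintext: bytes, address: int) -> bytes:
    # Ciphertext depends only on the key, the address, and the plaintext,
    # so the same write to the same address always looks identical on the bus.
    return _xor(plaintext, _keystream(address.to_bytes(8, "big"), len(plaintext)))

def encrypt_probabilistic(plaintext: bytes, address: int) -> tuple[bytes, bytes]:
    # A fresh random nonce makes every encryption of the same plaintext distinct;
    # the nonce must be stored alongside the ciphertext to allow decryption.
    nonce = os.urandom(16)
    ct = _xor(plaintext, _keystream(address.to_bytes(8, "big") + nonce, len(plaintext)))
    return nonce, ct

secret = b"attestation-key"
addr = 0x7F00
# Deterministic: repeated writes are indistinguishable -- and replayable.
assert encrypt_deterministic(secret, addr) == encrypt_deterministic(secret, addr)
# Probabilistic: the same write produces different ciphertext every time.
assert encrypt_probabilistic(secret, addr)[1] != encrypt_probabilistic(secret, addr)[1]
```

The repeatability of the deterministic scheme is exactly what an interposer on the memory bus can observe and exploit.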
“Fundamentally, [the use of deterministic encryption] is a design trade-off,” Jesse De Meulemeester, lead author of the Battering RAM paper, wrote in an online interview. “Intel and AMD opted for deterministic encryption without integrity or freshness to keep encryption scalable (i.e., protect the entire memory range) and reduce overhead. That choice enables low-cost physical attacks like ours. The only way to fix this likely requires hardware changes, e.g., by providing freshness and integrity in the memory encryption.”
Daniel Genkin, one of the researchers behind Wiretap, agreed. “It’s a design choice made by Intel when SGX moved from client machines to server,” he said. “It offers better performance at the expense of security.” Genkin was referring to Intel’s move about five years ago from SGX on client processors—where encryption was limited to no more than 256 MB of RAM—to server processors that could encrypt terabytes of RAM. The transition required Intel to revamp the encryption to make it scale to such vast amounts of data.
“The papers are two sides of the same coin,” he added.
While both of Tuesday’s attacks exploit weaknesses related to deterministic encryption, their approaches and findings are distinct, and each comes with its own advantages and disadvantages. Both research teams said they learned of the other’s work only after privately submitting their findings to the chipmakers. The teams then synchronized their publication date for Tuesday. It’s not the first time such a coincidence has occurred. In 2018, multiple research teams independently developed attacks with names including Spectre and Meltdown. Both plucked secrets out of Intel and AMD processors by exploiting their use of a performance enhancement known as speculative execution.
AMD declined to comment on the record, and Intel didn’t respond to questions sent by email. In the past, both chipmakers have said that their respective TEEs are designed to protect against compromises of a piece of software or the operating system itself, including in the kernel. The guarantees, the companies have said, don’t extend to physical attacks such as Battering RAM and Wiretap, which rely on physical interposers that sit between the processor and the memory chips. Despite this limitation, many cloud-based services continue to trust assurances from the TEEs even when they have been compromised through physical attacks (more about that later).
Intel and AMD each published advisories on Tuesday.
Battering RAM
Battering RAM uses a custom-built analog switch to act as an interposer that reads encrypted data as it passes between protected memory regions in DDR4 memory chips and an Intel or AMD processor. By design, both SGX and SEV-SNP make this ciphertext inaccessible to an adversary. To bypass that protection, the interposer creates memory aliases in which two different memory addresses point to the same location in the memory module.
The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground. Credit: De Meulemeester et al.
“This lets the attacker capture a victim's ciphertext and later replay it from an alias,” De Meulemeester explained. “Because Intel's and AMD's memory encryption is deterministic, the replayed ciphertext always decrypts into valid plaintext when the victim reads it.” The PhD researcher at KU Leuven in Belgium continued:
When the CPU writes data to memory, the memory controller encrypts it deterministically, using the plaintext and the address as inputs. The same plaintext written to the same address always produces the same ciphertext. Through the alias, the attacker can't read the victim's secrets directly, but they can capture the victim's ciphertext. Later, by replaying this ciphertext at the same physical location, the victim will decrypt it to a valid, but stale, plaintext.
This replay capability is the primitive on which both our SGX and SEV attacks are built.
In both cases, the adversary installs the interposer, either through a supply-chain attack or physical compromise, and then runs a virtual machine or application at a chosen memory location. At the same time, the adversary uses the aliasing to capture the victim’s ciphertext. Later, the adversary replays the captured ciphertext into the same physical location, where the victim decrypts it to valid but stale plaintext.
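The replay primitive can be modeled in a few lines. This is a toy model, not real hardware: memory is a dict of ciphertexts as seen on the bus, and the cipher is a hypothetical XOR keystream standing in for deterministic memory encryption. Because the ciphertext for a given plaintext and address never changes, an old capture replayed at the same address decrypts cleanly:

```python
import hashlib

KEY = b"toy-memory-key"

def enc(pt: bytes, addr: int) -> bytes:
    # Toy deterministic cipher: keystream derived only from key and address,
    # so the same plaintext at the same address always yields the same ciphertext.
    ks = hashlib.sha256(KEY + addr.to_bytes(8, "big")).digest()[:len(pt)]
    return bytes(a ^ b for a, b in zip(pt, ks))

dec = enc  # XOR keystream: decryption is the same operation as encryption

ram = {}  # ciphertext as an attacker would see it on the memory bus

addr = 0x1000
ram[addr] = enc(b"balance=100", addr)   # victim writes a value
stale = ram[addr]                        # attacker captures the ciphertext via an alias
ram[addr] = enc(b"balance=000", addr)    # victim later updates the value
ram[addr] = stale                        # attacker replays the old ciphertext
# The victim's next read decrypts to valid -- but stale -- plaintext:
assert dec(ram[addr], addr) == b"balance=100"
```

With freshness (a counter or nonce bound to each write) the replayed ciphertext would fail to decrypt to anything valid, which is why the researchers point to integrity and freshness as the hardware fix.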
Because SGX uses a single memory-encryption key for the entire protected range of RAM, Battering RAM can gain the ability to read and write plaintext in these regions. This allows the adversary to extract the processor’s provisioning key and, in the process, break the attestation SGX is supposed to provide to certify its integrity and authenticity to remote parties that connect to it.
AMD processors protected by SEV-SNP use a separate encryption key for each virtual machine, which prevents the ciphertext-replay technique used to defeat SGX. Instead, Battering RAM captures and replays the cryptographic elements that are supposed to prove the virtual machine hasn’t been tampered with. By replaying an old attestation report, Battering RAM can load a backdoored virtual machine that still passes SEV-SNP attestation.
The key benefit of Battering RAM is that it can be pulled off with equipment costing less than $50. It also allows active attacks, meaning encrypted data can be both read and tampered with. In addition, it works against both SGX and SEV-SNP, as long as they are used with DDR4 memory modules.
Wiretap
Wiretap, meanwhile, is limited to breaking only SGX working with DDR4, although the researchers say it would likely work against the AMD protections with a modest amount of additional work. Wiretap, however, allows only passive decryption, which means protected data can be read, but data can’t be written to protected regions of memory. The interposer and the equipment for analyzing the captured data also cost considerably more than Battering RAM’s, at about $500 to $1,000.
Like Battering RAM, Wiretap exploits deterministic encryption, but Wiretap maps captured ciphertext to a list of known plaintext values it was derived from. Eventually, the attack recovers enough of those values to reconstruct the attestation key.
Genkin explained:
Let’s say you have an encrypted list of words that will be later used to form sentences. You know the list in advance, and you get an encrypted list in the same order (hence you know the mapping between each word and its corresponding encryption). Then, when you encounter an encrypted sentence, you just take the encryption of each word and match it against your list. By going word by word, you can decrypt the entire sentence. In fact, as long as most of the words are in your list, you can probably decrypt the entire conversation eventually. In our case, we build a dictionary between common values occurring within the ECDSA algorithm and their corresponding encryption, and then use this dictionary to recover these values as they appear, allowing us to extract the key.
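Under the same kind of toy deterministic cipher (hypothetical, for illustration only), the dictionary attack Genkin describes is just a lookup table from pre-computed ciphertexts back to the plaintext values that produced them:

```python
import hashlib

KEY = b"toy-memory-key"

def det_enc(word: bytes) -> bytes:
    # Toy deterministic cipher: identical plaintext always maps to identical ciphertext.
    ks = hashlib.sha256(KEY + len(word).to_bytes(2, "big")).digest()[:len(word)]
    return bytes(a ^ b for a, b in zip(word, ks))

# The attacker pre-computes the encryption of every value they expect to see
# (in Wiretap's case: common intermediate values of the ECDSA computation).
known_words = [b"the", b"attestation", b"key", b"is", b"secret"]
dictionary = {det_enc(w): w for w in known_words}

# Ciphertext passively observed on the memory bus:
observed = [det_enc(w) for w in (b"the", b"key", b"is", b"secret")]

# "Decryption" without the key is a simple lookup, word by word:
recovered = [dictionary[c] for c in observed]
assert recovered == [b"the", b"key", b"is", b"secret"]
```

The attacker never learns the encryption key; determinism alone lets the mapping be inverted for any value that appears in the pre-built dictionary.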
The Wiretap researchers went on to show the types of attacks that are possible when an adversary successfully compromises SGX security. As Intel explains, a key benefit of SGX is remote attestation, a process that verifies that VMs or other software running inside the enclave are authentic and haven’t been tampered with. Once the software passes inspection, the enclave sends the remote party a digitally signed certificate identifying the tested software and certifying that it is safe.
The enclave then opens an encrypted connection with the remote party to ensure credentials and private data can’t be read or modified in transit. Remote attestation works with the industry-standard Elliptic Curve Digital Signature Algorithm (ECDSA), making it easy for all parties to use and trust.
Blockchain services didn’t get the memo
Many cloud-based services rely on TEEs as a foundation for privacy and security within their networks. One such service is Phala, a blockchain provider that allows the drafting and execution of smart contracts. According to the company, computer “state”—meaning system variables, configurations, and other dynamic data an application depends on—is stored and updated only in the enclaves available through SGX, SEV-SNP, and a third TEE available in Arm chips, known as TrustZone. This design allows these smart-contract elements to update in real time through clusters of “worker nodes”—the computers that host and process smart contracts—with no possibility of any node tampering with or viewing the information during execution.
“The attestation quote signed by Intel serves as the proof of a successful execution,” Phala explained. “It proves that specific code has been run inside an SGX enclave and produces certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.” Enclaves provided by AMD and Arm work in a similar manner.
The Wiretap researchers created a “testnet,” a local machine for running worker nodes. With possession of the SGX attestation key, the researchers were able to obtain a cluster key that prevents individual nodes from reading or modifying contract state. With that, Wiretap was able to fully bypass the protection. In a paper, the researchers wrote:
We first enter our attacker enclave into a cluster and note it is given access to the cluster key. Although the cluster key is not directly distributed to our worker upon joining a cluster, we initiate a transfer of the key from any other node in the cluster. This transfer is completed without on-chain interaction, given our worker is part of the cluster. This cluster key can then be used to decrypt all contract interactions within the cluster. Finally, when our testnet accepted our node’s enclave as a gatekeeper, we directly receive a copy of the master key, which is used to derive all cluster keys and therefore all contract keys, allowing us to decrypt the entire testnet.
The researchers performed similar bypasses against a variety of other blockchain services, including Secret, Crust, and IntegriTEE. After the researchers privately shared the results with these companies, they took steps to mitigate the attacks.
Both Battering RAM and Wiretap work only against DDR4 memory because the newer DDR5 runs at much higher bus speeds with a multi-cycle transmission protocol. For the same reason, neither attack works against a similar Intel protection known as TDX, which works only with DDR5.
As noted earlier, Intel and AMD both exclude physical attacks like Battering RAM and Wiretap from the threat model their TEEs are designed to withstand. The Wiretap researchers showed that despite these warnings, Phala and many other cloud-based services still rely on the enclaves to preserve the security and privacy of their networks. The research also makes clear that the TEE defenses completely break down in the event of an attack targeting the hardware supply chain.
For now, the only feasible solution is for chipmakers to replace deterministic encryption with a stronger form of protection. Given the challenges of making such encryption schemes scale to vast amounts of RAM, it’s not clear when that may happen.
Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.
dronexl.co, Haye Kesteloo, October 2, 2025
Drone sightings Thursday evening forced Germany’s Munich airport to suspend operations, cancelling 17 flights and disrupting travel for nearly 3,000 passengers. The incident marks the latest in a concerning series of mysterious drone closures at major European airports—but whether these sightings represent genuine security threats or mass misidentification remains an urgent question.
The pattern echoes both recent suspected hybrid attacks in Scandinavia and last year’s New Jersey drone panic that turned out to be largely misidentified aircraft and celestial objects.
Munich Operations Suspended for Hours
German air traffic control restricted flight operations at Munich airport from 10:18 p.m. local time Thursday after multiple drone sightings, later suspending them entirely. The airport remained closed until 2:59 a.m. Friday (4:59 a.m. local time).
Another 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight tracking service Flightradar24 confirmed the airport would remain closed until early Friday morning.
The first arriving flight was expected at 5:25 a.m., with the first departure scheduled for 5:50 a.m., according to the airport’s website.
European Airports on Edge After Suspected Russian Incidents
The Munich closure comes just days after a wave of drone incidents shut down multiple airports across Denmark and Norway in late September. Copenhagen Airport closed for nearly four hours on September 22 after two to three large drones were spotted in controlled airspace. Oslo’s Gardermoen Airport also briefly closed that same night.
Danish Prime Minister Mette Frederiksen called those incidents “the most serious attack on Danish critical infrastructure to date” and suggested Russia could be behind the disruption. Danish authorities characterized the activity as a likely hybrid operation intended to unsettle the public and disrupt critical infrastructure.
Several more Danish airports—including Aalborg, Billund, and military bases—experienced similar incidents in the following days. Denmark is now considering whether to invoke NATO’s Article 4, which enables member states to request consultations over security concerns.
Russian President Vladimir Putin joked Thursday that he would not fly drones over Denmark anymore, though Moscow has denied responsibility for the incidents. Denmark has stopped short of saying definitively who is responsible, but Western officials point to a pattern of Russian drone violations of NATO airspace in Poland, Romania, and Estonia.
The Misidentification Problem: Lessons from New Jersey
While European officials investigate potential hybrid warfare, the incidents raise uncomfortable parallels to the New Jersey drone panic of late 2024—a mass sighting event that turned out to be largely misidentification of routine aircraft and celestial objects.
Between November and December 2024, thousands of “drone” reports flooded in from New Jersey and neighboring states. The phenomenon sparked widespread fear, congressional hearings, and even forced then-President-elect Donald Trump to cancel a trip to his Bedminster golf club.
Federal investigations later revealed the reality: most sightings were manned aircraft operating lawfully. A joint FBI and DHS statement in December noted: “Historically, we have experienced cases of mistaken identity, where reported drones are, in fact, manned aircraft or facilities.”
TSA documents released months later showed that one of the earliest incidents—which forced a medical helicopter carrying a crash victim to divert—involved three commercial aircraft approaching nearby Solberg Airport. “The alignment of the aircraft gave the appearance to observers on the ground of them hovering in formation while they were actually moving directly at the observers,” the analysis found.
Dr. Will Austin, president of Warren County Community College and a national drone expert, reviewed numerous videos during the panic. He found that “many of the reports received involve misidentification of manned aircraft.” Even Jupiter, which was particularly bright in New Jersey’s night sky that season, was mistaken for a hovering drone.
The panic had real consequences: laser-pointing incidents at aircraft spiked to 59 in December 2024—more than the 49 incidents recorded for all of 2023, according to the FAA.
Munich Already on Edge
Munich was already placed on edge this week when its popular Oktoberfest was temporarily closed due to a bomb threat, and explosives were discovered in a residential building in the city’s north.
Whether Thursday’s drone sightings represent genuine security threats similar to the suspected Russian operations in Scandinavia, or misidentified routine aircraft like in New Jersey, remains under investigation. German authorities have not released details about what was observed or where the objects may have originated.
DroneXL’s Take
We’re watching two very different scenarios collide in dangerous ways. The Denmark and Norway incidents appear to involve sophisticated actors—large drones, coordinated timing, professional operation over multiple airports and military installations. Danish intelligence has credible reasons to suspect state-sponsored hybrid warfare, particularly given documented Russian drone violations of NATO airspace in Poland and Romania.
But the New Jersey panic showed how quickly mass hysteria can spiral when people start looking up. Once the narrative took hold, every airplane on approach, every bright planet, every hobbyist quadcopter became a “mystery drone.” Federal investigators reviewed over 5,000 reports and found essentially nothing anomalous—yet 78% of Americans still believed the government was hiding something.
Munich sits uncomfortably between these realities. Is it part of the escalating pattern of suspected Russian hybrid attacks on European infrastructure? Or is it another case of observers misidentifying routine air traffic in an atmosphere of heightened anxiety?
The distinction matters enormously. Real threats require sophisticated counter-drone systems and potentially invoke NATO collective defense mechanisms. False alarms waste resources, create dangerous situations (like those laser-pointing incidents), and damage the credibility of legitimate security concerns.
Airport authorities worldwide need better drone detection technology that can definitively distinguish between aircraft types. Equally important: they need to be transparent about what they’re actually seeing, rather than leaving information vacuums that fill with speculation and fear.
Following drone sightings late on Thursday and Friday evening and further drone sightings early on Saturday morning, the start of flight operations on 4 October 2025 has been delayed. Flight operations were gradually ramped up and stabilised over the course of the afternoon. Passengers were asked to check the status of their flight on their airline's website before travelling to the airport. Of the more than 1,000 take-offs and landings planned for Saturday, airlines cancelled around 170 flights during the day for operational reasons.
As on previous nights, Munich Airport worked with the airlines to immediately provide for passengers in the terminals. These activities will continue on Saturday evening and into Sunday night. Numerous camp beds will again be set up, and blankets, air mattresses, drinks and snacks will be distributed. In addition, some shops, restaurants and a pharmacy in the public area will extend their opening hours and remain open throughout the night. In addition to numerous employees of the airport, airlines and service providers, numerous volunteers are also on duty.
When a suspected drone is sighted, the safety of travellers is the top priority. Reporting chains between air traffic control, the airport and police authorities have been established for years. It is important to emphasise that the detection of and defence against drones are sovereign tasks and are the responsibility of the federal and state police.
Munich Airport (www.munich-airport.com)
October 3, 2025 (Update)
On Thursday evening (October 2), several drones were sighted in the vicinity of and on the grounds of Munich Airport. The first reports were received at around 8:30 p.m. Initially, areas around the airport, including Freising and Erding, were affected.
The state police immediately launched extensive search operations with a large number of officers in the vicinity of the airport. At the same time, the federal police immediately carried out surveillance and search operations on the airport grounds. However, it has not yet been possible to identify the perpetrator.
At around 9:05 p.m., drones were reported near the airport fence. At around 10:10 p.m., the first sighting was made on the airport grounds. As a result, flight operations were gradually suspended at 10:18 p.m. for safety reasons. The preventive closure affected both runways from 10:35 p.m. onwards. The sightings ended around midnight. According to the airport operator, there were 17 flight cancellations and 15 diversions by that time. Helicopters from the federal police and the Bavarian state police were also deployed to monitor the airspace and conduct searches.
Munich Airport, in cooperation with the airlines, immediately took care of the passengers in the terminals. Camp beds were set up, and blankets, drinks, and snacks were provided. In addition, 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight operations resumed as normal today (Friday, October 3).
Responsibilities and cooperation
Within the scope of their respective tasks, the German Air Traffic Control (DFS), the state aviation security authorities, the state police forces, and the federal police are responsible for the detection and defense against drones at commercial airports.
The measures are carried out in close coordination between all parties involved and the airport operator on the basis of jointly developed emergency plans. The local state police force is responsible for preventive policing in the vicinity of the airport, while the federal police is responsible for policing on the airport grounds. Criminal prosecution is the responsibility of the state police.
Note: Please understand that for tactical reasons, the security authorities are unable to provide any further information on the systems and measures used. Further investigations will be conducted by the Bavarian police, as they have jurisdiction in this matter.
techcrunch.com - Lorenzo Franceschi-Bicchierai
Zack Whittaker
6:17 AM PDT · October 3, 2025
The hacking group claims to have stolen about a billion records from companies, including FedEx, Qantas, and TransUnion, that store their customer and company data in Salesforce.
A notorious predominantly English-speaking hacking group has launched a website to extort its victims, threatening to release about a billion records stolen from companies who store their customers’ data in cloud databases hosted by Salesforce.
The loosely organized group, which has been known as Lapsus$, Scattered Spider, and ShinyHunters, has published a dedicated data leak site on the dark web, called Scattered LAPSUS$ Hunters.
The website, first spotted by threat intelligence researchers on Friday and seen by TechCrunch, aims to pressure victims into paying the hackers to avoid having their stolen data published online.
“Contact us to regain control on data governance and prevent public disclosure of your data,” reads the site. “Do not be the next headline. All communications demand strict verification and will be handled with discretion.”
Over the last few weeks, the ShinyHunters gang allegedly hacked dozens of high-profile companies by breaking into their cloud-based databases hosted by Salesforce.
Insurance giant Allianz Life, Google, fashion conglomerate Kering, the airline Qantas, carmaking giant Stellantis, credit bureau TransUnion, and the employee management platform Workday, among several others, have confirmed their data was stolen in these mass hacks.
The hackers’ leak site lists several alleged victims, including FedEx, Hulu (owned by Disney), and Toyota Motors, none of which responded to a request for comment on Friday.
It’s not clear if the companies known to have been hacked but not listed on the hacking group’s leak site have paid a ransom to the hackers to prevent their data from being published. When reached by TechCrunch, a representative from ShinyHunters said, “there are numerous other companies that have not been listed,” but declined to say why.
At the top of the site, the hackers mention Salesforce and demand that the company negotiate a ransom, threatening that otherwise “all your customers [sic] data will be leaked.” The tone of the message suggests that Salesforce has not yet engaged with the hackers.
Salesforce spokesperson Nicole Aranda provided a link to the company’s statement, which notes that the company is “aware of recent extortion attempts by threat actors.”
“Our findings indicate these attempts relate to past or unsubstantiated incidents, and we remain engaged with affected customers to provide support,” the statement reads. “At this time, there is no indication that the Salesforce platform has been compromised, nor is this activity related to any known vulnerability in our technology.”
Aranda declined to comment further.
For weeks, security researchers have speculated that the group, which has historically eschewed a public presence online, was planning to publish a data leak website to extort its victims.
Historically, such websites have been associated with foreign, often Russian-speaking, ransomware gangs. In the last few years, these organized cybercrime groups have evolved from stealing and encrypting their victims’ data and then privately demanding a ransom, to simply threatening to publish the stolen data online unless they get paid.
securityaffairs.com
October 04, 2025
Pierluigi Paganini
GreyNoise saw a 500% spike in scans on Palo Alto Networks login portals on Oct. 3, 2025, the highest in three months.
Cybersecurity firm GreyNoise reported a 500% surge in scans targeting Palo Alto Networks login portals on October 3, 2025, marking the highest activity in three months.
On October 3, the researchers observed over 1,285 IPs scanning Palo Alto portals, up from a usual baseline of around 200. GreyNoise classified 93% of the IPs as suspicious and 7% as malicious.
Most originated from the U.S., with smaller clusters in the U.K., Netherlands, Canada, and Russia.
GreyNoise described the traffic as targeted and structured, aimed at Palo Alto login portals and split across distinct scanning clusters.
The scans hit GreyNoise’s emulated Palo Alto profiles and focused mainly on systems in the U.S. and Pakistan, indicating coordinated, targeted reconnaissance.
GreyNoise found that recent Palo Alto scanning mirrors Cisco ASA activity, showing regional clustering and shared TLS fingerprints linked to the Netherlands infrastructure. Both used similar tools, suggesting possible shared infrastructure or operators. The overlap follows a Cisco ASA scanning surge preceding the disclosure of two zero-day vulnerabilities.
“Both Cisco ASA and Palo Alto login scanning traffic in the past 48 hours share a dominant TLS fingerprint tied to infrastructure in the Netherlands. This comes after GreyNoise initially reported an ASA scanning surge before Cisco’s disclosure of two ASA zero-days.” reads the report published by GreyNoise. “In addition to a possible connection to ongoing Cisco ASA scanning, GreyNoise identified concurrent surges across remote access services. While suspicious, we are unsure if this activity is related.”
GreyNoise noted in July that spikes in Palo Alto scans have sometimes preceded the disclosure of new flaws within six weeks; the researchers are monitoring whether the latest surge signals another upcoming disclosure.
“GreyNoise is developing an enhanced dynamic IP blocklist to help defenders take faster action on emerging threats.” concludes the report.
discord.com
Discord
October 3, 2025
At Discord, protecting the privacy and security of our users is a top priority. That’s why it’s important to us that we’re transparent with them about events that impact their personal information.
Discord recently discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers.
This incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams.
This unauthorized party did not gain access to Discord directly.
No messages or activities were accessed beyond what users may have discussed with Customer Support or Trust & Safety agents.
We immediately revoked the customer support provider’s access to our ticketing system and continue to investigate this matter.
We’re working closely with law enforcement to investigate this matter.
We are in the process of emailing the users impacted.
Recently, we discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers. The unauthorized party then gained access to information from a limited number of users who had contacted Discord through our Customer Support and/or Trust & Safety teams.
As soon as we became aware of this attack, we took immediate steps to address the situation. This included revoking the customer support provider’s access to our ticketing system, launching an internal investigation, engaging a leading computer forensics firm to support our investigation and remediation efforts, and engaging law enforcement.
We are in the process of contacting impacted users. If you were impacted, you will receive an email from noreply@discord.com. We will not contact you about this incident via phone – official Discord communications channels are limited to emails from noreply@discord.com.
What happened?
An unauthorized party targeted our third-party customer support services to access user data, with the aim of extorting a financial ransom from Discord.
What data was involved?
The data that may have been impacted was related to our customer service system. This may include:
Name, Discord username, email and other contact details if provided to Discord customer support
Limited billing information such as payment type, the last four digits of your credit card, and purchase history if associated with your account
IP addresses
Messages with our customer service agents
Limited corporate data (training materials, internal presentations)
The unauthorized party also gained access to a small number of government‑ID images (e.g., driver’s license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive.
What data was not involved?
Full credit card numbers or CCV codes
Messages or activity on Discord beyond what users may have discussed with customer support
Passwords or authentication data
What are we doing about this?
Discord has taken, and will continue to take, all appropriate steps in response to this situation. As standard, we will continue to frequently audit our third-party systems to ensure they meet our security and privacy standards. In addition, we have:
Notified relevant data protection authorities.
Proactively engaged with law enforcement to investigate this attack.
Reviewed our threat detection systems and security controls for third-party support providers.
Taking next steps
Looking ahead, we recommend impacted users stay alert when receiving messages or other communication that may seem suspicious. We have service agents on hand to answer questions and provide additional support.
We take our responsibility to protect your personal data seriously and understand the inconvenience and concern this may cause.
theregister.com • The Register
by Jessica Lyons
Wed 1 Oct 2025 // 19:35 UTC
Who wouldn't want root access on cluster master nodes?
A 9.9-out-of-10-severity bug in Red Hat's OpenShift AI service could allow a remote attacker with minimal authentication to steal data, disrupt services, and fully hijack the platform.
"A low-privileged attacker with access to an authenticated account, for example as a data scientist using a standard Jupyter notebook, can escalate their privileges to a full cluster administrator," the IBM subsidiary warned in a security alert published earlier this week.
"This allows for the complete compromise of the cluster's confidentiality, integrity, and availability," the alert continues. "The attacker can steal sensitive data, disrupt all services, and take control of the underlying infrastructure, leading to a total breach of the platform and all applications hosted on it."
Red Hat deemed the vulnerability, tracked as CVE-2025-10725, "important" despite its 9.9 CVSS score, which garners a critical-severity rating from the National Vulnerability Database - and basically any other organization that issues CVEs. This, the vendor explained, is because the flaw requires some level of authentication, albeit minimal, for an attacker to jeopardize the hybrid cloud environment.
Users can mitigate the flaw by removing the ClusterRoleBinding that links the kueue-batch-user-role ClusterRole with the system:authenticated group. "The permission to create jobs should be granted on a more granular, as-needed basis to specific users or groups, adhering to the principle of least privilege," Red Hat added.
Additionally, the vendor suggests not granting broad permissions to system-level groups.
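In practice, the mitigation means deleting the overly broad binding and replacing it with a namespaced grant to a specific group. The manifest below is a hedged sketch of what such a least-privilege replacement could look like; the binding name, namespace, and group name are illustrative assumptions, not values from Red Hat's advisory (only the `kueue-batch-user-role` ClusterRole name comes from the source).

```yaml
# Hypothetical least-privilege replacement: instead of binding
# kueue-batch-user-role to system:authenticated cluster-wide,
# grant it to one named group in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kueue-batch-users      # illustrative name
  namespace: data-science      # illustrative namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kueue-batch-user-role  # role named in Red Hat's advisory
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: data-science-team      # illustrative group
```

Because this is a RoleBinding rather than a ClusterRoleBinding, the job-creation permission applies only within the named namespace, matching the "granular, as-needed" guidance above.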
Red Hat didn't immediately respond to The Register's inquiries, including if the CVE has been exploited. We will update this story as soon as we receive any additional information.
Whose role is it anyway?
OpenShift AI is an open platform for building and managing AI applications across hybrid cloud environments.
As noted earlier, it includes a ClusterRole named "kueue-batch-user-role." The security issue here exists because this role is incorrectly bound to the system:authenticated group.
"This grants any authenticated entity, including low-privileged service accounts for user workbenches, the permission to create OpenShift Jobs in any namespace," according to a Bugzilla flaw-tracking report.
One of these low-privileged accounts could abuse this to schedule a malicious job in a privileged namespace, configure it to run with a high-privilege ServiceAccount, exfiltrate that ServiceAccount token, and then "progressively pivot and compromise more powerful accounts, ultimately achieving root access on cluster master nodes and leading to a full cluster takeover," the report said.
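Administrators who want to check for this pattern can inspect the output of `oc get clusterrolebindings -o json` for any binding whose subjects include the `system:authenticated` group. The snippet below is a minimal sketch of that check; the sample data is invented for illustration and is not taken from a real cluster.

```python
# Sketch: flag ClusterRoleBindings that grant a role to every authenticated
# principal -- the misconfiguration pattern behind CVE-2025-10725.
# Input mirrors the "items" list of `oc get clusterrolebindings -o json`.

def risky_bindings(items):
    """Return names of bindings whose subjects include system:authenticated."""
    risky = []
    for binding in items:
        for subject in binding.get("subjects") or []:
            if (subject.get("kind") == "Group"
                    and subject.get("name") == "system:authenticated"):
                risky.append(binding["metadata"]["name"])
                break
    return risky

# Illustrative sample: one over-broad binding, one properly scoped one.
sample = [
    {"metadata": {"name": "kueue-batch-user-rolebinding"},
     "roleRef": {"kind": "ClusterRole", "name": "kueue-batch-user-role"},
     "subjects": [{"kind": "Group", "name": "system:authenticated"}]},
    {"metadata": {"name": "team-a-jobs"},
     "roleRef": {"kind": "ClusterRole", "name": "kueue-batch-user-role"},
     "subjects": [{"kind": "Group", "name": "data-science-team-a"}]},
]

print(risky_bindings(sample))  # ['kueue-batch-user-rolebinding']
```

Any binding the check flags is a candidate for the removal and re-scoping Red Hat describes.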
"Vulnerabilities offering a path for a low privileged user to fully take over an environment needs to be patched in the form of an incident response cycle, seeking to prove that the environment was not already compromised," Trey Ford, chief strategy and trust officer at crowdsourced security company Bugcrowd, said in an email to The Register.
In other words: "Assume breach," Ford added.
"The administrators managing OpenShift AI infrastructure need to patch this with a sense of urgency - this is a delightful vulnerability pattern for attackers looking to acquire both access and data," he said. "Security teams must move with a sense of purpose, both verifying that these environments have been patched, then investigating to confirm whether-and-if their clusters have been compromised."
bleepingcomputer.com By Sergiu Gatlan
October 3, 2025
An extortion group has launched a new data leak site to publicly extort dozens of companies impacted by a wave of Salesforce breaches, leaking samples of data stolen in the attacks.
The threat actors responsible for these attacks claim to be part of the ShinyHunters, Scattered Spider, and Lapsus$ groups, collectively referring to themselves as "Scattered Lapsus$ Hunters."
Today, they launched a new data leak site containing 39 companies impacted by the attacks. Each entry includes samples of data allegedly stolen from victims' Salesforce instances, and warns the victims to reach out to "prevent public disclosure" of their data before the October 10 deadline is reached.
The companies being extorted on the data leak site include well-known brands and organizations such as FedEx, Disney/Hulu, Home Depot, Marriott, Google, Cisco, Toyota, Gap, McDonald's, Walgreens, Instacart, Cartier, Adidas, Saks Fifth Avenue, Air France & KLM, TransUnion, HBO Max, UPS, Chanel, and IKEA.
"All of them have been contacted long ago, they saw the email because I saw them download the samples multiple times. Most of them chose to not disclose and ignore," ShinyHunters told BleepingComputer.
"We highly advise you proceed into the right decision, your organisation can prevent the release of this data, regain control over the situation and all operations remain stable as always. We highly recommend a decision-maker to get involved as we are presenting a clear and mutually beneficial opportunity to resolve this matter," they warned on the leak site.
The threat actors also added a separate entry requesting that Salesforce pay a ransom to prevent all impacted customers' data (approximately 1 billion records containing personal information) from being leaked.
"Should you comply, we will withdraw from any active or pending negotiation indiviually from your customers. Your customers will not be attacked again nor will they face a ransom from us again, should you pay," they added.
The extortion group also threatened the company, stating that it would help law firms pursue civil and commercial lawsuits against Salesforce following the data breaches and warned that the company had also failed to protect customers' data as required by the European General Data Protection Regulation (GDPR).