Cyberveillecurated by Decio
Nuage de tags
Mur d'images
Quotidien
Flux RSS
  • Flux RSS
  • Daily Feed
  • Weekly Feed
  • Monthly Feed
Filtres

Liens par page

  • 20 links
  • 50 links
  • 100 links

Filtres

Untagged links
7 résultats taggé arstechnica.com  ✕
Critics scoff after Microsoft warns AI feature can infect machines and pilfer data https://arstechnica.com/security/2025/11/critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data/
22/11/2025 13:57:23
QRCode
archive.org
thumbnail

Ars Technica - arstechnica.com
Dan Goodin Senior Security Editor
19 nov. 2025 21:25

Integration of Copilot Actions into Windows is off by default, but for how long?

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply
The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”
Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals
Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
Agents must receive user approval when accessing user data or taking actions
The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

arstechnica.com EN 2025 AI Microsoft feature Critics
Researchers question Anthropic claim that AI-assisted attack was 90% autonomous https://arstechnica.com/security/2025/11/researchers-question-anthropic-claim-that-ai-assisted-attack-was-90-autonomous/
15/11/2025 16:18:03
QRCode
archive.org
thumbnail

- Ars Technica
arstechnica.com
Dan Goodin – 14 nov. 2025 13:20

The results of AI-assisted hacking aren’t as impressive as many might have us believe.

Researchers from Anthropic said they recently observed the “first reported AI-orchestrated cyber espionage campaign” after detecting China-state hackers using the company’s Claude AI tool in a campaign aimed at dozens of targets. Outside researchers are much more measured in describing the significance of the discovery.

Anthropic published the reports on Thursday here and here. In September, the reports said, Anthropic discovered a “highly sophisticated espionage campaign,” carried out by a Chinese state-sponsored group, that used Claude Code to automate up to 90 percent of the work. Human intervention was required “only sporadically (perhaps 4-6 critical decision points per hacking campaign).” Anthropic said the hackers had employed AI agentic capabilities to an “unprecedented” extent.

“This campaign has substantial implications for cybersecurity in the age of AI ‘agents’—systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention,” Anthropic said. “Agents are valuable for everyday work and productivity—but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks.”

“Ass-kissing, stonewalling, and acid trips”
Outside researchers weren’t convinced the discovery was the watershed moment the Anthropic posts made it out to be. They questioned why these sorts of advances are often attributed to malicious hackers when white-hat hackers and developers of legitimate software keep reporting only incremental gains from their use of AI.

“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” Dan Tentler, executive founder of Phobos Group and a researcher with expertise in complex security breaches, told Ars. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”

Researchers don’t deny that AI tools can improve workflow and shorten the time required for certain tasks, such as triage, log analysis, and reverse engineering. But the ability for AI to automate a complex chain of tasks with such minimal human interaction remains elusive. Many researchers compare advances from AI in cyberattacks to those provided by hacking tools such as Metasploit or SEToolkit, which have been in use for decades. There’s no doubt that these tools are useful, but their advent didn’t meaningfully increase hackers’ capabilities or the severity of the attacks they produced.

Another reason the results aren’t as impressive as they’re made out to be: The threat actors—which Anthropic tracks as GTG-1002—targeted at least 30 organizations, including major technology corporations and government agencies. Of those, only a “small number” of the attacks succeeded. That, in turn, raises questions. Even assuming so much human interaction was eliminated from the process, what good is that when the success rate is so low? Would the number of successes have increased if the attackers had used more traditional, human-involved methods?

According to Anthropic’s account, the hackers used Claude to orchestrate attacks using readily available open source software and frameworks. These tools have existed for years and are already easy for defenders to detect. Anthropic didn’t detail the specific techniques, tooling, or exploitation that occurred in the attacks, but so far, there’s no indication that the use of AI made them more potent or stealthy than more traditional techniques.

“The threat actors aren’t inventing something new here,” independent researcher Kevin Beaumont said.

Even Anthropic noted “an important limitation” in its findings:

Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn’t work or identifying critical discoveries that proved to be publicly available information. This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results. This remains an obstacle to fully autonomous cyberattacks.

How (Anthropic says) the attack unfolded
Anthropic said GTG-1002 developed an autonomous attack framework that used Claude as an orchestration mechanism that largely eliminated the need for human involvement. This orchestration system broke complex multi-stage attacks into smaller technical tasks such as vulnerability scanning, credential validation, data extraction, and lateral movement.

“The architecture incorporated Claude’s technical capabilities as an execution engine within a larger automated system, where the AI performed specific technical actions based on the human operators’ instructions while the orchestration logic maintained attack state, managed phase transitions, and aggregated results across multiple sessions,” Anthropic said. “This approach allowed the threat actor to achieve operational scale typically associated with nation-state campaigns while maintaining minimal direct involvement, as the framework autonomously progressed through reconnaissance, initial access, persistence, and data exfiltration phases by sequencing Claude’s responses and adapting subsequent requests based on discovered information.”

The attacks followed a five-phase structure that increased AI autonomy through each one.

The life cycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools, often via the Model Context Protocol (MCP). At various points during the attack, the AI returns to its human operator for review and further direction. Credit: Anthropic
The attackers were able to bypass Claude guardrails in part by breaking tasks into small steps that, in isolation, the AI tool didn’t interpret as malicious. In other cases, the attackers couched their inquiries in the context of security professionals trying to use Claude to improve defenses.

As noted last week, AI-developed malware has a long way to go before it poses a real-world threat. There’s no reason to doubt that AI-assisted cyberattacks may one day produce more potent attacks. But the data so far indicates that threat actors—like most others using AI—are seeing mixed results that aren’t nearly as impressive as those in the AI industry claim

arstechnica.com EN 2025 Anthropic claim china cyberattack
Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking https://arstechnica.com/gadgets/2025/10/leaker-reveals-which-pixels-are-vulnerable-to-cellebrite-phone-hacking/
31/10/2025 22:23:35
QRCode
archive.org
thumbnail
  • Ars Technica
    Ryan Whitwam – 30 oct. 2025 21:29 | 79

Cellebrite can apparently extract data from most Pixel phones, unless they’re running GrapheneOS.

Despite being a vast repository of personal information, smartphones used to have little by way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking.

This person, who goes by the handle rogueFed, posted screenshots from the recent Microsoft Teams meeting to the GrapheneOS forums (spotted by 404 Media). GrapheneOS is an Android-based operating system that can be installed on select phones, including Pixels. It ships with enhanced security features and no Google services. Because of its popularity among the security-conscious, Cellebrite apparently felt the need to include it in its matrix of Pixel phone support.

The screenshot includes data on the Pixel 6, Pixel 7, Pixel 8, and Pixel 9 family. It does not list the Pixel 10 series, which launched just a few months ago. The phone support is split up into three different conditions: before first unlock, after first unlock, and unlocked. The before first unlock (BFU) state means the phone has not been unlocked since restarting, so all data is encrypted. This is traditionally the most secure state for a phone. In the after first unlock (AFU) state, data extraction is easier. And naturally, an unlocked phone is open season on your data.
At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones in unlocked, AFU, and BFU states on stock software. However, it cannot brute-force passcodes to enable full control of a device. The leaker also notes law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.

For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.

The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.

We’ve reached out to Google to inquire about why a custom ROM created by a small non-profit is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.

arstechnica.com EN 2025 Cellebrite Pixels leak
Hackers can steal 2FA codes and private messages from Android phones https://arstechnica.com/security/2025/10/no-fix-yet-for-attack-that-lets-hackers-pluck-2fa-codes-from-android-phones/
15/10/2025 08:37:00
QRCode
archive.org
thumbnail
  • Ars Technica
    Dan Goodin Senior Security Editor
    13 oct. 2025 23:36

Malicious app required to make “Pixnapping” attack work requires no permissions.

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 phone and likely could be modified to work on other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot
Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps
The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This call causes the information to be sent to the Android rendering pipeline, the system that takes each app’s pixels so they can be rendered on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white or, more generally, if the color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.

As Ars reader hotball put it in the comments below:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, the utility of more complicated and limited attacks is probably of less value.

arstechnica.com EN 2025 Android Pixnapping attack
Intel and AMD trusted enclaves, a foundation for network security, fall to physical attacks https://arstechnica.com/security/2025/09/intel-and-amd-trusted-enclaves-the-backbone-of-network-security-fall-to-physical-attacks/
05/10/2025 18:46:13
QRCode
archive.org
thumbnail

Ars Technica, Dan Goodin – 30 sept. 2025 22:25

The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.

In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (Trusted Execution Enclaves)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use it. Intel calls its protection SGX, and AMD has named it SEV-SNP.

Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.

Attacking deterministic encryption
Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.

Deterministic encryption is well-suited for certain uses, such as full disk encryption, where the data being protected never changes once the thing being protected (in this case, the drive) falls into an attacker’s hands. The same encryption is suboptimal for protecting data flowing between a CPU and a memory chip because adversaries can observe the ciphertext each time the plaintext changes, opening the system to replay attacks and other well-known exploit techniques. Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.

“Fundamentally, [the use of deterministic encryption] is a design trade-off,” Jesse De Meulemeester, lead author of the Battering RAM paper, wrote in an online interview. “Intel and AMD opted for deterministic encryption without integrity or freshness to keep encryption scalable (i.e., protect the entire memory range) and reduce overhead. That choice enables low-cost physical attacks like ours. The only way to fix this likely requires hardware changes, e.g., by providing freshness and integrity in the memory encryption.”

Daniel Genkin, one of the researchers behind Wiretap, agreed. “It’s a design choice made by Intel when SGX moved from client machines to server,” he said. “It offers better performance at the expense of security.” Genkin was referring to Intel’s move about five years ago to discontinue SGX for client processors—where encryption was limited to no more than 256 MB of RAM—to server processors that could encrypt terabytes of RAM. The transition required Intel to revamp the encryption to make it scale for such vast amounts of data.

“The papers are two sides of the same coin,” he added.

While both of Tuesday’s attacks exploit weaknesses related to deterministic encryption, their approaches and findings are distinct, and each comes with its own advantages and disadvantages. Both research teams said they learned of the other’s work only after privately submitting their findings to the chipmakers. The teams then synchronized the publish date for Tuesday. It’s not the first time such a coincidence has occurred. In 2018, multiple research teams independently developed attacks with names including Spectre and Meltdown. Both plucked secrets out of Intel and AMD processors by exploiting their use of performance enhancement known as speculative execution.

AMD declined to comment on the record, and Intel didn’t respond to questions sent by email. In the past, both chipmakers have said that their respective TEEs are designed to protect against compromises of a piece of software or the operating system itself, including in the kernel. The guarantees, the companies have said, don’t extend to physical attacks such as Battering RAM and Wiretap, which rely on physical interposers that sit between the processor and the memory chips. Despite this limitation, many cloud-based services continue to trust assurances from the TEEs even when they have been compromised through physical attacks (more about that later).

Intel on Tuesday published this advisory. AMD posted one here.

Battering RAM
Battering RAM uses a custom-built analog switch to act as an interposer that reads encrypted data as it passes between protected memory regions in DDR4 memory chips and an Intel or AMD processor. By design, both SGX and SEV-SNP make this ciphertext inaccessible to an adversary. To bypass that protection, the interposer creates memory aliases in which two different memory addresses point to the same location in the memory module.

The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground.

The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground. Credit: De Meulemeester et al.

“This lets the attacker capture a victim's ciphertext and later replay it from an alias,” De Meulemeester explained. “Because Intel's and AMD's memory encryption is deterministic, the replayed ciphertext always decrypts into valid plaintext when the victim reads it.” The PhD researcher at KU Leuven in Belgium continued:

When the CPU writes data to memory, the memory controller encrypts it deterministically, using the plaintext and the address as inputs. The same plaintext written to the same address always produces the same ciphertext. Through the alias, the attacker can't read the victim's secrets directly, but they can capture the victim's ciphertext. Later, by replaying this ciphertext at the same physical location, the victim will decrypt it to a valid, but stale, plaintext.

This replay capability is the primitive on which both our SGX and SEV attacks are built.

In both cases, the adversary installs the interposer, either through a supply-chain attack or physical compromise, and then runs either a virtual machine or application at a chosen memory location. At the same time, the adversary also uses the aliasing to capture the ciphertext. Later, the adversary replays the captured ciphertext, which, because it's running in the region the attacker has access to, is then replayed as plaintext.

Because SGX uses a single memory-encryption key for the entire protected range of RAM, Battering RAM can gain the ability to write or read plaintext into these regions. This allows the adversary to extract the processor’s provisioning key and, in the process, break the attestation SGX is supposed to provide to certify its integrity and authenticity to remote parties that connect to it.

AMD processors protected by SEV use a single encryption key to produce all ciphertext on a given virtual machine. This prevents the ciphertext replaying technique used to defeat SGX. Instead, Battering RAM captures and replays the cryptographic elements that are supposed to prove the virtual machine hasn’t been tampered with. By replaying an old attestation report, Battering RAM can load a backdoored Virtual machine that still carries the SEV-SNP certification that the VM hasn’t been tampered with.

The key benefit of Battering RAM is that it requires equipment that costs less than $50 to pull off. It also allows active decryption, meaning encrypted data can be both read and tampered with. In addition, it works against both SGX and SEV-SNP, as long as they work with DDR4 memory modules.

Wiretap
Wiretap, meanwhile, is limited to breaking only SGX working with DDR4, although the researchers say it would likely work against the AMD protections with a modest amount of additional work. Wiretap, however, allows only for passive decryption, which means protected data can be read, but data can’t be written to protected regions of memory. The cost of the interposer and the equipment for analyzing the captured data also costs considerably more than Battering RAM, at about $500 to $1,000.

Like Battering RAM, Wiretap exploits deterministic encryption, except the latter attack maps ciphertext to a list of known plaintext words that the ciphertext is derived from. Eventually, the attack can recover enough ciphertext to reconstruct the attestation key.

Genkin explained:

Let’s say you have an encrypted list of words that will be later used to form sentences. You know the list in advance, and you get an encrypted list in the same order (hence you know the mapping between each word and its corresponding encryption). Then, when you encounter an encrypted sentence, you just take the encryption of each word and match it against your list. By going word by word, you can decrypt the entire sentence. In fact, as long as most of the words are in your list, you can probably decrypt the entire conversation eventually. In our case, we build a dictionary between common values occurring within the ECDSA algorithm and their corresponding encryption, and then use this dictionary to recover these values as they appear, allowing us to extract the key.

The Wiretap researchers went on to show the types of attacks that are possible when an adversary successfully compromises SGX security. As Intel explains, a key benefit of SGX is remote attestation, a process that first verifies the authenticity and integrity of VMs or other software running inside the enclave and hasn’t been tampered with. Once the software passes inspection, the enclave sends the remote party a digitally signed certificate providing the identity of the tested software and a clean bill of health certifying the software is safe.

The enclave then opens an encrypted connection with the remote party to ensure credentials and private data can’t be read or modified during transit. Remote attestation works with the industry standard Elliptic Curve Digital Signature Algorithm, making it easy for all parties to use and trust.

Blockchain services didn’t get the memo
Many cloud-based services rely on TEEs as a foundation for privacy and security within their networks. One such service is Phala, a blockchain provider that allows the drafting and execution of smart contracts. According to the company, computer “state”—meaning system variables, configurations, and other dynamic data an application depends on—are stored and updated only in the enclaves available through SGX, SEV-SNP, and a third trusted enclave available in Arm chips known as TrustZone. This design allows these smart contract elements to update in real time through clusters of “worker nodes”—meaning the computers that host and process smart contracts—with no possibility of any node tampering with or viewing the information during execution.

“The attestation quote signed by Intel serves as the proof of a successful execution,” Phala explained. “It proves that specific code has been run inside an SGX enclave and produces certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.” Enclaves provided by AMD and Arm work in a similar manner.

The Wiretap researchers created a “testnet,” a local machine for running worker modes. With possession of the SGX attestation key, the researchers were able to obtain a cluster key that prevents individual nodes from reading or modifying contract state. With that, Wiretap was able to fully bypass the protection. In a paper, the researchers wrote:

We first enter our attacker enclave into a cluster and note it is given access to the cluster key. Although the cluster key is not directly distributed to our worker upon joining a cluster, we initiate a transfer of the key from any other node in the cluster. This transfer is completed without on-chain interaction, given our worker is part of the cluster. This cluster key can then be used to decrypt all contract interactions within the cluster. Finally, when our testnet accepted our node’s enclave as a gatekeeper, we directly receive a copy of the master key, which is used to derive all cluster keys and therefore all contract keys, allowing us to decrypt the entire testnet.

The researchers performed similar bypasses against a variety of other blockchain services, including Secret, Crust, and IntegriTEE. After the researchers privately shared the results with these companies, they took steps to mitigate the attacks.

Both Battering RAM and Wiretap work only against DDR4 forms of memory chips because the newer DDR5 runs at much higher bus speeds with a multi-cycle transmission protocol. For that reason, neither attack works against a similar Intel protection known as TDX because it works only with DDR5.

As noted earlier, Intel and AMD both exclude physical attacks like Battering RAM and Wiretap from the threat model their TEEs are designed to withstand. The Wiretap researchers showed that despite these warnings, Phala and many other cloud-based services still rely on the enclaves to preserve the security and privacy of their networks. The research also makes clear that the TEE defenses completely break down in the event of an attack targeting the hardware supply chain.

For now, the only feasible solution is for chipmakers to replace deterministic encryption with a stronger form of protection. Given the challenges of making such encryption schemes scale to vast amounts of RAM, it’s not clear when that may happen.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

arstechnica.com EN AMD Intel trusted enclaves CPI physical-attacks chipmakers
Google discovered a new scam—and also fell victim to it https://arstechnica.com/information-technology/2025/08/google-sales-data-breached-in-the-same-scam-it-discovered/
08/08/2025 21:06:11
QRCode
archive.org
thumbnail

arstechnica.com - Disclosure comes two months after Google warned the world of ongoing spree.
In June, Google said it unearthed a campaign that was mass-compromising accounts belonging to customers of Salesforce. The means: an attacker pretending to be someone in the customer's IT department feigning some sort of problem that required immediate access to the account. Two months later, Google has disclosed that it, too, was a victim.

The series of hacks are being carried out by financially motivated threat actors out to steal data in hopes of selling it back to the targets at sky-high prices. Rather than exploiting software or website vulnerabilities, they take a much simpler approach: calling the target and asking for access. The technique has proven remarkably successful. Companies whose Salesforce instances have been breached in the campaign, Bleeping Computer reported, include Adidas, Qantas, Allianz Life, Cisco, and the LVMH subsidiaries Louis Vuitton, Dior, and Tiffany & Co.

Better late than never
The attackers abuse a Salesforce feature that allows customers to link their accounts to third-party apps that integrate data with in-house systems for blogging, mapping tools, and similar resources. The attackers in the campaign contact employees and instruct them to connect an external app to their Salesforce instance. As the employee complies, the attackers ask the employee for an eight-digit security code that the Salesforce interface requires before a connection is made. The attackers then use this number to gain access to the instance and all data stored in it.

Google said that its Salesforce instance was among those that were compromised. The breach occurred in June, but Google only disclosed it on Tuesday, presumably because the company only learned of it recently.

“Analysis revealed that data was retrieved by the threat actor during a small window of time before the access was cut off,” the company said.

Data retrieved by the attackers was limited to business information such as business names and contact details, which Google said was “largely public” already.

Google initially attributed the attacks to a group traced as UNC6040. The company went on to say that a second group, UNC6042, has engaged in extortion activities, “sometimes several months after” the UNC6040 intrusions. This group brands itself under the name ShinyHunters.

“In addition, we believe threat actors using the 'ShinyHunters' brand may be preparing to escalate their extortion tactics by launching a data leak site (DLS),” Google said. “These new tactics are likely intended to increase pressure on victims, including those associated with the recent UNC6040 Salesforce-related data breaches.”

With so many companies falling to this scam—including Google, which only disclosed the breach two months after it happened—the chances are good that there are many more we don’t know about. All Salesforce customers should carefully audit their instances to see what external sources have access to it. They should also implement multifactor authentication and train staff how to detect scams before they succeed.

arstechnica.com EN 2025 Salesforce instance Google data-breach UNC6040
Coming to Apple OSes: A seamless, secure way to import and export passkeys https://arstechnica.com/security/2025/06/apple-previews-new-import-export-feature-to-make-passkeys-more-interoperable/
16/06/2025 22:29:17
QRCode
archive.org
thumbnail

Apple OSes will soon transfer passkeys seamlessly and securely across platforms.

Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn't susceptible to credential phishing and other attacks targeting passwords.

The import/export feature, which Apple demonstrated at this week’s Worldwide Developers Conference, will be available in the next major releases of iOS, macOS, iPadOS, and visionOS. It aims to solve one of the biggest shortcomings of passkeys as they have existed to date. Passkeys created on one operating system or credential manager are largely bound to those environments. A passkey created on a Mac, for instance, can sync easily enough with other Apple devices connected to the same iCloud account. Transferring them to a Windows device or even a dedicated credential manager installed on the same Apple device has been impossible.

Growing pains
That limitation has led to criticisms that passkeys are a power play by large companies to lock users into specific product ecosystems. Users have also rightly worried that the lack of transferability increases the risk of getting locked out of important accounts if a device storing passkeys is lost, stolen, or destroyed.

The FIDO Alliance, the consortium of more than 100 platform providers, app makers, and websites developing the authentication standard, has been keenly aware of the drawback and has been working on programming interfaces that will make the passkey syncing more flexible. A recent teardown of the Google password manager by Android Authority shows that developers are actively implementing import/export tools, although the company has yet to provide any timeline for their general availability. (Earlier this year, the Google password manager added functionality to transfer passwords to iOS apps, but the process is clunky.) A recent update from FIDO shows that a large roster of companies are participating in the development, including Dashlane, 1Password, Bitwarden, Devolutions, NordPass, and Okta.

arstechnica.com EN 2025 Apple passkeys import export FIDO
4946 links
Shaarli - Le gestionnaire de marque-pages personnel, minimaliste, et sans base de données par la communauté Shaarli - Theme by kalvn