theguardian.com
Hannah Devlin and Tom Burgis
Sat 14 Mar 2026 07.00 CET
Exclusive: Guardian investigation finds data from flagship medical research leaked dozens of times
Confidential health data has been exposed online on dozens of occasions, a Guardian investigation can reveal, raising questions about the safeguarding of patient records by one of the UK’s flagship medical research projects.
UK Biobank, which holds the medical records of 500,000 British volunteers, is one of the world’s most comprehensive stores of health information and is credited with driving breakthroughs in cancer, dementia and diabetes research. But scientists approved to access Biobank’s sensitive data appear to have sometimes been cavalier about its security.
The files, which seem to have been inadvertently posted online by researchers using the data, do not include names or addresses, but they may still pose privacy concerns. One dataset found by the Guardian contained millions of hospital diagnoses and associated dates for more than 400,000 participants.
With the consent of a Biobank volunteer, the Guardian was able to pinpoint what appeared to be extensive hospital diagnosis records for the volunteer, using only their month and year of birth and details of a major surgery they had undergone.
"The file was very detailed and it felt like a gross invasion of privacy even to glance at
Data expert"
One data expert said the scale and persistence of the problem was “shocking” at a time when AI and social media were making it ever easier to cross-reference information online.
UK Biobank rejected the concerns, saying that no identifying data, such as names and addresses, were provided to researchers.
In a statement, Prof Sir Rory Collins, the chief executive of UK Biobank, said: “We have never seen any evidence of any UK Biobank participant being re-identified by others.”
‘They said they would hold our data securely’
Founded in 2003 by the Department of Health and medical research charities, UK Biobank holds genome sequences, scans, blood samples and lifestyle information of 500,000 volunteers. Last month, the government extended Biobank’s access to volunteers’ GP records.
Scientists at universities and private companies across the world apply for access and, until late 2024, were free to download data directly on to their own computer systems.
Before this point, data had already been inadvertently published online, and Biobank appears still to be grappling with the problem.
The issue emerged because journals and funders increasingly require researchers to publish the code they have used to analyse large datasets. When intending to upload code, some researchers have also accidentally published partial or entire Biobank datasets to GitHub, a popular online code-sharing platform. UK Biobank prohibits researchers from sharing data outside their systems and says it has introduced further training for all researchers.
In the past year, the data leaks appear to have become a more urgent concern to UK Biobank. Between July and December 2025, it issued 80 legal notices to GitHub, which has complied with requests to remove data from the internet. Yet much still remains available.
Some of the data files contain just patient IDs or test results for small numbers of participants; others are more extensive. One dataset found online by the Guardian in January contained hospital diagnoses and associated diagnosis dates for about 413,000 participants, along with their sex and month and year of birth.
A data expert who reviewed the file said: “It sent shivers down my spine to even open. I deleted the file immediately. It was very detailed and felt like a gross invasion of privacy even to glance at.”
To test the risk of re-identification, the Guardian approached several Biobank volunteers, two of whom had undergone medical procedures within the timeframe covered by the data and agreed to share these details with an external data scientist.
One volunteer, who provided treatment dates for a fracture and seizure, could not be located in the dataset. A second volunteer, a woman in her 70s, shared her month and year of birth and the month and year she had a hysterectomy. Only one person in the dataset matched these details. The apparent match was corroborated by five other diagnoses from the records that the volunteer had not initially disclosed.
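The matching procedure described above amounts to filtering on a handful of quasi-identifiers. A minimal sketch (with invented records and field names, not the actual Biobank data) shows how few attributes are needed before only one candidate remains:

```python
# Illustrative quasi-identifier matching. Records, IDs, and field names
# are hypothetical and do not reflect any real Biobank export.
records = [
    {"pid": 1001, "sex": "F", "birth": "1949-03",
     "diagnoses": {"hysterectomy": "2011-06"}},
    {"pid": 1002, "sex": "F", "birth": "1949-03",
     "diagnoses": {"fracture": "2011-06"}},
    {"pid": 1003, "sex": "M", "birth": "1950-01",
     "diagnoses": {"hysterectomy": "2011-06"}},
]

def match(records, birth, procedure, when):
    """Return records whose birth month/year and procedure date both match."""
    return [r for r in records
            if r["birth"] == birth and r["diagnoses"].get(procedure) == when]

hits = match(records, "1949-03", "hysterectomy", "2011-06")
if len(hits) == 1:
    # A unique hit means the "anonymous" record is effectively re-identified,
    # and every other diagnosis in it is exposed.
    print("unique match:", hits[0]["pid"])
```

With just month/year of birth and one procedure date, the filter narrows three records to one; real datasets are larger, but each additional attribute shrinks the candidate pool rapidly.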
“Effectively you were rehearsing the main parts of my medical history to me without me having given you any information at all. I didn’t expect that,” the volunteer said.
The woman said she was not too concerned about her own data being exposed and intended to remain a participant, saying that she viewed UK Biobank’s work as “extremely important”. But, she added: “I’m more concerned about whether Biobank has broken its agreement with people. They said they would hold our data securely … I just feel as though that has to come into the equation.”
UK Biobank said the re-identification scenario tested by the Guardian did not highlight a privacy risk because without additional information it would be impossible to identify individuals.
A Biobank spokesperson said: “As we have communicated to our participants, including on our website: ‘If a participant puts information that reveals something about their health and identity, such as genealogy data, on a public website, this could make it possible for their identity to be discovered by cross-referencing UK Biobank research data.’
“You have simply demonstrated why we tell participants not to do this.”
The spokesperson added that Biobank had taken extensive measures to protect participants’ privacy, including proactively searching GitHub, contacting researchers directly and issuing legal takedown notices, actions that it said had led to about 500 repositories being removed. Many of these, it said, contained only patient IDs, not health data.
"The idea they can rely on volunteers never putting any other information out about themselves is entirely unreasonable
Prof Felix Ritchie"
‘There are tensions between driving research with data and protecting privacy’
Privacy experts said UK Biobank’s approach appeared at odds with the reality that many people, reasonably, shared some health information online and that in an age of AI this could readily be identified and cross-referenced.
“Are these people aware that the internet exists?” asked Prof Felix Ritchie, an economist at the University of the West of England. “The idea that they can rely on their volunteers never putting any other information out there about themselves is an entirely unreasonable thing to expect.”
Dr Luc Rocher, associate professor at the Oxford Internet Institute, who reviewed several Biobank datasets found online, said that removing identifiers often did not guarantee anonymity and that simply knowing a person’s birthday and, say, the date they broke a leg might be enough to pinpoint their record with high confidence.
“Once identified, that record could reveal sensitive information such as a psychiatric diagnosis, an HIV test result, or a history of drug abuse,” they said.
Prof Niels Peek, professor of data science and healthcare improvement at the University of Cambridge, said the scale of the problem was “shocking”. “If it had happened once or 10 times I’d probably say: ‘It’s not great that it’s happened but at the same time zero risk is impossible,’” he said. “Hundreds. That’s a little bit too much.”
In Peek’s view, Biobank’s actions show it has taken the issue seriously and “done everything that one can reasonably expect”. But, he added: “The scale and persistence with which this has happened demonstrates that there are huge tensions between the ambition to drive health research with data at scale and the legal and ethical imperative to protect people’s privacy.”
Experts questioned whether Biobank will be able to fully regain control of the data released online. Despite researchers and GitHub having taken down most of the offending repositories in response to Biobank’s requests, many of the relevant files remained available on a code archive website until shortly before publication.
TEE.fail: Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
With the increasing popularity of remote computation like cloud computing, users are increasingly losing control over their data, uploading it to remote servers that they do not control. Trusted Execution Environments (TEEs) aim to reduce this trust, offering users promises such as privacy and integrity of their data as well as correctness of computation. With the introduction of TEEs and Confidential Computing features to server hardware offered by Intel, AMD, and Nvidia, modern TEE implementations aim to provide hardware-backed integrity and confidentiality to entire virtual machines or GPUs, even when attackers have full control over the system's software, for example via root or hypervisor access. Over the past few years, TEEs have been used to execute confidential cryptocurrency transactions, train proprietary AI models, protect end-to-end encrypted chats, and more.
In this work, we show that the security guarantees of modern TEE offerings by Intel and AMD can be broken cheaply and easily, by building a memory interposition device that allows attackers to physically inspect all memory traffic inside a DDR5 server. Worse still, despite the increased complexity and speed of DDR5 memory, such an interposition device can be built using only off-the-shelf electronic equipment. This allows us, for the first time, to extract cryptographic keys from Intel TDX and AMD SEV-SNP with Ciphertext Hiding, including in some cases secret attestation keys from fully updated machines in trusted status. Beyond breaking CPU-based TEEs, we also show how extracted attestation keys can be used to compromise Nvidia's GPU Confidential Computing, allowing attackers to run AI workloads without any TEE protections. Finally, we examine the resilience of existing deployments to TEE compromises, showing how extracted attestation keys could potentially be used by attackers to extract millions of dollars of profit from various cryptocurrency and cloud compute services.
cetas.turing.ac.uk/ Research Report
As AI increasingly shapes the global economic and security landscape, China’s ambitions for global AI dominance are coming into focus. This CETaS Research Report, co-authored with Adarga and the International Institute for Strategic Studies, explores the mechanisms through which China is strengthening its domestic AI ecosystem and influencing international AI policy discourse. The state, industry and academia all play a part in the process, with China’s various regulatory interventions and AI security research trajectories linked to government priorities. The country’s AI security governance is iterative and is rapidly evolving: it has moved from having almost no AI-specific regulations to developing a layered framework of laws, guidelines and standards in just five years. In this context, the report synthesises open-source research and millions of English- and Chinese-language data points to understand China’s strategic position in global AI competition and its approach to AI security.
Key Judgements
China’s political leadership views AI as one of several technologies that will enable the country to achieve global strategic dominance. This aligns closely with President Xi’s long-term strategy of leveraging technological revolutions to establish geopolitical strength. China has pursued AI leadership through a blend of state intervention and robust private-sector innovation. This nuanced approach challenges narratives of total government control, demonstrating significant autonomy and flexibility within China’s AI ecosystem. Notably, the development and launch of the DeepSeek-R1 model underscored China's ability to overcome significant economic barriers and technological restrictions, and almost certainly caught China’s political leadership by surprise – along with Western chip companies.
While the Chinese government retains ultimate control of the most strategically significant AI policy decisions, it is an oversimplification to describe this model as entirely centrally controlled. Regional authorities also play significant roles, leading to a decentralised landscape featuring multiple hubs and intense private sector competition, which gives rise to new competitors such as DeepSeek. In the coming years, the Chinese government will almost certainly increase its influence over AI development through closer collaboration with industry and academia. This will include shaping regulation, developing technical standards and providing preferential access to funding and resources.
China's AI regulatory model has evolved incrementally, but evidence suggests the country is moving towards more coherent AI legislation. AI governance responsibilities in China remain dispersed across multiple organisations. However, since February 2025, the China AI Safety and Development Association (CnAISDA) has become what China describes as its counterpart to the AI Security Institute. This organisation consolidates several existing institutions but does not appear to carry out independent AI testing and evaluation.
The Chinese government has integrated wider political and social priorities into AI governance frameworks, emphasising what it describes as “controllable AI” – a concept interpreted uniquely within the Chinese context. These broader priorities directly shape China’s technical and regulatory approaches to AI security. Compared to international competitors, China’s AI security policy places particular emphasis on the early stages of AI model development through stringent controls on pre-training data and onerous registration requirements. Close data sharing between the Chinese government and domestic AI champions, such as Alibaba’s City Brain, facilitates rapid innovation but would almost certainly encounter privacy and surveillance concerns if attempted elsewhere.
The geographical distribution of China's AI ecosystem reveals the strategic clustering of resources, talent and institutions. Cities such as Beijing, Hangzhou and Shenzhen have developed unique ecosystems that attract significant investments and foster innovation through supportive local policies, including subsidies, incentives and strategic infrastructure development. This regional specialisation emerged from long-standing Chinese industrial policy rather than short-term incentives.
China has achieved significant improvements in domestic AI education. It is further strengthening its domestic AI talent pool as top-tier AI researchers increasingly choose to remain in or return to China, due to increasingly attractive career opportunities within China and escalating geopolitical tensions between China and the US. Chinese institutions have significantly expanded domestic talent pools, particularly through highly selective undergraduate and postgraduate programmes. These efforts have substantially reduced dependence on international expertise, although many key executives and researchers continue to benefit from an international education.
Senior scientists hold considerable influence over China’s AI policymaking process, frequently serving on government advisory panels. This stands in contrast to the US, where corporate tech executives tend to have greater influence over AI policy decisions.
Government support provides substantial benefits to China-based tech companies. China’s government actively steers AI development, while the US lets the private sector lead (with the government in a supporting role) and the EU emphasises regulating outcomes and funding research for the public good. This means that China’s AI ventures often have easier access to capital and support for riskier projects, while a tightly controlled information environment mitigates reputational risk.
US export controls have had a limited impact on China’s AI development. Although export controls have achieved some intended effects, they have also inadvertently stimulated innovation within certain sectors, forcing companies to do more with less and resulting in more efficient models that may even outperform their Western counterparts. Chinese AI companies such as SenseTime and DeepSeek continue to thrive despite their limited access to advanced US semiconductors.
US-designated terrorist organization ELN oversees a vast digital operation that promotes pro-Kremlin and anti-US content.
The National Liberation Army (ELN), a Colombian armed group that also holds influence in Venezuela, has built a digital strategy that involves branding themselves as media outlets to build credibility, overseeing a diffuse cross-platform operation, and using these wide-ranging digital assets to amplify Russian, Iranian, Venezuelan, and Cuban narratives that attack the interests of the United States, the European Union (EU), and their allies.
In the 1960s, the ELN emerged as a Colombian nationalist armed movement ideologically rooted in Marxism-Leninism, liberation theology, and the Cuban revolution. With an army estimated to have 2,500 to 6,000 members, the ELN is Colombia’s oldest and largest active guerrilla group, with its operation extending into Venezuela. The ELN has maintained a strategic online presence for over a decade to advance its propaganda and maintain operational legitimacy.
The organization, which has previously engaged in peace talks with the Colombian state, has carried out criminal activities in Colombia and Venezuela, such as killings, kidnappings, extortions, and the recruitment of minors. After successive military and financial crises in the 1990s, the armed group abandoned its historical reluctance to participate in drug trafficking. The diversification into illegal funding has meant that their armed clashes target criminal groups, in addition to their primary ideological enemy, the state forces.
In the north-eastern Catatumbo area, considered one of the enclaves of international cocaine trafficking, the group has been involved in one of the bloodiest confrontations seen in Colombia in 2025. Since January 15, the violence has left 126 people dead and at least 66,000 displaced, and has further strained the group’s engagement with the latest round of peace talks initiated by the current Colombian government. In that region, the ELN has battled the state and other armed groups, such as paramilitaries and rival guerrillas, for control of the area bordering Venezuela, in an effort to link it to the ELN’s other territories of influence in Colombia, such as the north and, at the other extreme, the western regions of Chocó and Antioquia.
The US Department of State reaffirmed the ELN’s designation as a terrorist organization in its March 5, 2025, update of the Foreign Terrorist Organizations (FTOs) list. This classification theoretically prevents the group from operating on major social media platforms, as US social media platforms, such as Meta, YouTube, and X, maintain policies prohibiting terrorist organizations from using their services. However, the DFRLab found that the group’s substantial digital footprint spans over one hundred entities across websites, social media, closed messaging apps, and podcast services.
Today was an eventful day thanks to many interesting blog posts, e.g. from my friends at watchTowr. So I thought, why not publish a small quick-and-dirty blog post myself about a story from last week? This blog post may not be of the usual quality, but it was a good time to write it.
On May 7, 2025, the LockBit admin panel was hacked by an anonymous actor who replaced their TOR website with the text ‘Don’t do crime CRIME IS BAD xoxo from Prague’ and shared a SQL dump of their admin panel database in an archived file ‘paneldb_dump.zip’:
There is not much information available regarding the individual identified as 'xoxo from Prague' whose objective seems to be the apprehension of malicious ransomware threat actors. It is uncommon for a major ransomware organization's website to be defaced; more so for its administrative panel to be compromised. This leaked SQL database dump is significant as it offers insight into the operational methods of LockBit affiliates and the negotiation tactics they employ to secure ransom payments from their victims.
Trellix Advanced Research Center’s investigations into the leaked SQL database confirmed with high confidence that the database originates from LockBit's affiliates admin panel. This panel allows the generation of ransomware builds for victims, utilizing LockBit Black 4.0 and LockBit Green 4.0, compatible with Linux, Windows and ESXi systems, and provides access to victim negotiation chats.
The leaked SQL database dump encompasses data from December 18, 2024 to April 29, 2025, including details pertaining to LockBit adverts (aka ransomware affiliates), victim organizations, chat logs, cryptocurrency wallets and ransomware build configurations.
On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journalists who consented to the technical analysis of their cases. In this report, we discuss key findings from our forensic analyses of their devices.
Trend Research has identified Earth Lamia as an APT threat actor that exploits vulnerabilities in web applications to gain access to organizations, using various techniques for data exfiltration.
Earth Lamia develops and customizes hacking tools to evade detection, such as PULSEPACK and BypassBoss.
Earth Lamia has primarily targeted organizations in Brazil, India, and Southeast Asia since 2023. Initially focused on financial services, the group shifted to logistics and online retail, most recently focusing on IT companies, universities, and government organizations.
Trend Vision One™ detects and blocks the IOCs discussed in this blog. Trend Vision One also provides hunting queries, threat insights, and threat intelligence reports to gain rich context and the latest updates on Earth Lamia.
Introduction
We have been tracking an active intrusion set that has primarily targeted organizations in Brazil, India, and Southeast Asia since 2023. The threat actor mainly exploits SQL injection vulnerabilities discovered in web applications to access the SQL servers of targeted organizations. The actor also takes advantage of various known vulnerabilities to exploit public-facing servers. Research reports have also described their aggressive operations, tracked as REF0657, STAC6451, and CL-STA-0048. Evidence we collected during our research indicates this group is a China-nexus intrusion set, which we now track as Earth Lamia.
Earth Lamia is highly active, but our observations show that its targets have shifted over time. They targeted many organizations but focused on only a few specific industries during each period. In early 2024 and prior, we observed that most of their targets were organizations within the financial industry, specifically securities and brokerage. In the second half of 2024, they shifted their targets mainly to the logistics and online retail industries. Recently, we noticed that their targets have shifted again, to IT companies, universities, and government organizations.
Figure 1. Map of targeted countries
Earth Lamia continuously develops customized hacking tools and backdoors to improve their operations. While the actor heavily leverages open-source hacking tools to conduct their attacks, they also customize these tools to reduce the risk of detection by security software. We also discovered they have developed a previously unseen backdoor, which we named PULSEPACK. The first version of PULSEPACK was identified in Earth Lamia's attacks during August 2024. In 2025, we found an upgraded version of PULSEPACK, which uses a different protocol for C&C communication, showing they are actively developing this backdoor. In this report, we will reveal the details of Earth Lamia’s operations and share the analysis of their customized hacking tools and backdoors.
Initial access and post-exploitation TTPs
We found that Earth Lamia frequently conducted vulnerability scans to identify possible SQL injection vulnerabilities on the targets' websites. With an identified vulnerability, the actor tried to open a system shell through it to gain remote access to the victims' SQL servers. We suspect they are likely using tools like "sqlmap" to carry out these attacks against their targets. Besides the SQL injection attempts, our telemetry shows the actor also exploited the following vulnerabilities on different public-facing servers:
CVE-2017-9805: Apache Struts2 remote code execution vulnerability
CVE-2021-22205: GitLab remote code execution vulnerability
CVE-2024-9047: WordPress File Upload plugin arbitrary file access vulnerability
CVE-2024-27198: JetBrains TeamCity authentication bypass vulnerability
CVE-2024-27199: JetBrains TeamCity path traversal vulnerability
CVE-2024-51378: CyberPanel remote code execution vulnerability
CVE-2024-51567: CyberPanel remote code execution vulnerability
CVE-2024-56145: Craft CMS remote code execution vulnerability
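The SQL injection weakness that underpins many of these intrusions arises when user input is concatenated directly into a query string. A minimal sketch using Python's built-in sqlite3 (illustrative data, unrelated to any victim system) contrasts a vulnerable query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the WHERE
# clause, so the query returns every row instead of just bob's.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# SAFE: a bound parameter is treated purely as data, so the payload
# matches no row at all.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # both rows leak
print(safe)        # no match
```

Tools such as sqlmap automate the discovery of the first pattern; consistent use of bound parameters removes the injection point entirely.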
TCC on macOS isn't just an annoying prompt—it's the last line of defense between malware and your private data. Read this article to learn why.
Lately, I have been reporting many vulnerabilities in third-party applications that allowed for TCC bypass, and I have discovered that most vendors do not understand why they should care. For them, it seems like just an annoying and unnecessary prompt. Even security professionals tasked with vulnerability triage frequently struggle to understand TCC’s role in protecting macOS users’ privacy against malware.
Honestly, I don’t blame them for that because, two years ago, I also didn’t understand the purpose of those “irritating” pop-up notifications. It wasn’t until I started writing malware for macOS that I realized how much trouble TCC causes an attacker trying to actually harm a victim. I wrote this article with Application Developers in mind, so that after reading it they do not underestimate vulnerabilities that allow bypassing TCC. It is also intended for Vulnerability Researchers, to illustrate an attack vector for further research.
Join Ido Kringel and the Deep Instinct Threat Research Team in this deep dive into a recently discovered, Office-based regex evasion technique
Microsoft Office-based attacks have long been a favored tactic amongst cybercriminals— and for good reason. Attackers frequently use Office documents in cyberattacks because they are widely trusted. These files, such as Word or Excel docs, are commonly exchanged in business and personal settings. They are also capable of carrying hidden malicious code, embedded macros, and external links that execute code when opened, especially if users are tricked into enabling features like macros.
Moreover, Office documents support advanced techniques like remote template injection, obfuscated macros, and legacy features like Excel 4.0 macros. These allow attackers to bypass antivirus detection and trigger multi-stage payloads such as ransomware or information-stealing malware.
Since Office files are familiar to users and often appear legitimate (e.g., invoices, resumes, or reports), they’re also highly effective tools in phishing and social engineering attacks.
This mixture of implicit trust and advanced attack capabilities unique to Office files, along with compatibility across platforms and integration with scripting languages, makes them ideal for initiating sophisticated attacks with minimal user suspicion.
Last year, Microsoft announced the availability of three new functions that use Regular Expressions (regex) to help parse text more easily:
Regexes are sequences of characters that define search patterns, primarily used for string matching and manipulation. They enable efficient text processing by allowing complex searches, replacements, and validations based on specific criteria.
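As a rough illustration of what such functions do, sketched here with Python's re module rather than the actual Office formula syntax (the pattern and text are invented):

```python
import re

text = "Invoice INV-2024-0042 due 2024-05-31"

# Test whether a pattern occurs at all (akin to a regex 'test' function).
has_invoice = re.search(r"INV-\d{4}-\d{4}", text) is not None

# Extract the first match (akin to a regex 'extract' function).
invoice = re.search(r"INV-\d{4}-\d{4}", text).group()

# Replace matches (akin to a regex 'replace' function).
redacted = re.sub(r"\d{4}-\d{2}-\d{2}", "[date]", text)

print(has_invoice)  # True
print(invoice)      # INV-2024-0042
print(redacted)     # Invoice INV-2024-0042 due [date]
```

The same match/extract/replace trio is what makes regex support attractive to attackers: a pattern can locate and transform embedded content without any macro code that scanners are tuned to flag.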
Computer scientists at ETH Zurich discover new class of vulnerabilities in Intel processors, allowing them to break down barriers between different users of a processor using carefully crafted instruction sequences. Entire processor memory can be read by employing quick, repeated attacks.
All Intel processors since 2018 are affected by Branch Privilege Injection.
In brief
Socket's research uncovers three dangerous Go modules that contain obfuscated disk-wiping malware, threatening complete data loss.
The Go ecosystem, valued for its simplicity, transparency, and flexibility, has exploded in popularity. With over 2 million modules available, developers rely heavily on public repositories like GitHub. However, this openness is precisely what attackers exploit.
No Central Gatekeeping: Developers freely source modules directly from GitHub repositories, trusting the naming conventions implicitly.
Prime Target for Typosquatting: Minimal namespace validation enables attackers to masquerade malicious modules as popular libraries.
Introduction: The Silent Threat#
In April 2025, we detected an attack involving three malicious Go modules which employ similar obfuscation techniques:
github[.]com/truthfulpharm/prototransform
github[.]com/blankloggia/go-mcp
github[.]com/steelpoor/tlsproxy
Despite appearing legitimate, these modules contained highly obfuscated code designed to fetch and execute remote payloads. Socket’s scanners flagged the suspicious behaviors, leading us to a deeper investigation.
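One way defenders approximate typosquat detection is to compare observed import paths against an allow-list of trusted modules and flag near-matches. A minimal sketch using only the Python standard library (the allow-list and paths below are hypothetical, not the actual impersonated repositories):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(imports, allow_list, threshold=0.85):
    """Flag import paths that closely resemble, but do not equal,
    a trusted path; exact allow-list matches pass through silently."""
    flagged = []
    for path in imports:
        for good in allow_list:
            score = similarity(path, good)
            if path != good and score >= threshold:
                flagged.append((path, good, round(score, 2)))
    return flagged

# Hypothetical allow-list and observed imports (illustrative names only).
allow = ["github.com/example/prototransform"]
observed = [
    "github.com/examp1e/prototransform",   # one-character swap
    "github.com/example/prototransform",   # legitimate
]
print(flag_typosquats(observed, allow))
```

Edit-distance heuristics produce false positives on genuinely similar names, so in practice they are a triage signal rather than a verdict; Socket's own scanners combine such signals with behavioral analysis of the module contents.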
Kandji researchers uncovered and disclosed key macOS vulnerabilities over the past year. Learn how we protect customers through detection and patching.
When we discover weaknesses before attackers do, everyone wins. History has shown that vulnerabilities like Gatekeeper bypass and TCC bypass zero-days don't remain theoretical for long—both of these recent vulnerabilities were exploited in the wild by macOS malware. By investing heavily in new security research, we're helping strengthen macOS for everyone.
Once a vulnerability is reported to Apple, the fix is not always obvious. Depending on the complexity, it can take from a few months to over a year, especially if it requires major architectural changes to the operating system. Apple’s vulnerability disclosure program has been responsive and effective.
Of course, we don't just report issues and walk away. We ensure our products can detect these vulnerabilities and protect our customers from potential exploitation while waiting for official patches.
MCP tools are implicated in several new attack techniques. Here's a look at how they can be manipulated for good, such as logging tool usage and filtering unauthorized commands.
Over the last few months, there has been a lot of activity in the Model Context Protocol (MCP) space, both in terms of adoption as well as security. Developed by Anthropic, MCP has been rapidly gaining traction across the AI ecosystem. MCP allows Large Language Models (LLMs) to interface with tools and for those interfaces to be rapidly created. MCP tools allow for the rapid development of “agentic” systems, or AI systems that autonomously perform tasks.
Beyond adoption, new attack techniques have been shown to allow prompt injection via MCP tool descriptions and responses, MCP tool poisoning, rug pulls and more.
Prompt Injection is a weakness in LLMs that can be used to elicit unintended behavior, circumvent safeguards and produce potentially malicious responses. Prompt injection occurs when an attacker instructs the LLM to disregard other rules and do the attacker’s bidding. In this blog, I show how to use techniques similar to prompt injection to change the LLM’s interaction with MCP tools. Anyone conducting MCP research may find these techniques useful.
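The defensive ideas mentioned above, logging tool usage and filtering unauthorized commands, can be sketched as a thin dispatch wrapper around tool calls. This is a hypothetical illustration, not the actual MCP SDK API; the tool names and policy are invented:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-guard")

# Hypothetical allow-list policy: only these tools may be invoked.
ALLOWED_TOOLS = {"search_docs", "read_file"}

def guarded_dispatch(tool_name, args, tools):
    """Log every tool invocation and refuse any tool not on the allow-list."""
    log.info("tool call: %s args=%r", tool_name, args)
    if tool_name not in ALLOWED_TOOLS:
        # Refuse rather than execute: this is the 'filtering unauthorized
        # commands' half of the defense.
        return {"error": f"tool '{tool_name}' is not authorized"}
    return tools[tool_name](**args)

# Toy tool registry standing in for real MCP tool handlers.
tools = {"search_docs": lambda query: {"result": f"results for {query}"}}

print(guarded_dispatch("search_docs", {"query": "mcp"}, tools))
print(guarded_dispatch("delete_repo", {"name": "x"}, tools))
```

Placing the policy in the dispatch layer, rather than trusting the LLM to self-censor, means an injected prompt can at most request a tool call that the wrapper then declines.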
The Sysdig Threat Research Team (TRT) has discovered CVE-2025-32955, a now-patched vulnerability in Harden-Runner, one of the most popular GitHub Action CI/CD security tools. Exploiting this vulnerability allows an attacker to bypass Harden-Runner’s disable-sudo security mechanism, effectively evading detection within the continuous integration/continuous delivery (CI/CD) pipeline under certain conditions. To mitigate this risk, users are strongly advised to update to the latest version.
The CVE has been assigned a CVSS v3.1 base score of 6.0.
After the release of the Secure Annex ‘Monitor’ feature, I wanted to help evaluate a list of extensions that an organization I was working with had configured for monitoring. Notifications when new changes occur are great, but in security, baselines are everything!
To cut down a list of 132 extensions in use, I identified a couple of extensions that stood out because they were ‘unlisted’ in the Chrome Web Store. Unlisted extensions are not indexed by search engines and do not show up when searching the Chrome Web Store; the only way to access the extension is by knowing the URL.