theguardian.com
Hannah Devlin and Tom Burgis
Sat 14 Mar 2026 07.00 CET
Exclusive: Guardian investigation finds data from flagship medical research leaked dozens of times
Confidential health data has been exposed online on dozens of occasions, a Guardian investigation can reveal, raising questions about the safeguarding of patient records by one of the UK’s flagship medical research projects.
UK Biobank, which holds the medical records of 500,000 British volunteers, is one of the world’s most comprehensive stores of health information and is credited with driving breakthroughs in cancer, dementia and diabetes research. But scientists approved to access Biobank’s sensitive data appear to have sometimes been cavalier about its security.
The files, which seem to have been inadvertently posted online by researchers using the data, do not include names or addresses, but they may still pose privacy concerns. One dataset found by the Guardian contained millions of hospital diagnoses and associated dates for more than 400,000 participants.
With the consent of a Biobank volunteer, the Guardian was able to pinpoint what appeared to be extensive hospital diagnosis records for the volunteer, using only their month and year of birth and details of a major surgery they had undergone.
"The file was very detailed and it felt like a gross invasion of privacy even to glance at
Data expert"
One data expert said the scale and persistence of the problem was “shocking” at a time when AI and social media were making it ever easier to cross-reference information online.
UK Biobank rejected the concerns, saying that no identifying data, such as names and addresses, were provided to researchers.
In a statement, Prof Sir Rory Collins, the chief executive of UK Biobank, said: “We have never seen any evidence of any UK Biobank participant being re-identified by others.”
‘They said they would hold our data securely’
Founded in 2003 by the Department of Health and medical research charities, UK Biobank holds genome sequences, scans, blood samples and lifestyle information of 500,000 volunteers. Last month, the government extended Biobank’s access to volunteers’ GP records.
Scientists at universities and private companies across the world apply for access and, until late 2024, were free to download data directly on to their own computer systems.
Before this point, data had already been inadvertently published online, and Biobank still appears to be grappling with the problem.
The issue emerged because journals and funders increasingly require researchers to publish the code they have used to analyse large datasets. When intending to upload code, some researchers have also accidentally published partial or entire Biobank datasets to GitHub, a popular online code-sharing platform. UK Biobank prohibits researchers from sharing data outside their systems and says it has introduced further training for all researchers.
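One practical safeguard on the researcher side is a repository hook that refuses to commit anything that looks like a dataset. Below is a minimal, hypothetical sketch of such a git pre-commit hook in Python; the extension list and size threshold are illustrative assumptions, not UK Biobank or GitHub guidance.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: block files that look like datasets.
# The extension list and size threshold are illustrative assumptions.
import os
import subprocess
import sys

DATA_EXTENSIONS = {".csv", ".tsv", ".parquet", ".feather", ".dta", ".sav", ".rda"}
MAX_BYTES = 1_000_000  # flag any staged file larger than ~1 MB

def staged_files():
    """List files added or modified in the commit being prepared."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=AM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main():
    offenders = []
    for path in staged_files():
        ext = os.path.splitext(path)[1].lower()
        too_big = os.path.exists(path) and os.path.getsize(path) > MAX_BYTES
        if ext in DATA_EXTENSIONS or too_big:
            offenders.append(path)
    if offenders:
        print("Refusing to commit files that look like datasets:")
        for path in offenders:
            print("  " + path)
        print("Unstage them with: git restore --staged <file>")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Installed as .git/hooks/pre-commit and made executable, it runs before every commit; a researcher can still override it deliberately with git commit --no-verify.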
In the past year, the data leaks appear to have become a more urgent concern to UK Biobank. Between July and December 2025, it issued 80 legal notices to GitHub, which has complied with requests to remove data from the internet. Yet much still remains available.
Some of the data files contain just patient IDs or test results for small numbers of participants; others are more extensive. One dataset found online by the Guardian in January contained hospital diagnoses and associated diagnosis dates for about 413,000 participants, along with their sex and month and year of birth.
A data expert who reviewed the file said: “It sent shivers down my spine to even open. I deleted the file immediately. It was very detailed and felt like a gross invasion of privacy even to glance at.”
To test the risk of re-identification, the Guardian approached several Biobank volunteers, two of whom had undergone medical procedures in the timeframe covered by the data and agreed to share these details with an external data scientist.
One volunteer, who provided treatment dates for a fracture and seizure, could not be located in the dataset. A second volunteer, a woman in her 70s, shared her month and year of birth and the month and year she had a hysterectomy. Only one person in the dataset matched these details. The apparent match was corroborated by five other diagnoses from the records that the volunteer had not initially disclosed.
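What the Guardian performed is, in effect, a quasi-identifier linkage attack: filter the dataset on a few attributes an outsider might know and count the records that survive. A minimal sketch of the idea follows, assuming a pandas DataFrame with invented column names (these are not real Biobank field names).

```python
# Sketch of a quasi-identifier linkage test; column names are invented.
import pandas as pd

def matching_ids(df: pd.DataFrame, birth_month: str,
                 diagnosis: str, diag_month: str) -> pd.Series:
    """Return participant IDs whose records match all three attributes."""
    hits = df[
        (df["birth_month"] == birth_month)   # e.g. "1950-03"
        & (df["diagnosis"] == diagnosis)     # e.g. "hysterectomy"
        & (df["diag_month"] == diag_month)   # e.g. "2014-06"
    ]
    return hits["participant_id"].drop_duplicates()

# If exactly one ID survives, the record is effectively pinpointed;
# each further known diagnosis matching the same ID corroborates it.
```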
“Effectively you were rehearsing the main parts of my medical history to me without me having given you any information at all. I didn’t expect that,” the volunteer said.
The woman said she was not too concerned about her own data being exposed and intended to remain a participant, saying that she viewed UK Biobank’s work as “extremely important”. But, she added: “I’m more concerned about whether Biobank has broken its agreement with people. They said they would hold our data securely … I just feel as though that has to come into the equation.”
UK Biobank said the re-identification scenario tested by the Guardian did not highlight a privacy risk because without additional information it would be impossible to identify individuals.
A Biobank spokesperson said: “As we have communicated to our participants, including on our website: ‘If a participant puts information that reveals something about their health and identity, such as genealogy data, on a public website, this could make it possible for their identity to be discovered by cross-referencing UK Biobank research data.’
“You have simply demonstrated why we tell participants not to do this.”
The spokesperson added that Biobank had taken extensive measures to protect participants’ privacy, including proactively searching GitHub, contacting researchers directly and issuing legal takedown notices, actions which they said had led to about 500 repositories being removed. Many of these, it said, contained only patient IDs, not health data.
"The idea they can rely on volunteers never putting any other information out about themselves is entirely unreasonable
Prof Felix Ritchie"
‘There are tensions between driving research with data and protecting privacy’
Privacy experts said UK Biobank’s approach appeared at odds with the reality that many people, reasonably, shared some health information online and that in an age of AI this could readily be identified and cross-referenced.
“Are these people aware that the internet exists?” asked Prof Felix Ritchie, an economist at the University of the West of England. “The idea that they can rely on their volunteers never putting any other information out there about themselves is an entirely unreasonable thing to expect.”
Dr Luc Rocher, associate professor at the Oxford Internet Institute, who reviewed several Biobank datasets found online, said that removing identifiers often did not guarantee anonymity and that simply knowing a person’s birthday and, say, the date they broke a leg might be enough to pinpoint their record with high confidence.
“Once identified, that record could reveal sensitive information such as a psychiatric diagnosis, an HIV test result, or a history of drug abuse,” they said.
Prof Niels Peek, professor of data science and healthcare improvement at the University of Cambridge, said the scale of the problem was “shocking”. “If it had happened once or 10 times I’d probably say: ‘It’s not great that it’s happened but at the same time zero risk is impossible,’” he said. “Hundreds. That’s a little bit too much.”
In Peek’s view, Biobank’s actions show it has taken the issue seriously and “done everything that one can reasonably expect”. But, he added: “The scale and persistence with which this has happened demonstrates that there are huge tensions between the ambition to drive health research with data at scale and the legal and ethical imperative to protect people’s privacy.”
Experts questioned whether Biobank would be able to fully regain control of the data released online. Despite researchers and GitHub having taken down most of the offending repositories in response to Biobank’s requests, many of the relevant files remained available on a code archive website until shortly before publication.
theguardian.com
Daniel Boffey, chief reporter
Sat 7 Mar 2026 12.00 CET
Iran’s targeting of commercial datacentres in the UAE and Bahrain signals a new frontier in asymmetric warfare
It is believed to be a first: the deliberate targeting of a commercial datacentre by the armed forces of a country at war.
At 4.30am on Sunday, what is thought to have been an Iranian Shahed 136 drone struck an Amazon Web Services datacentre in the United Arab Emirates, setting off a devastating fire and forcing a shutdown of the power supply. Further damage was inflicted as attempts were made to suppress the flames with water.
Soon after, a second datacentre owned by the US tech company was hit. Then a third was said to be in trouble, this time in Bahrain, after an Iranian drone turned into a fireball on striking land nearby.
Iranian state TV has claimed that Iran’s Islamic Revolutionary Guard Corps launched the attack “to identify the role of these centres in supporting the enemy’s military and intelligence activities”.
The network built by Jeff Bezos’s company could withstand one of its regional centres being taken out of action but not a second.
The coordinated strike had an immediate impact.
Millions of people in Dubai and Abu Dhabi woke up on Monday unable to pay for a taxi, order a food delivery, or check their bank balance on their mobile apps.
Whether there was a military impact is unclear – but the strikes swiftly brought the war directly into the lives of 11 million people in the UAE, nine out of 10 of whom are foreign nationals. Amazon has advised its clients to secure their data away from the region.
Perhaps more significantly, the strikes on this ‘next generation’ war target are raising questions about whether the UAE can build on its plans – and on many billions of pounds’ worth of US and other foreign investment – to exploit what it hopes will be the ‘new oil’: artificial intelligence (AI).
“The UAE really wants to be a major AI player,” said Chris McGuire, an AI and technology competition expert who served as a White House national security council official in Joe Biden’s administration. “Their government has very strong conviction about this technology, probably stronger than any other government in the world, and if there’s going to start to be security questions around that, then they’re going to have to resolve those very quickly, somehow.”
A datacentre is a facility designed to store, manage and process digital data.
The growing demand by businesses for artificial intelligence (AI) and cloud computing – where firms have a pay-as-you-go relationship with the providers of servers, storage and software – is driving the need for centres that have significantly more computational power.
These centres require a ready and consistent supply of very cheap electricity.
The UAE, as it seeks to diversify away from fossil fuels, has been able to point out that it has this in spades, along with a huge sovereign wealth fund ready to invest and subsidise projects.
According to Turner & Townsend’s Global Data Centre Index, the global cost of datacentre construction rose by 5.5% in 2025 – yet the UAE ranks 44th out of 52 in the league table of most expensive unit cost per watt, making it one of the cheaper places to build.
The UAE’s geography also makes it a critical subsea cable landing point, providing access between Europe and Asia.
Then there are the geopolitics, with the US keen to keep the Gulf states away from Chinese technology.
A four-day tour by Donald Trump of Saudi Arabia, Qatar, and the UAE last May coincided with the announcement of the construction of a vast new AI campus – a partnership between the UAE and the US – for the purpose of training powerful AI models.
As part of the deal, the Trump administration eased restrictions on advanced chips sales to the Gulf. OpenAI has said the planned UAE campus could eventually serve half the world’s population.
McGuire said that this week’s events could be pivotal. “If we’re going to have large-scale datacentres built out in the Middle East, we’re going to have to get pretty serious about how we protect them,” he said. “We think about how to protect it right now, and we’re saying: ‘Oh, it means you have guards and good cybersecurity.’
“If you’re actually going to double down in the Middle East, maybe it means missile defence on datacentres.”
Sean Gorman, the chief executive of Zephr.xyz, a technology firm that is a contractor to the US air force, said that the Gulf states’ ambitions would have likely been in the thoughts of military planners in Tehran.
He said: “I believe the Iranians are building on tactics they’ve seen be effective in the Ukraine conflict. Asymmetric warfare that can target critical infrastructure creates pressure on adversaries by disrupting public safety and economic activity.
“UAE and Bahrain have both been positioning themselves as global AI hubs by investing heavily in datacentres and fibre infrastructure to connect them to the rest of the world.
“If they can disrupt that infrastructure, it puts their strategic position under risk while also disrupting operations that are important to the economy. In addition, there could be an adjacent impact of defence operations, but that would likely be more luck than the primary objective.”
Gorman said the UAE had a “long track record of managing regional instability without becoming party to it” but that there were a range of risks apart from that from the air.
He said: “The UAE also has one of the most diversified submarine-cable landing environments in the Middle East, but the diversity is geographically uneven.
“There are multiple landing stations and cable systems, but many of them concentrate on the east coast at Fujairah, which creates a partial geographic chokepoint.
“In addition, there is a specific risk from Iranian cyber operations targeting US-aligned digital infrastructure in the Gulf, which presents a more concrete near-term threat to datacentre and cloud operations than geography in the traditional sense.”
Gorman said the concern would be if Iran demonstrated any further capability to target Gulf digital infrastructure as part of its retaliation.
He said: “The UAE will need to show partners that its infrastructure is defensible. This is the question investors should be asking, not whether the broader AI ambition survives.”
Vili Lehdonvirta, professor of technology policy at Aalto University and senior fellow at the Oxford Internet Institute, University of Oxford, said there were significant costs to such defences but that the danger was real.
The former chair of the US National Security Commission on AI, Eric Schmidt, suggested last year that a country falling behind in an AI arms race could bomb their adversary’s datacentres.
Lehdonvirta said he suspected that no one actually believed that datacentres “would get bombed despite such scenarios being openly floated for some time”.
“If that’s the case then from now on we might perhaps see operators of prominent datacentres like AWS [Amazon Web Services] investing in air defence, similar to how shipping operators armed up against pirates,” he said.
Where might Iran fruitfully strike next?
“The Iranians will be well aware that the fibreoptic cables that connect these datacentres to the United States and to the rest of the world run through the strait of Hormuz,” Lehdonvirta said, “although they’ll be closely watched by the US and allied forces.”
theguardian.com
Tess McClure
Tue 2 Dec 2025 03.02 CET
For days before the explosions began, the business park had been emptying out. When the bombs went off, they took down empty office blocks and demolished echoing, multi-cuisine food halls. Dynamite toppled a four-storey hospital, silent karaoke complexes, deserted gyms and dorm rooms.
So came the end of KK Park, one of south-east Asia’s most infamous “scam centres”, press releases from Myanmar’s junta declared. The facility had held tens of thousands of people, forced to relentlessly defraud people around the world. Now, it was being levelled piece by piece.
But the park’s operators were long gone: apparently tipped off that a crackdown was coming, they were busily setting up shop elsewhere. More than 1,000 labourers had managed to flee across the border, and some 2,000 others had been detained. But up to 20,000 labourers, likely trafficked and brutalised, had disappeared. Away from the junta’s cameras, scam centres like KK park have continued to thrive.
So monolithic has the multi-billion dollar global scam industry become that experts say we are entering the era of the “scam state”. Like the narco-state, the term refers to countries where an illicit industry has dug its tentacles deep into legitimate institutions, reshaping the economy, corrupting governments and establishing state reliance on an illegal network.
The raids on KK Park were the latest in a series of highly publicised crackdowns on scam centres across south-east Asia. But regional analysts say these are largely performative or target middling players, amounting to “political theatre” by officials who are under international pressure to crack down on them but have little interest in eliminating a wildly profitable sector.
“It’s a way of playing Whack-a-Mole, where you don’t want to hit a mole,” says Jacob Sims, visiting fellow at Harvard University’s Asia Centre and expert on transnational and cybercrime in the Mekong.
In the past five years scamming, says Sims, has mutated from “small online fraud rings into an industrial-scale political economy”.
“In terms of gross GDP, it’s the dominant economic engine for the entire Mekong sub-region,” he says, “And that means that it’s one of the dominant – if not the dominant – political engine.”
Government spokespeople in Myanmar, Cambodia and Laos did not respond to questions from the Guardian, but Myanmar’s military has previously said it is “working to completely eradicate scam activities from their roots”. The Cambodian government has also described allegations it is home to one of “the world’s largest cybercrime networks supported by the powerful” as “baseless” and “irresponsible”.
Morphing in less than a decade from a world of misspelled emails and implausible Nigerian princes, the industry has become a vast, sophisticated system, raking in tens of billions from victims around the world.
At its heart are “pig-butchering” scams – where a relationship is cultivated online before the scammer pushes their victim to part with their money, often via an “investment” in cryptocurrency. Scammers have harnessed increasingly sophisticated technology to fool targets: using generative AI to translate and drive conversations, deepfake technology to conduct video calls, and mirrored websites to mimic real investment exchanges. One survey found victims were conned for an average of $155,000 (£117,400) each. Most reported losing more than half their net worth.
Those huge potential profits have driven the industrialisation of the scam industry. Estimates of the industry’s global size now range from $70bn into the hundreds of billions – a scale that would put it on a par with the global illicit drug trade. The centres are typically run by transnational criminal networks, often originating from China, but their ground zero has been south-east Asia.
By late 2024, cyber scamming operations in Mekong countries were generating an estimated $44bn (£33.4bn) a year, equivalent to about 40% of the combined formal economy. That figure is considered conservative, and on the rise. “This is a massive growth area,” says Jason Tower, from the Global Initiative against Transnational Organised Crime. “This has become a global illicit market only since 2021 – and we’re now talking about a $70bn-plus-per-year illicit market. If you go back to 2020, it was nowhere near that size.”
In Cambodia, one company alleged by the US government to run scam compounds across the country had $15bn of cryptocurrency targeted in a Department of Justice (DOJ) seizure last month – funds equal to almost half of Cambodia’s economy.
With such huge potential profits, infrastructure has rapidly been built to facilitate the industry. The hubs thrive in conflict zones and along lawless, poorly regulated border areas. In Laos, officials have told local media that around 400 such operations are running in the Golden Triangle special economic zone. Cyber Scam Monitor – a collective that monitors scamming Telegram channels, police reports, media and satellite data to identify scam compounds – has located 253 suspected sites across Cambodia. Many are enormous, and operate in public view.
The scale of the compounds is itself an indication of how much the states hosting them have been compromised, experts claim.
“These are massive pieces of infrastructure, set up very publicly. You can go to borders and observe them. You can even walk into some of them,” says Tower. “The fact this is happening in a very public way shows just the extreme level of impunity – and the extent to which states are not only tolerating this, but actually, these criminal actors are becoming state embedded.”
Thailand’s deputy finance minister resigned this October following allegations of links to scam operations in Cambodia, which he denies. Chen Zhi, who was recently hit by joint UK and US sanctions for allegedly masterminding the Prince Group scam network, was an adviser to Cambodia’s prime minister. The Prince Group said it “categorically rejects” claims the company or its chairman have engaged in any unlawful activity. In Myanmar, scam centres have become a key financial flow for armed groups. In the Philippines, ex-mayor Alice Guo, who ran a massive scam centre while in office, has just been sentenced to life in prison.
Across south-east Asia, scam masterminds are “operating at a very high level: they’re obtaining diplomatic credentials, they’re becoming advisers … It is massive in terms of the level of state involvement and co-optation,” Tower says.
“It’s quite unprecedented that you have an illicit market of this nature, that is causing global harm, where there’s blatant impunity, and it’s happening in this public way.”
The Guardian
Dan Milmo, global technology editor
Wed 3 Dec 2025 07.00 CET
Researchers uncovered 354 AI-focused accounts that had accumulated 4.5bn views in a month
Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, or at exactly the same time each day – an indication of automation – and most of the accounts were launched at the beginning of the year.
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. More than 100m pieces of content are uploaded to the platform every day, indicating that labelled AI material is a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
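A back-of-envelope comparison shows why labelled AI material is a small share of the catalogue; the one-year window below is an assumption made for illustration, since TikTok’s 1.3bn figure is cumulative.

```python
# Back-of-envelope only; the one-year comparison window is an assumption.
labelled_ai_posts = 1.3e9        # "at least 1.3bn AI-generated posts"
daily_uploads = 100e6            # "more than 100m pieces of content" a day
yearly_uploads = daily_uploads * 365
print(f"{labelled_ai_posts / yearly_uploads:.1%}")  # about 3.6%
```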
Of the accounts that posted content most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found the accounts did not label half of the content they posted and less than 2% carried the TikTok label for AI content – which the nonprofit warned could increase the material’s deceptive potential. Researchers added that the accounts sometimes escape TikTok’s moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts revealed in the study have subsequently been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments with anti-immigrant narratives and material sexualising female bodies, including girls who appeared to be underage. The female body category accounted for half of the top 10 most active accounts, said AI Forensics, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts have been taken down by TikTok after they were referred to the platform by the Guardian.
TikTok said the report’s claims were “unsubstantiated” and the researchers had singled it out for an issue that was affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
The most popular accounts highlighted by AI Forensics in terms of views had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter up people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.
theguardian.com
Harry Davies and Yuval Abraham in Jerusalem
Wed 29 Oct 2025 14.15 CET
The tech giants agreed to extraordinary terms to clinch a lucrative contract with the Israeli government, documents show
When Google and Amazon negotiated a major $1.2bn cloud-computing deal in 2021, their customer – the Israeli government – had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the “winking mechanism”.
The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel’s concerns that data it moves into the global corporations’ cloud platforms could end up in the hands of foreign law enforcement authorities.
Like other big tech companies, Google and Amazon’s cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations.
This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer that their information has been turned over, either because the law enforcement agency has the power to demand secrecy or because a court has ordered them to stay silent.
For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when they have disclosed Israeli data to foreign courts or investigators.
To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call.
Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox “controls” contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon’s cloud businesses have denied evading any legal obligations.
The strict controls include measures that prohibit the US companies from restricting how an array of Israeli government agencies, security services and military units use their cloud services. According to the deal’s terms, the companies cannot suspend or withdraw Israel’s access to its technology, even if it’s found to have violated their terms of service.
Israeli officials inserted the controls to counter a series of anticipated threats. They feared Google or Amazon might bow to employee or shareholder pressure and withdraw Israel’s access to its products and services if linked to human rights abuses in the occupied Palestinian territories.
They were also concerned the companies could be vulnerable to overseas legal action, particularly in cases relating to the use of the technology in the military occupation of the West Bank and Gaza.
The terms of the Nimbus deal would appear to prohibit Google and Amazon from the kind of unilateral action taken by Microsoft last month, when it disabled the Israeli military’s access to technology used to operate an indiscriminate surveillance system monitoring Palestinian phone calls.
Microsoft, which provides a range of cloud services to Israel’s military and public sector, bid for the Nimbus contract but was beaten by its rivals. According to sources familiar with negotiations, Microsoft’s bid suffered as it refused to accept some of Israel’s demands.
As with Microsoft, Google and Amazon’s cloud businesses have faced scrutiny in recent years over the role of their technology – and the Nimbus contract in particular – in Israel’s two-year war on Gaza.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
With this threat in mind, Israeli officials inserted into the Nimbus deal a requirement for the companies to send a coded message – a “wink” – to the Israeli government, revealing the identity of the country to which they had been compelled to hand over Israeli data when gagged from saying so directly.
Leaked documents from Israel’s finance ministry, which include a finalised version of the Nimbus agreement, suggest the secret code would take the form of payments – referred to as “special compensation” – made by the companies to the Israeli government.
According to the documents, the payments must be made “within 24 hours of the information being transferred” and correspond to the telephone dialing code of the foreign country, amounting to sums between 1,000 and 9,999 shekels.
Under the terms of the deal, the mechanism works like this:
If either Google or Amazon provides information to authorities in the US, where the dialing code is +1, and they are prevented from disclosing their cooperation, they must send the Israeli government 1,000 shekels.
If, for example, the companies receive a request for Israeli data from authorities in Italy, where the dialing code is +39, they must send 3,900 shekels.
If the companies conclude the terms of a gag order prevent them from even signaling which country has received the data, there is a backstop: the companies must pay 100,000 shekels ($30,000) to the Israeli government.
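Taken together, the two examples suggest the payment simply scales the dialing code into the four-digit shekel range. The sketch below reconstructs that arithmetic; the padding rule is an inference from the examples described in the documents, not a quoted contract term.

```python
# Reconstruction of the reported "wink" arithmetic; the padding rule is
# inferred from the +1 -> 1,000 and +39 -> 3,900 examples, not quoted text.

def encode_wink(dialing_code: int) -> int:
    """Scale a 1-3 digit country dialing code into 1,000-9,999 shekels."""
    assert 1 <= dialing_code <= 999
    amount = dialing_code
    while amount < 1000:
        amount *= 10
    return amount

def decode_wink(amount: int) -> int:
    """Strip the padding zeros to recover the dialing code."""
    assert 1000 <= amount <= 9999
    code = amount
    while code % 10 == 0 and code >= 10:
        code //= 10
    return code

assert encode_wink(1) == 1000    # US
assert encode_wink(39) == 3900   # Italy
assert decode_wink(3900) == 39
# Note: codes ending in zero (e.g. +20) would be ambiguous under this
# reconstruction; the documents do not say how the real scheme handles that.
```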
Legal experts, including several former US prosecutors, said the arrangement was highly unusual and carried risks for the companies as the coded messages could violate legal obligations in the US, where the companies are headquartered, to keep a subpoena secret.
“It seems awfully cute and something that if the US government or, more to the point, a court were to understand, I don’t think they would be particularly sympathetic,” a former US government lawyer said.
Several experts described the mechanism as a “clever” workaround that could comply with the letter of the law but not its spirit. “It’s kind of brilliant, but it’s risky,” said a former senior US security official.
Israeli officials appear to have acknowledged this, documents suggest. Their demands about how Google and Amazon respond to a US-issued order “might collide” with US law, they noted, and the companies would have to make a choice between “violating the contract or violating their legal obligations”.
Neither Google nor Amazon responded to the Guardian’s questions about whether they had used the secret code since the Nimbus contract came into effect.
“We have a rigorous global process for responding to lawful and binding orders for requests related to customer data,” Amazon’s spokesperson said. “We do not have any processes in place to circumvent our confidentiality obligations on lawfully binding orders.”
Google declined to comment on which of Israel’s stringent demands it had accepted in the completed Nimbus deal, but said it was “false” to “imply that we somehow were involved in illegal activity, which is absurd”.
A spokesperson for Israel’s finance ministry said: “The article’s insinuation that Israel compels companies to breach the law is baseless.”
‘No restrictions’
Israeli officials also feared a scenario in which its access to the cloud providers’ technology could be blocked or restricted.
In particular, officials worried that activists and rights groups could place pressure on Google and Amazon, or seek court orders in several European countries, to force them to terminate or limit their business with Israel if their technology were linked to human rights violations.
To counter the risks, Israel inserted controls into the Nimbus agreement which Google and Amazon appear to have accepted, according to government documents prepared after the deal was signed.
The documents state that the agreement prohibits the companies from revoking or restricting Israel’s access to their cloud platforms, either due to changes in company policy or because they find Israel’s use of their technology violates their terms of service.
Provided Israel does not infringe on copyright or resell the companies’ technology, “the government is permitted to make use of any service that is permitted by Israeli law”, according to a finance ministry analysis of the deal.
Both companies’ standard “acceptable use” policies state their cloud platforms should not be used to violate the legal rights of others, nor should they be used to engage in or encourage activities that cause “serious harm” to people.
However, according to an Israeli official familiar with the Nimbus project, there can be “no restrictions” on the kind of information moved into Google and Amazon’s cloud platforms, including military and intelligence data. The terms of the deal seen by the Guardian state that Israel is “entitled to migrate to the cloud or generate in the cloud any content data they wish”.
Israel inserted the provisions into the deal to avoid a situation in which the companies “decide that a certain customer is causing them damage, and therefore cease to sell them services”, one document noted.
The Intercept reported last year the Nimbus project was governed by an “amended” set of confidential policies, and cited a leaked internal report suggesting Google understood it would not be permitted to restrict the types of services used by Israel.
Last month, when Microsoft cut off Israeli access to some cloud and artificial intelligence services, it did so after confirming reporting by the Guardian and its partners, +972 and Local Call, that the military had stored a vast trove of intercepted Palestinian calls in the company’s Azure cloud platform.
Notifying the Israeli military of its decision, Microsoft said that using Azure in this way violated its terms of service and it was “not in the business of facilitating the mass surveillance of civilians”.
Under the terms of the Nimbus deal, Google and Amazon are prohibited from taking such action as it would “discriminate” against the Israeli government. Doing so would incur financial penalties for the companies, as well as legal action for breach of contract.
The Israeli finance ministry spokesperson said Google and Amazon are “bound by stringent contractual obligations that safeguard Israel’s vital interests”. They added: “These agreements are confidential and we will not legitimise the article’s claims by disclosing private commercial terms.”
Microsoft blocks Israel's use of its technology in mass surveillance of Palestinians
Exclusive: Tech firm ends military unit's access to AI and data services after Guardian reveals secret spy project
Microsoft has terminated the Israeli military’s access to technology it used to operate a powerful surveillance system that collected millions of Palestinian civilian phone calls made each day in Gaza and the West Bank, the Guardian can reveal.
Microsoft told Israeli officials late last week that Unit 8200, the military’s elite spy agency, had violated the company’s terms of service by storing the vast trove of surveillance data in its Azure cloud platform, sources familiar with the situation said.
The decision to cut off Unit 8200’s ability to use some of its technology results directly from an investigation published by the Guardian last month. It revealed how Azure was being used to store and process the trove of Palestinian communications in a mass surveillance programme.
In a joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, the Guardian revealed how Microsoft and Unit 8200 had worked together on a plan to move large volumes of sensitive intelligence material into Azure.
The project began after a meeting in 2021 between Microsoft’s chief executive, Satya Nadella, and the unit’s then commander, Yossi Sariel.
In response to the investigation, Microsoft ordered an urgent external inquiry to review its relationship with Unit 8200. Its initial findings have now led the company to cancel the unit’s access to some of its cloud storage and AI services.
Equipped with Azure’s near-limitless storage capacity and computing power, Unit 8200 had built an indiscriminate new system allowing its intelligence officers to collect, play back and analyse the content of cellular calls of an entire population.
The project was so expansive that, according to sources from Unit 8200 – which is equivalent in its remit to the US National Security Agency – a mantra emerged internally that captured its scale and ambition: “A million calls an hour.”
According to several sources, the enormous repository of intercepted calls – which amounted to as much as 8,000 terabytes of data – was held in a Microsoft datacentre in the Netherlands. Within days of the Guardian publishing the investigation, Unit 8200 appears to have swiftly moved the surveillance data out of the country.
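The two figures are roughly consistent: at “a million calls an hour” and an assumed average of about 1 MB per recorded call (the per-call size is a pure assumption), a year of collection lands near the reported total.

```python
# Rough consistency check; ~1 MB per recorded call is an assumption.
calls_per_hour = 1_000_000                 # "a million calls an hour"
bytes_per_call = 1e6                       # assumed ~1 MB per call
year_bytes = calls_per_hour * 24 * 365 * bytes_per_call
print(f"{year_bytes / 1e12:,.0f} TB")      # ~8,760 TB vs reported 8,000 TB
```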
According to sources familiar with the huge data transfer outside of the EU country, it occurred in early August. Intelligence sources said Unit 8200 planned to transfer the data to the Amazon Web Services cloud platform. Neither the Israel Defense Forces (IDF) nor Amazon responded to a request for comment.
The extraordinary decision by Microsoft to end the spy agency’s access to key technology was made amid pressure from employees and investors over its work for Israel’s military and the role its technology has played in the almost two-year offensive in Gaza.
A United Nations commission of inquiry recently concluded that Israel had committed genocide in Gaza, a charge denied by Israel but supported by many experts in international law.
The Guardian’s joint investigation prompted protests at Microsoft’s US headquarters and one of its European datacentres, as well as demands by a worker-led campaign group, No Azure for Apartheid, to end all ties to the Israeli military.
On Thursday, Microsoft’s vice-chair and president, Brad Smith, informed staff of the decision. In an email seen by the Guardian, he said the company had “ceased and disabled a set of services to a unit within the Israel ministry of defense”, including cloud storage and AI services.
Smith wrote: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in every country around the world, and we have insisted on it repeatedly for more than two decades.”
The decision brings to an abrupt end a three-year period in which the spy agency operated its surveillance programme using Microsoft’s technology.
Unit 8200 used its own expansive surveillance capabilities to intercept and collect the calls. The spy agency then used a customised and segregated area within the Azure platform, allowing for the data to be retained for extended periods of time and analysed using AI-driven techniques.
Although the initial focus of the surveillance system was the West Bank, where an estimated 3 million Palestinians live under Israeli military occupation, intelligence sources said the cloud-based storage platform had been used in the Gaza offensive to facilitate the preparation of deadly airstrikes.
The revelations highlighted how Israel has relied on the services and infrastructure of major US technology companies to support its bombardment of Gaza, which has killed more than 65,000 Palestinians, mostly civilians, and created a profound humanitarian and starvation crisis.
The Guardian
Lauren Almeida
Mon 22 Sep 2025 13.19 CEST
First published on Mon 22 Sep 2025 10.03 CEST
Flight delays continue across Europe after weekend cyber-attack
Software provider Collins Aerospace completing updates after Heathrow, Brussels and Berlin hit by problems
Passengers are facing another day of flight delays across Europe, as big airports continue to grapple with the aftermath of a cyber-attack on the company behind the software used for check-in and boarding.
Several of the largest airports in Europe, including London Heathrow, have been trying to restore normal operations over the past few days after an attack on Friday disrupted automatic check-in and boarding software.
The problem stemmed from Collins Aerospace, a software provider that works with several airlines across the world.
The company, which is a subsidiary of the US aerospace and defence company RTX, said on Monday that it was working with four affected airports and airline customers, and was in the final stages of completing the updates needed to restore full functionality.
The European Union Agency for Cybersecurity said on Monday that Collins had suffered a ransomware attack. This is a type of cyber-attack where hackers in effect lock up the target’s data and systems in an attempt to secure a ransom.
Airports in Brussels, Dublin and Berlin have also experienced delays. With kiosks and bag-drop machines offline, airline staff have relied on manual processing.
The government’s independent reviewer of terrorism legislation, Jonathan Hall KC, said it was possible state-sponsored hackers could be behind the attack.
When asked if a state such as Russia could have been responsible, Hall told Times Radio “anything is possible”.
He added that while people thought, “understandably, about states deciding to do things it is also possible for very, very powerful and sophisticated private entities to do things as well”.
A spokesperson for Brussels airport said Collins Aerospace had not yet confirmed the system was secure again. On Monday, 40 of its 277 departing flights and 23 of its 277 arriving services were cancelled.
A Heathrow spokesperson said the “vast majority of flights at Heathrow are operating as normal, although check-in and boarding for some flights may take slightly longer than usual”.
They added: “This system is not owned or operated by Heathrow, so while we cannot resolve the IT issue directly, we are supporting airlines and have additional colleagues in the terminals to assist passengers.”
theguardian.com
Thousands of Afghans relocated to UK under secret scheme after data leak
Conservative government used superinjunction to hide error that put Afghans at risk and led to £2bn mitigation scheme
Dan Sabbagh and Emine Sinmaz
Tue 15 Jul 2025 22.07 CEST
Conservative ministers used an unprecedented superinjunction to suppress a data breach that led the UK government to offer relocation to 15,000 Afghans in a secret scheme with a potential cost of more than £2bn.
The Afghan Response Route (ARR) was created in haste after it emerged that personal information about 18,700 Afghans who had applied to come to the UK had been leaked in error by a British defence official in early 2022.
Panicked ministers and officials at the Ministry of Defence learned of the breach in August 2023, after data was posted to a Facebook group, and applied to the high court for an injunction – the first sought by a British government – to prevent any further media disclosure.
It was feared that publicity could put the lives of many thousands of Afghans at risk if the Taliban, who had control of the country after the western withdrawal in August 2021, were to become aware of the existence of the leaked list and to obtain it.
The judge in the initial trial, Mr Justice Knowles, granted the application “contra mundum” – against the world – and ruled that its existence remain secret, resulting in a superinjunction which remained in place until lifted on Tuesday.
The gagging order meant that both the data breach and the costly mitigation scheme remained hidden, despite their scale, until the nearly two-year legal battle was brought to a close in the high court.
At noon on Tuesday, the high court judge Mr Justice Chamberlain said it was time to end the superinjunction, which he said had the effect of concealing discussions about spending “the sort of money which makes a material difference to government spending plans and is normally the stuff of political debate”.
A few minutes later, John Healey, the defence secretary, offered a “sincere apology” for the data breach. In a statement to the Commons, he said he had felt “deeply concerned about the lack of transparency” around the data breach and “deeply uncomfortable to be constrained from reporting to this house”.