iverify.io
By Matthias Frielingsdorf, VP of Research
Oct 21, 2025
iOS 26 changes how shutdown logs are handled, erasing key evidence of Pegasus and Predator spyware, creating new challenges for forensic investigators
As iOS 26 is being rolled out, our team noticed a particular change in how the operating system handles the shutdown.log file: it effectively erases crucial evidence of Pegasus and Predator spyware infections. This development poses a serious challenge for forensic investigators and individuals seeking to determine if their devices have been compromised at a time when spyware attacks are becoming more common.
The Power of the shutdown.log
For years, the shutdown.log file has been an invaluable, yet often overlooked, artifact in the detection of iOS malware. Located inside a device's sysdiagnose archive, among the unified logs (specifically, sysdiagnose folder -> system_logs.logarchive -> Extra -> shutdown.log), it has served as a silent witness to the activities occurring on an iOS device, even during its shutdown sequence.
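For analysts who want to script this step, a minimal sketch for locating the file inside an already-extracted sysdiagnose folder might look like the following. The expected path mirrors the layout described above; the fallback recursive search and the assumption that the archive has already been unpacked are ours.

```python
# Minimal sketch: locate shutdown.log inside an extracted sysdiagnose folder.
# Assumes the layout described above: <sysdiagnose>/system_logs.logarchive/Extra/shutdown.log
from pathlib import Path
import sys

def find_shutdown_log(sysdiagnose_dir: str) -> Path | None:
    """Return the path to shutdown.log inside a sysdiagnose folder, if present."""
    candidate = Path(sysdiagnose_dir) / "system_logs.logarchive" / "Extra" / "shutdown.log"
    if candidate.is_file():
        return candidate
    # Fall back to a recursive search in case the layout differs between iOS versions.
    matches = list(Path(sysdiagnose_dir).rglob("shutdown.log"))
    return matches[0] if matches else None

if __name__ == "__main__":
    log_path = find_shutdown_log(sys.argv[1])
    if log_path is None:
        print("shutdown.log not found in this sysdiagnose")
    else:
        print(f"Found {log_path} ({log_path.stat().st_size} bytes)")
```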
In 2021, the publicly known version of Pegasus spyware was found to leave discernible traces within this shutdown.log. These traces provided a critical indicator of compromise, allowing security researchers to identify infected devices. However, the developers behind Pegasus, NSO Group, are constantly refining their techniques, and by 2022 Pegasus had evolved.
Pegasus's Evolving Evasion Tactics
While Pegasus still left evidence in the shutdown.log, its operators' methods became more sophisticated. Instead of leaving obvious entries, they began to completely wipe the shutdown.log file. Yet even with this attempted erasure, their own processes still left behind subtle traces. This meant that a seemingly clean shutdown.log whose first entries nonetheless showed traces of a Pegasus sample was, in itself, an indicator of compromise. Multiple cases of this behavior were observed until the end of 2022, highlighting the continuous adaptation of these malicious actors.
Following this period, it is believed that Pegasus developers implemented even more robust wiping mechanisms, likely monitoring device shutdown to ensure a thorough eradication of their presence from the shutdown.log. Researchers have noted instances where devices known to be active had their shutdown.log cleared, alongside other IOCs for Pegasus infections. This led to the conclusion that a cleared shutdown.log could serve as a good heuristic for identifying suspicious devices.
Predator's Similar Footprint
The sophisticated Predator spyware, observed in 2023, also appears to have learned from the past. Given that Predator was actively monitoring the shutdown.log, and considering the similar behavior seen in earlier Pegasus samples, it is highly probable that Predator, too, left traces within this critical log file.
iOS 26: An Unintended Cleanse
With iOS 26, Apple introduced a change—either an intentional design decision or an unforeseen bug—that causes the shutdown.log to be overwritten on every device reboot, rather than appended to with a new entry each time, as previous versions did (preserving each shutdown as its own snapshot). This means that any user who updates to iOS 26 and subsequently restarts their device will inadvertently erase any evidence of older Pegasus and Predator detections that might have been present in their shutdown.log.
This automatic overwriting, while potentially intended for system hygiene or performance, effectively sanitizes the very forensic artifact that has been instrumental in identifying these sophisticated threats. It could hardly come at a worse time: spyware attacks have been a constant in the news, and recent headlines show that high-profile executives and celebrities, not just civil society, are being targeted.
Identifying Pegasus 2022: A Specific IOC
For those still on iOS versions prior to 26, a specific IOC for Pegasus 2022 infections involved the presence of a /private/var/db/com.apple.xpc.roleaccountd.staging/com.apple.WebKit.Networking entry within the shutdown.log. This particular IOC also revealed a significant shift in NSO Group's tactics: they began using normal system process names instead of easily identifiable, similarly named processes, making detection more challenging.
[Image: a shutdown.log file]
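As a rough illustration of how this indicator can be checked, the sketch below simply scans a preserved shutdown.log for the path quoted above. Treating a bare substring match as a hit is an assumption made for brevity; a real triage workflow would correlate it with other artifacts.

```python
# Minimal sketch: flag the Pegasus 2022 IOC described above in a shutdown.log.
# The path below is the indicator quoted in this article; anything beyond a
# simple substring match (e.g., correlating with other artifacts) is out of scope.
import sys

PEGASUS_2022_IOC = "/private/var/db/com.apple.xpc.roleaccountd.staging/com.apple.WebKit.Networking"

def scan_shutdown_log(path: str) -> list[tuple[int, str]]:
    """Return (line number, line) pairs that contain the IOC path."""
    hits = []
    with open(path, "r", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if PEGASUS_2022_IOC in line:
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__":
    for lineno, line in scan_shutdown_log(sys.argv[1]):
        print(f"line {lineno}: {line}")
```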
Correlating Logs for Deeper Insight (iOS 18 and Earlier)
For devices running iOS 18 or earlier, a more comprehensive approach to detection involved correlating containermanagerd log entries with shutdown.log events. Containermanagerd logs contain boot events and can retain data for several weeks. By comparing these boot events with shutdown.log entries, investigators could identify discrepancies. For example, if numerous boot events were recorded before the earliest shutdown.log entry, it suggested that the log had been truncated and that something was potentially being hidden.
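The sketch below mirrors that heuristic under stated assumptions: the containermanagerd log location inside the sysdiagnose and the marker strings used to identify boot and shutdown entries are placeholders that must be adapted to the iOS version under analysis; only the counting-and-comparison logic comes from the approach described above.

```python
# Rough sketch of the correlation heuristic described above: compare the number
# of boot events recorded by containermanagerd with the entries in shutdown.log.
# NOTE: the glob pattern and marker strings below are placeholders/assumptions;
# adapt them to the actual log formats of the iOS version under analysis.
from pathlib import Path
import sys

CONTAINERMANAGERD_GLOB = "logs/MobileContainerManager/containermanagerd.log*"  # assumed location
BOOT_MARKER = "boot"          # placeholder: substring identifying a boot event
SHUTDOWN_MARKER = "SIGTERM"   # placeholder: substring identifying a shutdown entry

def count_matches(paths, marker):
    """Count lines containing the marker across the given log files."""
    total = 0
    for p in paths:
        with open(p, "r", errors="replace") as f:
            total += sum(1 for line in f if marker in line)
    return total

def correlate(sysdiagnose_dir: str) -> None:
    root = Path(sysdiagnose_dir)
    cm_logs = sorted(root.glob(CONTAINERMANAGERD_GLOB))
    shutdown_log = root / "system_logs.logarchive" / "Extra" / "shutdown.log"

    boots = count_matches(cm_logs, BOOT_MARKER)
    shutdowns = count_matches([shutdown_log], SHUTDOWN_MARKER) if shutdown_log.is_file() else 0

    print(f"containermanagerd boot events: {boots}")
    print(f"shutdown.log entries:          {shutdowns}")
    if boots > shutdowns:
        print("Discrepancy: more boots than shutdown.log entries; the log may have been wiped.")

if __name__ == "__main__":
    correlate(sys.argv[1])
```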
Before You Update
Given the implications of iOS 26's shutdown.log handling, it is crucial for users to take proactive steps:
Before updating to iOS 26, immediately take and save a sysdiagnose of your device. This will preserve your current shutdown.log and any potential evidence it may contain.
Consider holding off on updating to iOS 26 until Apple addresses this issue, ideally by releasing a bug fix that prevents the overwriting of the shutdown.log on boot.
techcrunch.com
Lorenzo Franceschi-Bicchierai
7:45 AM PDT · October 21, 2025
A developer at Trenchant, a leading Western spyware and zero-day maker, was suspected of leaking company tools and was fired. Weeks later, Apple notified him that his personal iPhone was targeted with spyware.
Earlier this year, a developer was shocked by a message that appeared on his personal phone: “Apple detected a targeted mercenary spyware attack against your iPhone.”
“I was panicking,” Jay Gibson, who asked that we don’t use his real name over fears of retaliation, told TechCrunch.
Gibson, who until recently built surveillance technologies for Western government hacking tools maker Trenchant, may be the first documented case of someone who builds exploits and spyware being themselves targeted with spyware.
“What the hell is going on? I really didn’t know what to think of it,” said Gibson, adding that he turned off his phone and put it away on that day, March 5. “I went immediately to buy a new phone. I called my dad. It was a mess. It was a huge mess.”
At Trenchant, Gibson worked on developing iOS zero-days, meaning he found vulnerabilities not known to the vendor who makes the affected hardware or software, such as Apple, and built tools capable of exploiting them.
“I have mixed feelings of how pathetic this is, and then extreme fear because once things hit this level, you never know what’s going to happen,” he told TechCrunch.
But the ex-Trenchant employee may not be the only exploit developer targeted with spyware. According to three sources who have direct knowledge of these cases, there have been other spyware and exploit developers in the last few months who have received notifications from Apple alerting them that they were targeted with spyware.
Apple did not respond to a request for comment from TechCrunch.
The targeting of Gibson’s iPhone shows that the proliferation of zero-days and spyware is starting to ensnare more types of victims.
Spyware and zero-day makers have historically claimed their tools are only deployed by vetted government customers against criminals and terrorists. But for the past decade, researchers at the University of Toronto’s digital rights group Citizen Lab, Amnesty International, and other organizations have found dozens of cases where governments used these tools to target dissidents, journalists, human rights defenders, and political rivals all over the world.
The closest public cases of security researchers being targeted by hackers happened in 2021 and 2023, when North Korean government hackers were caught targeting security researchers working in vulnerability research and development.
Suspect in leak investigation
Two days after receiving the Apple threat notification, Gibson contacted a forensic expert who has extensive experience investigating spyware attacks. After performing an initial analysis of Gibson’s phone, the expert did not find any signs of infection, but still recommended a deeper forensic analysis of the exploit developer’s phone.
A forensic analysis would have entailed sending the expert a complete backup of the device, something Gibson said he was not comfortable with.
“Recent cases are getting tougher forensically, and some we find nothing on. It may also be that the attack was not actually fully sent after the initial stages, we don’t know,” the expert told TechCrunch.
Without a full forensic analysis of Gibson’s phone, ideally one where investigators found traces of the spyware and who made it, it’s impossible to know why he was targeted or who targeted him.
But Gibson told TechCrunch that he believes the threat notification he received from Apple is connected to the circumstances of his departure from Trenchant, where he claims the company designated him as a scapegoat for a damaging leak of internal tools.
Apple sends out threat notifications specifically when it has evidence that a person was targeted by a mercenary spyware attack. This kind of surveillance technology is often invisibly and remotely planted on someone’s phone without their knowledge by exploiting vulnerabilities in the phone’s software, exploits that can be worth millions of dollars and can take months to develop. Law enforcement and intelligence agencies typically have the legal authority to deploy spyware on targets, not the spyware makers themselves.
Sara Banda, a spokesperson for Trenchant’s parent company L3Harris, declined to comment for this story when reached by TechCrunch before publication.
A month before he received Apple’s threat notification, when Gibson was still working at Trenchant, he said he was invited to go to the company’s London office for a team-building event.
When Gibson arrived on February 3, he was immediately summoned into a meeting room to speak via video call with Peter Williams, Trenchant’s then-general manager who was known inside the company as “Doogie.” (In 2018, defense contractor L3Harris acquired zero-day makers Azimuth and Linchpin Labs, two sister startups that merged to become Trenchant.)
Williams told Gibson the company suspected he was double employed and was thus suspending him. All of Gibson’s work devices would be confiscated and analyzed as part of an internal investigation into the allegations. Williams could not be reached for comment.
“I was in shock. I didn’t really know how to react because I couldn’t really believe what I was hearing,” said Gibson, who explained that a Trenchant IT employee then went to his apartment to pick up his company-issued equipment.
Around two weeks later, Gibson said Williams called and told him that following the investigation, the company was firing him and offering him a settlement agreement and payment. Gibson said Williams declined to explain what the forensic analysis of his devices had found, and essentially told him he had no choice but to sign the agreement and depart the company.
Feeling like he had no alternative, Gibson said he went along with the offer and signed.
Gibson told TechCrunch he later heard from former colleagues that Trenchant suspected he had leaked some unknown vulnerabilities in Google’s Chrome browser, tools that Trenchant had developed. Gibson, and three former colleagues of his, however, told TechCrunch he did not have access to Trenchant’s Chrome zero-days, given that he was part of the team exclusively developing iOS zero-days and spyware. Trenchant teams only have strictly compartmentalized access to tools related to the platforms they are working on, the people said.
“I know I was a scapegoat. I wasn’t guilty. It’s very simple,” said Gibson. “I didn’t do absolutely anything other than working my ass off for them.”
The story of the accusations against Gibson and his subsequent suspension and firing was independently corroborated by three former Trenchant employees with knowledge of the events.
Two of the other former Trenchant employees said they knew details of Gibson’s London trip and were aware of suspected leaks of sensitive company tools.
All of them asked not to be named but believe Trenchant got it wrong.
Since we launched the public Apple Security Bounty program in 2020, we’re proud to have awarded over $35 million to more than 800 security researchers, with multiple individual reports earning $500,000 rewards. We’re grateful to everyone who submitted their research and worked closely with us to help protect our users.
Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.
We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of — and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.
Our bounty categories are expanding to cover even more attack surfaces. Notably, we're rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million.
We’re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses — and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.
These updates will go into effect in November 2025. At that time, we will publish the complete list of new and expanded categories, rewards, and bonuses on the Apple Security Research site, along with detailed instructions for taking advantage of Target Flags, updated program guidelines, and much more.
Since we introduced our bounty program, we have continued to build industry-leading security defenses in our products, including Lockdown Mode, an upgraded security architecture in the Safari browser, and most recently, Memory Integrity Enforcement. These advances represent a significant evolution in Apple platform security, helping make iPhone the most secure consumer device in the world — and they also make it much more challenging and time-consuming for researchers to develop working exploits for vulnerabilities on our platforms.
Meanwhile, the only system-level iOS attacks we observe in the wild come from mercenary spyware — extremely sophisticated exploit chains, historically associated with state actors, that cost millions of dollars to develop and are used against a very small number of targeted individuals. While Lockdown Mode and Memory Integrity Enforcement make such attacks drastically more expensive and difficult to develop, we recognize that the most advanced adversaries will continue to evolve their techniques.
As a result, we’re adapting Apple Security Bounty to encourage highly advanced research on our most critical attack surfaces despite the increased difficulty, and to provide insights that support our mission to protect users of over 2.35 billion active Apple devices worldwide. Our updated program offers outsize rewards for findings that help us stay ahead of real-world threats, significantly prioritizing verifiable exploits over theoretical vulnerabilities, and partial and complete exploit chains over individual exploits.
Greater rewards for complete exploit chains
Mercenary spyware attacks typically chain many vulnerabilities together, cross different security boundaries, and incrementally escalate privileges. Apple’s Security Engineering and Architecture (SEAR) team focuses its offensive research on understanding such exploitation paths to drive foundational improvements to the strength of our defenses, and we want Apple Security Bounty to encourage new perspectives and ideas from the security research community. Here is a preview of how we're increasing rewards for five key attack vectors:
Zero-click chain (remote attack with no user interaction): current maximum $1M, new maximum $2M
One-click chain (remote attack with one-click user interaction): current maximum $250K, new maximum $1M
Wireless proximity attack (attack requiring physical proximity to the device): current maximum $250K, new maximum $1M
Physical device access (attack requiring physical access to a locked device): current maximum $250K, new maximum $500K
App sandbox escape (attack from the app sandbox to an SPTM bypass): current maximum $150K, new maximum $500K
Top rewards are for exploits that are similar to the most sophisticated, real-world threats, that work on our latest hardware and software, and that use our new Target Flags, which we explain in more detail below. The rewards are determined by the demonstrated outcome, regardless of the specific route through the system. This means that rewards for remote-entry vectors are significantly increasing, and rewards for attack vectors not commonly observed in real-world attacks are decreasing. Individual chain components or multiple components that cannot be linked together will remain eligible for rewards, though these are proportionally smaller to match their relative impact.
Boosting macOS Gatekeeper
Because macOS allows users to install applications from multiple sources, Gatekeeper is our first and most important line of defense against malicious software. Although Gatekeeper has been included in Apple Security Bounty since 2020, we've never received a report demonstrating a complete Gatekeeper bypass with no user interaction. To drive deeper research in this critical area, researchers who report a full Gatekeeper bypass with no user interaction are eligible for a $100,000 award.
Expanded Apple Security Bounty categories
One-click attacks through the web browser remain a critical entry vector for mercenary spyware on all major operating systems, including iOS, Android, and Windows. Our core defense against these threats is deeply robust isolation of WebKit’s WebContent process, and our focused engineering improvements over the past few years — including the GPU Process security architecture and our comprehensive CoreIPC hardening — have eliminated WebContent’s direct access to thousands of external IPC endpoints and removed 100 percent of the IOUserClient attack surface from the WebContent sandbox.
As a result, researchers who demonstrate chaining WebContent code execution with a sandbox escape can receive up to $300,000, and continuing the chain to achieve unsigned code execution with arbitrary entitlements becomes eligible for a $1 million reward. Modern browser renderers are exceptionally complex, which is why rigorous process isolation is so central to our WebKit security strategy. Therefore, WebContent exploits that are not able to break process isolation and escape the sandbox will receive smaller rewards.
We're also expanding our Wireless Proximity category, which includes our latest devices with the Apple-designed C1 and C1X modems and N1 wireless chip. We believe the architectural improvements and enhanced security in these devices make them the most secure in the industry, making proximity-based attacks more challenging to execute than ever. While we've never observed a real-world, zero-click attack executed purely through wireless proximity, we're committed to protecting our users against even the most sophisticated threats. We are therefore expanding our wireless proximity bounty to encompass all radio interfaces in our latest devices, and we are doubling the maximum reward for this category to $1 million.
Introducing Target Flags
In addition to increasing reward amounts and expanding bounty categories, we're making it easier for researchers to objectively demonstrate their findings — and to determine the expected reward for their specific research report. Target Flags, inspired by capture-the-flag competitions, are built into our operating systems and allow us to rapidly review the issue and process a resulting reward, even before we release a fix.
When researchers demonstrate security issues using Target Flags, the specific flag that’s captured objectively demonstrates a given level of capability — for example, register control, arbitrary read/write, or code execution — and directly correlates to the reward amount, making the award determination more transparent than ever. Because Target Flags can be programmatically verified by Apple as part of submitted findings, researchers who submit eligible reports with Target Flags will receive notification of their bounty award immediately upon our validation of the captured flag. Confirmed rewards will be issued in an upcoming payment cycle rather than when a fix becomes available, underscoring the trust we've built with our core researcher community.
Target Flags are supported on all Apple platforms — iOS, iPadOS, macOS, visionOS, watchOS, and tvOS — and cover a number of Apple Security Bounty areas, and coverage will expand over time.
Reward and bonus guidelines
Top rewards in all categories apply only for issues affecting the latest publicly available software and hardware. Our newest devices and operating systems incorporate our most advanced security features, such as Memory Integrity Enforcement in the iPhone 17 lineup, making research against current hardware significantly more valuable for our defensive efforts.
We continue to offer bonus rewards for exceptional research. Reports on issues in current developer or public beta releases qualify for substantial bonuses, as they give us a chance to fix the problem before the software is ever released to our users. And we continue to award significant bonuses for exploit chain components that bypass specific Lockdown Mode protections.
Finally, each year we receive a number of issues outside of Apple Security Bounty categories which we assess to be of low impact to real-world user security, but which we nonetheless address with software fixes out of an abundance of caution. Oftentimes, these issues are some of the first reports we receive from researchers new to our platforms. We want those researchers to have an encouraging experience — so in addition to CVE assignment and researcher credit as before, we will now also reward such reports with a $1,000 award. We have been piloting these awards for some time and are pleased to make them a permanent part of our expanded reward portfolio.
Special initiatives for 2026
In 2022, we made an unprecedented $10 million cybersecurity grant in support of civil society organizations that investigate highly targeted mercenary spyware attacks. Now, we are planning a special initiative featuring iPhone 17 with Memory Integrity Enforcement, which we believe is the most significant upgrade to memory safety in the history of consumer operating systems. To rapidly make this revolutionary, industry-leading defense available to members of civil society who may be targeted by mercenary spyware, we will provide a thousand iPhone 17 devices to civil society organizations who can get them into the hands of at-risk users. This initiative reflects our continued commitment to make our most advanced security protections reach those who need them most.
Additionally, the 2026 Security Research Device Program now includes iPhone 17 devices with our latest security advances, including Memory Integrity Enforcement, and is available to applicants with proven security research track records on any platform. Researchers seeking to accelerate their iOS research can apply for the 2026 program by October 31, 2025. All vulnerabilities discovered using the Security Research Device receive priority consideration for Apple Security Bounty rewards and bonuses.
In closing
We’re updating Apple Security Bounty to encourage researchers to examine the most critical attack surfaces on our platforms and services, and to help drive the highest impact security discoveries. As we continue to raise our research standards, we are also dramatically increasing rewards — our highest award will be $2 million before bonus considerations.
Until the updated awards are published online, we will evaluate all new reports against our previous framework as well as the new one, and we'll award the higher amount. And while we’re especially motivated to receive complex exploit chains and innovative research, we’ll continue to review and reward all reports that significantly impact the security of our users, even if they're not covered by our published categories. We look forward to continuing to work with you to help keep our users safe!
helpnetsecurity.com 20.08.2025 - Apple has fixed yet another vulnerability (CVE-2025-43300) that has apparently been exploited as a zero-day in targeted attacks.
CVE-2025-43300 is an out-of-bounds write issue that could be triggered by a vulnerable device processing a malicious image file, leading to exploitable memory corruption.
The vulnerability affects the Image I/O framework used by Apple’s iOS and macOS operating systems.
Apple has fixed this flaw with improved bounds checking in:
iOS 18.6.2 and iPadOS 18.6.2
iPadOS 17.7.10
macOS Sequoia 15.6.1
macOS Sonoma 14.7.8
macOS Ventura 13.7.8
Since Apple itself is credited with discovering the vulnerability, it’s unlikely that we will find out anytime soon who was leveraging it and for what.
But even though these attacks were apparently limited to targeting specific individuals – which likely means that the goal was to deliver spyware – all users would do well to upgrade their iDevices as soon as possible.
Apple OSes will soon transfer passkeys seamlessly and securely across platforms.
Apple this week provided a glimpse into a feature that solves one of the biggest drawbacks of passkeys, the industry-wide standard for website and app authentication that isn't susceptible to credential phishing and other attacks targeting passwords.
The import/export feature, which Apple demonstrated at this week’s Worldwide Developers Conference, will be available in the next major releases of iOS, macOS, iPadOS, and visionOS. It aims to solve one of the biggest shortcomings of passkeys as they have existed to date. Passkeys created on one operating system or credential manager are largely bound to those environments. A passkey created on a Mac, for instance, can sync easily enough with other Apple devices connected to the same iCloud account. Transferring them to a Windows device or even a dedicated credential manager installed on the same Apple device has been impossible.
Growing pains
That limitation has led to criticisms that passkeys are a power play by large companies to lock users into specific product ecosystems. Users have also rightly worried that the lack of transferability increases the risk of getting locked out of important accounts if a device storing passkeys is lost, stolen, or destroyed.
The FIDO Alliance, the consortium of more than 100 platform providers, app makers, and websites developing the authentication standard, has been keenly aware of the drawback and has been working on programming interfaces that will make passkey syncing more flexible. A recent teardown of the Google password manager by Android Authority shows that developers are actively implementing import/export tools, although the company has yet to provide any timeline for their general availability. (Earlier this year, the Google password manager added functionality to transfer passwords to iOS apps, but the process is clunky.) A recent update from FIDO shows that a large roster of companies are participating in the development, including Dashlane, 1Password, Bitwarden, Devolutions, NordPass, and Okta.
Researchers revealed on Thursday that two European journalists had their iPhones hacked with spyware made by Paragon. Apple says it has fixed the bug that was used to hack their phones.
The Citizen Lab wrote in its report, shared with TechCrunch ahead of its publication, that Apple had told its researchers that the flaw exploited in the attacks had been “mitigated in iOS 18.3.1,” a software update for iPhones released on February 10.
Until this week, the advisory of that security update mentioned only one unrelated flaw, which allowed attackers to disable an iPhone security mechanism that makes it harder to unlock phones.
On Thursday, however, Apple updated its February 10 advisory to include details about a new flaw, which was also fixed at the time but not publicized.
“A logic issue existed when processing a maliciously crafted photo or video shared via an iCloud Link. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals,” reads the now-updated advisory.
In the final version of its report published Thursday, The Citizen Lab confirmed this is the flaw used against Italian journalist Ciro Pellegrino and an unnamed “prominent” European journalist.
It’s unclear why Apple did not disclose the existence of this patched flaw until four months after the release of the iOS update, and an Apple spokesperson did not respond to a request for comment seeking clarity.
The Paragon spyware scandal began in January, when WhatsApp notified around 90 of its users, including journalists and human rights activists, that they had been targeted with spyware made by Paragon, dubbed Graphite.
Then, at the end of April, several iPhone users received a notification from Apple alerting them that they had been the targets of mercenary spyware. The alert did not mention the spyware company behind the hacking campaign.
On Thursday, The Citizen Lab published its findings confirming that two journalists who had received that Apple notification were hacked with Paragon’s spyware.
It’s unclear if all the Apple users who received the notification were also targeted with Graphite. The Apple alert said that “today’s notification is being sent to affected users in 100 countries.”
TCC on macOS isn't just an annoying prompt—it's the last line of defense between malware and your private data. Read this article to learn why.
Lately, I have been reporting many vulnerabilities in third-party applications that allowed for TCC bypass, and I have discovered that most vendors do not understand why they should care. For them, it seems like just an annoying and unnecessary prompt. Even security professionals tasked with vulnerability triage frequently struggle to understand TCC’s role in protecting macOS users’ privacy against malware.
Honestly, I don’t blame them for that because, two years ago, I also didn’t understand the purpose of those “irritating” pop-up notifications. It wasn’t until I started writing malware for macOS that I realized how much trouble TCC causes an attacker trying to actually harm a victim. I wrote this article with Application Developers in mind so that, after reading it, they do not underestimate vulnerabilities that allow bypassing TCC. It is also intended for Vulnerability Researchers, to illustrate an attack vector for further research.
Apple rolls out iOS and macOS platform updates to fix serious security bugs that could be triggered simply by opening an image or video file.
Apple on Monday pushed out patches for security vulnerabilities across the macOS, iPhone and iPad software stack, warning that code-execution bugs could be triggered simply by opening a rigged image, video or website.
The new iOS 18.5 update, rolled out alongside patches for iPadOS, covers critical bugs in AppleJPEG and CoreMedia with a major warning from Cupertino that attackers could craft malicious media files to run arbitrary code with the privileges of the targeted app.
The company also documented serious file-parsing vulnerabilities patched in CoreAudio, CoreGraphics, and ImageIO, each capable of crashing apps or leaking data if booby-trapped content is opened.
The iOS 18.5 update also provides cover for at least 9 documented WebKit flaws, some serious enough to lead to exploits that allow a hostile website to execute code or crash the Safari browser engine.
The company also patched a serious ‘mute-button’ flaw in FaceTime that exposes the audio conversation even after muting the microphone.
Beneath the interface, Apple said iOS 18.5 hardens the kernel against two memory-corruption issues and cleans up a libexpat flaw (CVE-2024-8176) that affects a broad range of software projects.
Other notable fixes include an issue in Baseband (CVE-2025-31214) that allows attackers in a privileged network position to intercept traffic on the new iPhone 16e line; a privilege escalation bug in mDNSResponder (CVE-2025-31222); an issue in Notes that exposes data from a locked iPhone screen; and security gaps in FrontBoard, iCloud Document Sharing, and Mail Addressing.
It's time to update your Macs again! This time, I'm not burying the lede. CVE-2025-31250, which was patched in today's release of macOS Sequoia 15.5, allowed for…
…any Application A to make macOS show a permission consent prompt…
…appearing as if it were coming from any Application B…
…with the results of the user's consent response being applied to any Application C.
These did not have to be different applications. In fact, in most normal uses, they would all likely be the same application. Even a case where Applications B and C were the same but different from Application A would be relatively safe (if somewhat useless from Application A's perspective). However, prior to this vulnerability being patched, a lack of validation allowed for Application B (the app the prompt appears to be from) to be different than Application C (the actual application the user's consent response is applied to).
Spoofing these kinds of prompts is not exactly new. In fact, the HackTricks wiki has had a tutorial on how to perform a similar trick on their site for a while. However, their method requires:
the building of an entire fake app in a temporary directory,
the overriding of a shortcut on the Dock, and
the simple hoping that the user clicks on the (now) fake shortcut.
This vulnerability requires none of the above.
TCC
As I explained in my first ever article on this site, TCC is the core permissions system built into Apple's operating systems. It is used by sending messages to the tccd daemon (or rather, by using functions in the private TCC framework). The framework is a private API, so developers don't call the functions directly (instead, public APIs call the functions under the hood as needed). However, all this wrapping cannot hide the fact that the control mechanism is still simply sending messages to the daemon.
The daemon uses Apple's public (but proprietary) XPC API for messaging (specifically the lower-level dictionary-based API). Prior to this vulnerability being patched, any app with the ability to send XPC messages to tccd could send it a specifically-crafted message that, as described above, would make it display a permission prompt as if it were from one app but then apply the user's response to a completely separate app. But how was this possible, and was it even hard? Before I answer these questions, we need to detour into what will, at first, seem like a completely unrelated topic.
Millions of Americans have downloaded apps that secretly route their internet traffic through Chinese companies, according to an investigation by the Tech Transparency Project (TTP), including several that were recently owned by a sanctioned firm with links to China’s military.
TTP’s investigation found that one in five of the top 100 free virtual private networks in the U.S. App Store during 2024 were surreptitiously owned by Chinese companies, which are obliged to hand over their users’ browsing data to the Chinese government under the country’s national security laws. Several of the apps traced back to Qihoo 360, a firm declared by the Defense Department to be a “Chinese Military Company." Qihoo did not respond to questions about its app-related holdings.
Apple finally adds TCC events to Endpoint Security!
Since the majority of macOS malware circumvents TCC through explicit user approval, it would be incredibly helpful for any security tool to detect this — and possibly override the user’s risky decision. Until now the best (only?) option was to ingest log messages generated by the TCC subsystem. This approach was implemented in a tool dubbed Kronos, written by Calum Hall and Luke Roberts (now of Phorion fame). Unfortunately, as they note, this approach did have its drawbacks:
In iOS 18, Apple spun off its Keychain password management tool—previously only tucked away in Settings—into a standalone app called...
For the third time in as many months, Apple has released an emergency patch to fix an already exploited zero-day vulnerability impacting a wide range of its products.
The new vulnerability, identified as CVE-2025-24201, exists in Apple's WebKit open source browser engine for rendering Web pages in Safari and other apps across macOS, iOS, and iPadOS. WebKit is a frequent target for attackers because of how deeply integrated it is with Apple's ecosystem.
Company will no longer provide its highest security offering in Britain in the wake of a government order to let security officials see protected data.
Apple recently addressed a macOS vulnerability that allows attackers to bypass System Integrity Protection (SIP) and install malicious kernel drivers by loading third-party kernel extensions.