OpenAI
openai.com/index/mixpanel-incident
November 26, 2025
Transparency is important to us, so we want to inform you about a recent security incident at Mixpanel, a data analytics provider OpenAI used for web analytics on the frontend interface for our API product (platform.openai.com).
The incident occurred within Mixpanel’s systems and involved limited analytics data related to some users of the API. Users of ChatGPT and other products were not impacted.
This was not a breach of OpenAI’s systems. No chat content, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed.
What happened
On November 9, 2025, Mixpanel became aware that an attacker had gained unauthorized access to part of its systems and exported a dataset containing limited customer-identifiable information and analytics information. Mixpanel notified OpenAI that it was investigating, and on November 25, 2025, shared the affected dataset with us.
What this means for impacted users
User profile information associated with the use of platform.openai.com may have been included in data exported from Mixpanel. The information that may have been affected was limited to:
Name provided to us on the API account
Email address associated with the API account
Coarse location (city, state, country) based on the API user’s browser
Operating system and browser used to access the API account
Referring websites
Organization or User IDs associated with the API account
Our response
As part of our security investigation, we removed Mixpanel from our production services, reviewed the affected datasets, and are working closely with Mixpanel and other partners to fully understand the incident and its scope. We are in the process of notifying impacted organizations, admins, and users directly. While we have found no evidence of any effect on systems or data outside Mixpanel’s environment, we continue to monitor closely for any signs of misuse.
Trust, security, and privacy are foundational to our products, our organization, and our mission. We are committed to transparency and are notifying all impacted customers and users. We also hold our partners and vendors to the highest bar for the security and privacy of their services. After reviewing this incident, OpenAI has terminated its use of Mixpanel.
Beyond Mixpanel, we are conducting additional and expanded security reviews across our vendor ecosystem and are elevating security requirements for all partners and vendors.
What you should keep in mind
The information that may have been affected here could be used as part of phishing or social engineering attacks against you or your organization.
Since names, email addresses, and OpenAI API metadata (e.g., user IDs) were included, we encourage you to remain vigilant for credible-looking phishing attempts or spam. As a reminder:
Treat unexpected emails or messages with caution, especially if they include links or attachments.
Double-check that any message claiming to be from OpenAI is sent from an official OpenAI domain (see the sketch after this list).
OpenAI does not request passwords, API keys, or verification codes through email, text, or chat.
Further protect your account by enabling multi-factor authentication.
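As a rough illustration of the domain check above, here is a minimal sketch in Python. The allowlist of sending domains is a hypothetical placeholder, not an official list, and a matching From header alone does not prove authenticity, since that header can be forged; treat this as one signal alongside SPF, DKIM, and DMARC results.

# Minimal sketch of a sender-domain check; the allowlist is illustrative only.
from email import message_from_string
from email.utils import parseaddr

OFFICIAL_DOMAINS = {"openai.com", "email.openai.com"}  # hypothetical allowlist

def sender_domain_is_official(raw_message: str) -> bool:
    """Return True only if the From: address ends in an allowlisted domain."""
    msg = message_from_string(raw_message)
    _, address = parseaddr(msg.get("From", ""))
    domain = address.rpartition("@")[2].lower()
    # Accept an exact match or a subdomain of an allowlisted domain.
    return any(domain == d or domain.endswith("." + d) for d in OFFICIAL_DOMAINS)

# Example: a lookalike domain fails the check.
print(sender_domain_is_official("From: OpenAI <support@openai-billing.example>\n\nHi"))  # False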
The security and privacy of our products are paramount, and we remain resolute in protecting your information and communicating transparently when issues arise. Thank you for your continued trust in us.
OpenAI
FAQ
Why did OpenAI use Mixpanel?
Mixpanel was used as a third-party web analytics provider to help us understand product usage and improve our services for our API product (platform.openai.com).
Was this caused by a vulnerability in OpenAI’s systems?
No. This incident was limited to Mixpanel’s systems and did not involve unauthorized access to OpenAI’s infrastructure.
How do I know if my organization or I were impacted?
We are notifying those impacted now and will reach out to you, or your organization’s admin, directly via email.
Was any of my API data, prompts, or outputs affected?
No. Chat content, prompts, responses, and API usage data were not impacted.
Were ChatGPT accounts affected by this?
No. Users of ChatGPT and other products were not impacted.
Were OpenAI passwords, API keys, or payment information exposed?
No. OpenAI passwords, API keys, payment information, government IDs, and account access credentials were not impacted. Additionally, we have confirmed that session tokens, authentication tokens, and other sensitive parameters for OpenAI services were not impacted.
Do I need to reset my password or rotate my API keys?
Because passwords and API keys were not affected, we are not recommending resets or key rotation in response to this incident.
What are you doing to protect my personal information and privacy?
We have obtained the impacted datasets for independent review, are continuing to investigate the potential impact, and are monitoring closely for any signs of misuse. We are notifying all individually impacted users and organizations and are in contact with Mixpanel on further response actions.
Has Mixpanel been removed from OpenAI products?
Yes.
Should I enable multi-factor authentication for my account?
Yes. While account credentials and tokens were not impacted in this incident, as a best-practice security control we recommend that all users enable multi-factor authentication to further protect their accounts. For enterprises and organizations, we recommend enabling MFA at the single sign-on layer.
Will I receive further updates if something changes?
We’re committed to transparency and will keep you informed if we identify new information that materially affects impacted users. We will also update this FAQ.
Is there someone I can reach out to if I have questions?
If you have questions, concerns, or security issues, you can reach our support team at mixpanelincident@openai.com.
source: OpenAI openai.com
October 30, 2025
Now in private beta: an AI agent that thinks like a security researcher and scales to meet the demands of modern software.
Today, we’re announcing Aardvark, an agentic security researcher powered by GPT‑5.
Software security is one of the most critical—and challenging—frontiers in technology. Each year, tens of thousands of new vulnerabilities are discovered across enterprise and open-source codebases. Defenders face the daunting task of finding and patching vulnerabilities before their adversaries do. At OpenAI, we are working to tip that balance in favor of defenders.
Aardvark represents a breakthrough in AI and security research: an autonomous agent that can help developers and security teams discover and fix security vulnerabilities at scale. Aardvark is now available in private beta to validate and refine its capabilities in the field.
How Aardvark works
Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches.
Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, assessing how they might be exploited, and proposing fixes. Aardvark does not rely on traditional program analysis techniques like fuzzing or software composition analysis. Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities. Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.
[Diagram: “AARDVARK — Vulnerability Discovery Agent Workflow,” showing a process flow from Git repository to threat modeling, vulnerability discovery, validation sandbox, patching with Codex, and human review leading to a pull request.]
Aardvark relies on a multi-stage pipeline to identify, explain, and fix vulnerabilities (a minimal sketch of the flow follows this list):
Analysis: It begins by analyzing the full repository to produce a threat model reflecting its understanding of the project’s security objectives and design.
Commit scanning: It scans for vulnerabilities by inspecting commit-level changes against the entire repository and threat model as new code is committed. When a repository is first connected, Aardvark will scan its history to identify existing issues. Aardvark explains the vulnerabilities it finds step-by-step, annotating code for human review.
Validation: Once Aardvark has identified a potential vulnerability, it attempts to trigger it in an isolated, sandboxed environment to confirm its exploitability. Aardvark describes the steps it took, which helps ensure the insights returned to users are accurate, high-quality, and low in false positives.
Patching: Aardvark integrates with OpenAI Codex to help fix the vulnerabilities it finds. It attaches a Codex-generated and Aardvark-scanned patch to each finding for human review and efficient, one-click patching.
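To make the flow of these four stages concrete, here is a minimal orchestration sketch in Python. Every name and function body below is a hypothetical stub; Aardvark’s actual implementation is not public, and the stubs only mirror the stage order described above.

# Hypothetical skeleton of the four-stage pipeline; all stubs are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    explanation: str          # step-by-step annotation for human review
    validated: bool = False
    patch: str | None = None  # candidate fix attached for one-click patching

def build_threat_model(repo_path: str) -> str:
    # Stage 1 (Analysis): full-repository pass producing a threat model.
    return f"threat model for {repo_path}"

def scan_commit(diff: str, threat_model: str) -> list[Finding]:
    # Stage 2 (Commit scanning): reason over the diff against the threat model.
    return []

def validate(finding: Finding) -> bool:
    # Stage 3 (Validation): attempt to trigger the bug in a sandbox.
    return True

def propose_patch(finding: Finding) -> str:
    # Stage 4 (Patching): request a candidate fix from a code model.
    return "diff --git a/... b/..."

def run_pipeline(repo_path: str, diff: str) -> list[Finding]:
    threat_model = build_threat_model(repo_path)
    findings = scan_commit(diff, threat_model)
    for f in findings:
        f.validated = validate(f)       # confirm exploitability first...
        if f.validated:
            f.patch = propose_patch(f)  # ...then attach a patch for review
    return [f for f in findings if f.validated]

Validating before patching matches the emphasis on low false positives: only findings confirmed in the sandbox reach a human reviewer, each with a suggested fix attached.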
Aardvark works alongside engineers, integrating with GitHub, Codex, and existing workflows to deliver clear, actionable insights without slowing development. While Aardvark is built for security, in our testing we’ve found that it can also uncover bugs such as logic flaws, incomplete fixes, and privacy issues.
Real impact, today
Aardvark has been in service for several months, running continuously across OpenAI’s internal codebases and those of external alpha partners. Within OpenAI, it has surfaced meaningful vulnerabilities and contributed to OpenAI’s defensive posture. Partners have highlighted the depth of its analysis, with Aardvark finding issues that occur only under complex conditions.
In benchmark testing on “golden” repositories, Aardvark identified 92% of known and synthetically-introduced vulnerabilities, demonstrating high recall and real-world effectiveness.
Aardvark for Open Source
Aardvark has also been applied to open-source projects, where it has discovered and we have responsibly disclosed numerous vulnerabilities—ten of which have received Common Vulnerabilities and Exposures (CVE) identifiers.
As beneficiaries of decades of open research and responsible disclosure, we’re committed to giving back—contributing tools and findings that make the digital ecosystem safer for everyone. We plan to offer pro-bono scanning to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.
We recently updated our outbound coordinated disclosure policy, which takes a developer-friendly stance focused on collaboration and scalable impact rather than rigid disclosure timelines that can pressure developers. We anticipate that tools like Aardvark will result in the discovery of increasing numbers of bugs, and we want to collaborate sustainably to achieve long-term resilience.
Why it matters
Software is now the backbone of every industry—which means software vulnerabilities are a systemic risk to businesses, infrastructure, and society. Over 40,000 CVEs were reported in 2024 alone. Our testing shows that around 1.2% of commits introduce bugs—small changes that can have outsized consequences.
Aardvark represents a new defender-first model: an agentic security researcher that partners with teams by delivering continuous protection as code evolves. By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise. We're beginning with a private beta and will broaden availability as we learn.
Private beta now open
We’re inviting select partners to join the Aardvark private beta. Participants will gain early access and work directly with our team to refine detection accuracy, validation workflows, and reporting experience.
We’re looking to validate performance across a variety of environments. If your organization or open source project is interested in joining, you can apply here.