OpenAI Revokes macOS Certificate After Axios Supply Chain Attack

OpenAI macOS certificate revoked after malicious Axios library infiltrated GitHub Actions workflow, forcing company to rebuild app signing process.

April 13, 2026

GitHub Actions Workflow Targeted in OpenAI Security Incident

OpenAI pulled its macOS app certificate after discovering a malicious version of the popular Axios JavaScript library had made its way into the GitHub Actions workflow the company uses to sign its macOS applications. The company disclosed the incident on April 11 and confirmed no user data or internal systems were compromised, though it is treating the certificate as potentially exposed and has revoked it as a precaution.

What Happened

The incident traces back to March 31, 2026, when threat actors believed to be linked to a North Korean hacker group hijacked the npm account of an Axios library maintainer and published two malicious updates: versions 1.14.1 and 0.30.4. 

The compromised versions introduced a hidden dependency called plain-crypto-js, which functioned as a cross-platform Remote Access Trojan capable of running on Windows, macOS, and Linux. It was engineered to perform system reconnaissance, establish persistence, and then self-destruct to avoid detection.

OpenAI’s GitHub Actions workflow automatically pulled version 1.14.1 of Axios during its macOS app-signing process. That workflow had access to the certificate and notarization material used to sign ChatGPT Desktop, Codex, Codex CLI, and Atlas. This is the certificate that tells macOS and Apple’s systems that software genuinely comes from OpenAI.

OpenAI’s own analysis concluded the signing certificate was likely not successfully stolen, based on the timing of the payload execution, how the certificate is injected into the job, and other factors in the build sequence. That said, the company is treating the certificate as compromised out of caution and has revoked and rotated it.

OpenAI confirmed that no customer data was accessed, no internal systems were breached, and no software was altered. Passwords and API keys were not affected.

What Users Need to Do

OpenAI has given users until May 8, 2026, to update to the latest versions of its macOS apps. After that date, older versions will stop receiving updates and may become fully non-functional. The company is also working with Apple to block any further notarization of apps signed with the previous certificate, which means any fake OpenAI app using that old certificate would be blocked by macOS security by default.

Users do not need to change their passwords or API keys. Updating through the in-app update mechanism or downloading the latest version from OpenAI’s official site is all that is required.

The Root Cause

OpenAI confirmed the underlying issue was a misconfiguration in its GitHub Actions workflow. The workflow used a floating version reference, meaning it pulled the latest available release of the dependency rather than a pinned, verified one. It also had no minimumReleaseAge setting, which would have delayed adoption of newly published package versions and given the security community time to flag a suspicious release.

That combination allowed the compromised Axios version to be pulled automatically into the build process the moment it was published.
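As a sketch of what the hardened configuration could look like, the snippet below assumes a pnpm-based project (pnpm supports a minimum release age setting; OpenAI has not disclosed which package manager its workflow uses, so treat the file name and setting placement as illustrative):

```yaml
# pnpm-workspace.yaml -- illustrative sketch, not OpenAI's actual configuration.
# minimumReleaseAge (in minutes) tells the package manager to skip any
# version published more recently than this, giving the community time
# to yank a hijacked release before automated builds can pull it.
minimumReleaseAge: 4320  # three days

# Pair this with exact versions in package.json ("axios": "1.14.0"
# rather than a range like "^1.14.0") and a committed lockfile so CI
# installs only what has been reviewed.
```

With a delay like this in place, the malicious 1.14.1 release would have aged out of the npm registry (it was removed within a short window) before any build was allowed to install it.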

The Impact

This incident puts a spotlight on a growing pattern. Attackers are increasingly targeting CI/CD pipelines, code-signing systems, and open-source package registries because a single compromise there can cascade across multiple products and organizations at once.

Axios is one of the most widely used JavaScript libraries in the world, with tens of millions of weekly downloads. Targeting a package with that kind of reach gives attackers enormous leverage. The malicious versions were only live for a short window before being removed, but automated build pipelines can pull a new version within seconds of it being published.

The broader attack that included Axios was one of two major supply chain incidents targeting the open-source ecosystem in March. Google has also linked the campaign to a North Korean group.

For AI companies specifically, this incident signals that they are now prime targets for classic software supply chain attacks, not just threats that are unique to AI infrastructure.

How to Avoid This

Organizations can take several concrete steps to reduce exposure to this type of attack.

  • Pin dependencies to an exact, verified version, and pin GitHub Actions to a full commit hash rather than a mutable tag or floating reference. This is the most direct fix for the exact misconfiguration that affected OpenAI.
  • Set a minimumReleaseAge for package updates. This creates a delay between when a new package version is published and when your build system can pull it, giving the community time to flag suspicious releases.
  • Treat every CI runner as a potential entry point. Avoid pull-request-target triggers in GitHub Actions unless absolutely necessary, and use short-lived, narrowly scoped credentials throughout your pipeline.
  • Use an internal mirror or artifact proxy for critical dependencies. This adds a layer of control between your build system and public package registries.
  • Deploy canary tokens across your build environment so that any attempt to exfiltrate secrets or certificates triggers an alert immediately.
  • Audit your GitHub Actions workflows regularly for hardcoded secrets, overly broad permissions, and dependencies that are not pinned.
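In a GitHub Actions workflow, the first bullets translate into pinning each action to a full commit SHA, scoping permissions tightly, and installing dependencies only from a committed lockfile. A minimal sketch follows; the workflow name, SHAs, and version comments are placeholders for illustration, not details from OpenAI's actual pipeline:

```yaml
# .github/workflows/sign-macos.yml (illustrative fragment)
name: sign-macos
on:
  push:
    branches: [main]  # avoid pull_request_target triggers unless strictly necessary

permissions:
  contents: read  # least-privilege default for every job

jobs:
  build:
    runs-on: macos-latest
    steps:
      # Pin each action to a full commit SHA, not a mutable tag like @v4.
      # A tag can be moved to point at malicious code; a SHA cannot.
      - uses: actions/checkout@0000000000000000000000000000000000000000  # placeholder SHA
      # `npm ci` installs exactly what the committed lockfile records,
      # including integrity hashes, and fails if package.json has drifted.
      # --ignore-scripts blocks install-time hooks, a common payload vector.
      - run: npm ci --ignore-scripts
```

The key design choice is that nothing in this workflow resolves "latest" at build time: every action and every package is fixed to content that existed, and was reviewable, before the run started.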

Both Docker and the Python Package Index have published detailed guidance on securing CI pipelines in the wake of this incident. The U.S. Cybersecurity and Infrastructure Security Agency has also added CVE-2026-33634 to its Known Exploited Vulnerabilities catalog.

How Secure.com Helps

Supply chain attacks like this one succeed when build pipelines are not treated with the same level of monitoring as production systems. Secure.com gives security teams continuous visibility across their entire attack surface, including CI/CD infrastructure, so unusual behavior in a build workflow does not go undetected.

Here is what that looks like in practice:

  • Continuous asset discovery that covers build pipelines, CI runners, and developer tooling, not just production infrastructure
  • Real-time alerts for policy violations and configuration drift across workflows
  • Risk scoring that flags overly permissive credentials, floating dependency tags, and other misconfigurations before they become incidents
  • Automated evidence collection tied to frameworks like NIST and CIS, so your team can show auditors exactly what controls are in place across your development environment
  • Workflow automation that enforces security checks at every stage of the build process, not just at the perimeter