New Delhi: OpenAI has introduced a new cybersecurity-focused AI model, GPT-5.4-Cyber, days after Anthropic drew attention with its restricted release of Claude Mythos Preview. The move signals a growing split in how leading AI firms are handling cyber risks linked to advanced models.
The announcement, published on April 15, 2026 (IST), outlines OpenAI’s next phase in AI-led cybersecurity. The company is positioning its approach as measured and controlled, even as concerns rise across the industry about misuse by bad actors.
“We’re expanding Trusted Access for Cyber with additional tiers for authenticated cybersecurity defenders. Customers in the highest tiers can request access to GPT-5.4-Cyber, a version of GPT-5.4 fine-tuned for cybersecurity use cases, enabling more advanced defensive workflows…”
— OpenAI (@OpenAI) April 14, 2026
OpenAI says GPT-5.4-Cyber is built to support cybersecurity professionals, helping them identify vulnerabilities, analyse malware, and secure digital systems faster. The model has been trained to allow more flexibility in security-related tasks compared to general-purpose AI systems.
The company said, “We believe the class of safeguards in use today sufficiently reduce cyber risk enough to support broad deployment of current models.”
Access to the model will remain limited in the early phase. It is being rolled out through OpenAI’s Trusted Access for Cyber (TAC) programme, which verifies users before granting access. Higher levels of access are reserved for organisations and researchers working in cybersecurity.
OpenAI’s strategy is built around controlled deployment and continuous testing. The company is expanding TAC to include thousands of verified individuals and hundreds of teams responsible for protecting critical infrastructure.
According to the blog, access decisions will rely on identity verification and trust signals instead of manual approvals. The aim is to widen access for legitimate users without opening doors for misuse.
The company said it avoids “arbitrarily deciding who gets access for legitimate use and who doesn’t,” instead relying on structured verification systems.
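OpenAI has not published the mechanics of these verification systems, but a tiered, signal-based access decision of the kind the article describes could look something like the minimal sketch below. All tier names, signal fields, and thresholds here are illustrative assumptions, not OpenAI’s actual TAC criteria.

```python
# Hypothetical sketch of a signal-based access tier decision.
# Tier names, fields, and rules are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class TrustSignals:
    identity_verified: bool        # e.g. verified individual or organisation
    defender_role_confirmed: bool  # works in a recognised security role
    protects_critical_infra: bool  # team defends critical infrastructure


def access_tier(signals: TrustSignals) -> str:
    """Map trust signals to an access tier without manual approval."""
    if not signals.identity_verified:
        return "no_access"
    if signals.defender_role_confirmed and signals.protects_critical_infra:
        return "highest"   # in this sketch, eligible to request the cyber model
    if signals.defender_role_confirmed:
        return "standard"
    return "basic"
```

The point of such a design is the one OpenAI describes: decisions follow from structured, verifiable signals rather than case-by-case human judgement about who counts as a legitimate user.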
OpenAI has outlined three core pillars guiding its cybersecurity efforts.
The company has already launched initiatives like Codex Security and a cybersecurity grants programme. These efforts focus on helping developers detect and fix vulnerabilities early in the software development cycle.
The timing of GPT-5.4-Cyber’s launch is notable. Anthropic recently limited access to its Claude Mythos model, citing fears that advanced AI could be exploited for cyberattacks.
OpenAI, too, acknowledges risks but maintains that current safeguards are sufficient for now. At the same time, the company has warned that future models will require stronger protections.
The blog states that more powerful systems will need “more expansive defenses” as capabilities grow.