Incident 1238: 1 Report
OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions
2025-10-10
An NBC News investigation found that OpenAI's language models o4-mini, gpt-5-mini, gpt-oss-20b, and gpt-oss-120b could be jailbroken under normal usage conditions to bypass safety guardrails and generate detailed instructions for creating chemical, biological, and nuclear weapons. Using a publicly documented jailbreak prompt, reporters repeatedly elicited hazardous outputs, such as steps to synthesize pathogens or maximize harm with chemical agents. The findings reportedly revealed significant real-world safeguard failures, prompting OpenAI to commit to further mitigation measures.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.