Hacker Bypasses ChatGPT’s Security to Generate Explosive Device Instructions

Published on September 13, 2024
Written by:
Lore Apostol
Cybersecurity & Streaming Writer

An artist and hacker known as Amadon successfully exploited OpenAI's ChatGPT by bypassing its ethical guidelines to produce dangerous instructions for homemade explosives, according to TechCrunch.

Typically, when users inquire about constructing devices like the fertilizer bomb used in the 1995 Oklahoma City bombing, ChatGPT's safeguards prevent the dissemination of such information. However, Amadon employed a sophisticated technique known as "jailbreaking" to circumvent these protections.

ChatGPT’s instructions on how to make a fertilizer bomb were largely accurate, according to retired University of Kentucky professor Darrell Taulbee, who had worked with the U.S. Department of Homeland Security to make fertilizer less dangerous. 

Amadon's approach involved a "social engineering hack," which tricked ChatGPT into discarding its pre-set safety protocols. The hacker initiated the process by instructing ChatGPT to "play a game," creating a context in which the AI's usual safeguards were deactivated. 

Through a series of connected prompts that crafted a detailed science-fiction scenario, Amadon manipulated the chatbot to divulge sensitive information.

Experts confirm that the details provided by ChatGPT included sufficient information to create detonable devices, with instructions becoming increasingly specific as the conversation progressed.

Amadon described his actions as an intellectual challenge rather than a traditional hacking endeavor. “Navigating AI security feels like working through an interactive puzzle — understanding what triggers its defenses and what doesn’t,” he remarked.

The hacker emphasized that the objective was to engage strategically with the AI, exploring how to elicit specific responses while understanding the framework within which ChatGPT operates.

Recently, the artificial intelligence (AI) company OpenAI banned several Iranian accounts linked to an alleged covert influence operation (IO) focused on the upcoming U.S. presidential election; the operation used ChatGPT to generate content for various social media accounts and websites.
