An unauthorized third party gained access to the internal messaging systems of Microsoft-backed OpenAI in 2023, the New York Times reported on Thursday. The hacker allegedly infiltrated an online forum where employees discussed OpenAI's latest technologies and stole details about the design of the company's AI systems.
OpenAI is the company behind the ChatGPT chatbot; according to the report, the systems it uses to store and build its AI were not compromised. Employees and the company's board were notified of the breach in April of last year.
However, because the incident did not affect any individuals and was not seen as a threat, it was not disclosed publicly or reported to federal law enforcement agencies, according to the report. The hacker is suspected to be a private individual with no ties to a foreign government.
In May, OpenAI said it had disrupted five covert influence operations that sought to use its AI models for malicious activity, a disclosure that sparked debate about the risks of AI misuse. More recently, scammers have been observed using generative AI tools to send convincing malware-laden messages to hosts via Booking.com and even to create fake property listings to collect sensitive user data.
A worrying rise in phishing scams powered by generative AI was observed shortly after free online tools like ChatGPT launched.
At the second global AI summit in May, 16 companies developing AI, including Amazon, Microsoft, Samsung, IBM, xAI, France's Mistral AI, China's Zhipu.ai, and G42 of the United Arab Emirates, pledged to develop their technology safely and with accountable governance and public transparency.