ChatGPT-maker OpenAI may not have disclosed some details – ET CISO
OpenAI, the company behind the popular ChatGPT chatbot, experienced a security breach in 2023. According to a report by the New York Times, hackers infiltrated the Microsoft-backed company’s internal messaging system and were able to steal details about its artificial intelligence (AI) technologies.
The report claims that the compromised information originated from an online forum where OpenAI employees discussed the company’s latest AI advancements. However, the hackers were unable to access the core systems where OpenAI builds and houses its AI, including ChatGPT itself.
Why OpenAI kept these details of the breach under wraps
In April 2023, OpenAI executives informed both employees and the company’s board about the breach. However, they opted to keep the news confidential. Their reasoning for not making the breach public was twofold. Firstly, no customer or partner information was compromised during the attack. Secondly, they believed the hacker was an individual actor with no connection to a foreign government, the report adds.
Concerns about AI safety
This news comes as concerns about the safety and potential misuse of AI technology continue to grow. In May, OpenAI announced that the company had successfully disrupted several covert operations attempting to leverage their AI models for deceptive purposes online.
However, the company reportedly shut down its team dedicated to researching long-term risks from AI, known as the “superalignment team”. This decision came just a year after the team’s formation and followed the departures of prominent figures like co-founder and chief scientist Ilya Sutskever and team co-lead Jan Leike.
The Biden administration is also reportedly considering implementing safeguards to protect US AI technology from competitors like China and Russia. These safeguards could potentially include restrictions on access to advanced AI models like ChatGPT.
Earlier this year, 16 major AI companies came together to pledge their commitment to developing AI technology responsibly. This highlights the growing awareness within the industry of the need for ethical considerations alongside the rapid advancements being made in AI.