Here’s what IT leaders have to say about ChatGPT-enabled cyberattacks
ChatGPT quickly became the talk of the town when it launched last year. Its capabilities were impressive enough that Microsoft announced a huge investment in OpenAI, the company that developed the AI platform. IT leaders now say that, apart from its exciting use cases, ChatGPT also poses a threat to online users.
According to research conducted by BlackBerry, the popular chatbot could be used against organisations in AI-infused cyberattacks within the next 12 to 24 months. The survey, conducted in January 2023, found that 51% of the 1,500 IT and cybersecurity decision-makers polled across North America, Australia, and the UK believe that “we are less than a year away from a successful cyberattack being credited to ChatGPT.”
“Some think that could happen in the next few months. And more than three-fourths of respondents (78%) predict a ChatGPT credited attack will certainly occur within two years. In addition, a vast majority (71%) believe nation-states may already be leveraging ChatGPT for malicious purposes,” the report found.
Ways IT leaders think threat actors may harness the AI chatbot
The BlackBerry report also said that 53% of IT professionals believe ChatGPT will help hackers craft more believable and legitimate-sounding phishing emails, 49% say it will help less experienced hackers improve their technical knowledge and develop their skills, and an equal number suggest it will be used to spread misinformation and disinformation.
Furthermore, 48% claim the AI chatbot will be used to create new malware, and 46% say it will be used to increase the sophistication of threats and attacks. Even so, nearly three-quarters of respondents believe ChatGPT will be used mainly for “good.”
“I believe these concerns are valid, based on what we’re already seeing. It’s been well documented that people with malicious intent are testing the waters and over the course of this year, we expect to see hackers get a much better handle on how to use AI-enabled chatbots successfully for nefarious purposes,” said Shishir Singh, BlackBerry’s chief technology officer.
“In fact, both cybercriminals and cyberdefense professionals are actively investigating how they can utilise ChatGPT to augment and improve their intended outcomes, and they will continue to do so. Time will tell which side is ultimately more successful,” he added.
Last year, similar concerns were raised by cybersecurity company Check Point Research, whose team of researchers found that ChatGPT can write phishing emails and basic malicious code.