Akamai CTO Robert Blumofe, ET CISO
Generative artificial intelligence is a superweapon in the hands of cyber threat actors and it is critical for enterprises to integrate safeguards into their LLM-driven chatbots, a leading cybersecurity expert has warned.
Over the past decade or so, the cyber threat landscape has moved from attacks by relatively unsophisticated hacktivists who wished to make a point to a more organised form of attacks meant to extort money, said Robert Blumofe, chief technology officer of Akamai Technologies, a US-based cloud security, computing and content delivery services company, in conversation with ET’s Annapurna Roy.
For these organised cybercriminal gangs, GenAI is now the tool of preference against both enterprises and consumers, he added.
“It is the perfect tool for social engineering, for spear phishing, for constructing malware, for evading defences… For any number of ways that they could attack, they can now do it far more effectively, scalably and virulently, using generative AI,” Blumofe said.
At the same time, AI models face specific vulnerabilities. For instance, they are far more susceptible to DDoS (distributed denial of service) attacks than traditional applications, he noted. Further, criminals can inject cleverly crafted prompts into an AI model to make it do things that it should not be doing. Small inputs can have a large impact, he said.
GenAI also makes it easy for attackers to cross international boundaries, eroding the differences between threat landscapes across different countries, Blumofe warned. Phishing lures can be generated in any language, whether over SMS, email or even voice, where they can be made to sound convincingly like a native speaker, he said.
So, the challenge for enterprises is to ensure that guardrails are built in for their large language model based chatbots to stick to the domain area they were finetuned for, Blumofe said.
Hallucinations are not going away anytime soon – not even for specialised models, he said.
“For an insurance company, for example, that chatbot started life as a foundation model that was trained on everything. That’s a model that could talk about not just how to file a claim, but also about politics, race, gender issues – any number of things that you probably don’t want the model talking about,” he added.
Given these realities, Akamai’s roadmap increasingly involves adding capabilities specifically for protecting and securely deploying LLMs, Blumofe said.
While AI can be a tool in defending against the increasingly sophisticated cyberattacks, Blumofe cautioned against ‘AI washing’ and said enterprises should focus on the fundamentals of cybersecurity.
“I would caution anybody against thinking that AI can be a magical solution that can identify deepfakes and stop these attacks,” he said. “Any good cyber defence has an AI component to it, but it’s not a silver bullet.”
Tried-and-true capabilities like multifactor authentication, encrypted communication, zero-trust access and micro segmentation are now more relevant than ever, even as AI will aid them, Blumofe said.