Govts should engage AI research institutions to detect fake news: Book (ET CISO)
With regulating the dissemination of information becoming ever more crucial, governments should engage AI research institutions and social media companies to detect and flag false or misleading content, a new book says. Author Aswin Chandarr describes “The Inevitable AI: Art Of Growth With Generative Intelligence” as a “friendly guide” to the world of AI, or artificial intelligence, one that seeks to demystify its essence, strengths and limitations.
Governments have traditionally been the custodians of public discourse and the architects of national narratives. However, the landscape has significantly changed with the advent of social media and its unprecedented reach, the book says.
“… Distorted narratives can manipulate public opinion, sway elections, and even incite violence. The need for governments to regulate the dissemination of information has never been more crucial,” it says.
How can this be achieved?
“One approach is through monitoring and fact-checking. Governments could collaborate with AI research institutions and social media companies to develop sophisticated AI systems capable of detecting and flagging false or misleading content,” suggests Chandarr.
“Additionally, public investments can be directed towards enhancing digital literacy, which would equip citizens with the necessary skills to discern truth from falsehood in the digital arena,” he says.
Regulating online platforms is another essential strategy, he adds.
“Governments must hold these platforms accountable for the content they host, encouraging them to actively moderate and fact-check the information they disseminate. One possible measure could be implementing strict penalties for platforms that fail to remove harmful content,” Chandarr writes.
He says this is, however, a delicate task.
“On the one hand, strict regulation is necessary to curb the spread of disinformation. On the other, it is crucial not to infringe upon the fundamental right to freedom of speech. Striking a balance between these two imperatives is a task that governments will need to navigate with care and precision.
“Moreover, governments must maintain high transparency and integrity standards in communication to set the right precedent,” the book says.
According to the author, the battle against the deluge of disinformation and misinformation is a challenge of paramount importance for governments in the AI era.
Through proactive measures such as monitoring, fact-checking, regulation, education, and maintaining transparency in communication, governments can provide a bulwark against these harmful tactics, he says.
“This task is fraught with complexities and potential pitfalls, but it is one that governments cannot afford to ignore, for the stakes are nothing less than the integrity of our democratic societies,” he adds.
The book also suggests that the very strengths of AI, its speed, precision, and capacity for complex problem-solving, can be harnessed to enhance national cybersecurity.
“Generative AI systems can shoulder routine but intricate tasks, liberating skilled cyber personnel to tackle the ever-changing landscape of threats. By leveraging AI’s potential, we can build defence systems that proactively anticipate, adapt to, and neutralize threats, keeping us one step ahead in the ceaseless game of digital cat and mouse,” it says.