How cybersecurity firms are using AI to mitigate online threats
The Hindu
AI is being used to predict and identify threats, reduce human error, and provide insights on attacks. Despite its benefits, caution is recommended, as AI-based tools can also be misused by criminals. Data is being consolidated into integrated platforms and databases, but no single product will rule them all.
As artificial intelligence (AI) tools are increasingly used in content generation, work applications, and even web search, hackers have figured out ways to misuse this technology. AI-generated deepfakes are one of the important areas of concern for cybersecurity experts.
Cybersecurity firms have been investing in machine learning (ML), a subset of AI, for quite some time now to counter such threats. These investments are coming to fruition with the launch of AI models that can predict vulnerabilities and warn users about threat actors. Hacking threats, as predicted by an AI model built by cybersecurity firm Tenable, are “at about 25%,” said Glen Pendley, CTO of Tenable.
Technologies like Security Data Retention (SDR) and ML are used to predict and identify threats, especially those that are anomalous, Pendley added.
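The anomaly detection Pendley refers to can, in its simplest form, be thought of as flagging events that deviate sharply from a learned baseline. The sketch below is an illustrative toy using a z-score threshold on hypothetical login-failure counts; it is not Tenable's or Quick Heal's actual method, and real ML-based systems learn far richer baselines across many signals.

```python
import statistics

def flag_anomalies(counts, z_threshold=2.0):
    """Return indices of values more than z_threshold standard
    deviations from the mean.

    A toy stand-in for ML-based anomaly detection: learn a baseline
    (here, just mean and standard deviation) and flag outliers.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > z_threshold]

# Hypothetical hourly login-failure counts; hour 5 is a sudden spike.
logins = [4, 5, 3, 6, 4, 90, 5, 4]
print(flag_anomalies(logins))  # → [5]
```

Production systems replace the fixed threshold with models trained on historical traffic, but the underlying idea — score each event against expected behaviour — is the same.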
“ML is critical, and implementation of AI in cybersecurity takes their use to a different level,” said Vishal Salvi, CEO of Quick Heal Technologies.
Apart from threat prediction, cybersecurity firms are also deploying AI where there is a lack of experienced human resources.
Despite the apparent benefits, “it [AI] should be used with caution,” said Pendley. “Like any other tool, while it can be useful, it can also be dangerous. I wouldn’t recommend people shy away from it. I would just say treat it like you would any other tool and try to maximize efficiency through its use.”