Leron Zinatullin, Cybersecurity Risk Consultant
Cybersecurity suffers from a skills shortage in the market. As a result, the opportunities for artificial intelligence (AI) automation are vast. In many cases, AI is used to strengthen the defensive side of cybersecurity; prime examples are combating spam and detecting malware.
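To give a concrete sense of how machine learning supports one of those defensive uses, here is a minimal, illustrative sketch in Python using scikit-learn: a Naive Bayes classifier learns word patterns from labelled messages and scores new ones for spam. The messages and labels below are invented purely for illustration; real filters train on millions of samples with much richer features.

    # A toy spam classifier: bag-of-words features plus Naive Bayes,
    # trained on a handful of invented, hand-labelled messages.
    # Illustrative only; production filters are far more sophisticated.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny made-up training set (labels: 1 = spam, 0 = legitimate).
    messages = [
        "WINNER! Claim your free prize now, click here",
        "Cheap meds, no prescription needed, limited offer",
        "Urgent: verify your account to avoid suspension",
        "Can we move tomorrow's meeting to 3pm?",
        "Here are the slides from yesterday's review",
        "Lunch on Thursday still works for me",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # Turn raw text into word counts, then fit the classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    # Score an unseen message: the model outputs a spam probability.
    incoming = "Click here to claim your free account prize"
    spam_probability = model.predict_proba([incoming])[0][1]
    print(f"P(spam) = {spam_probability:.2f}")

The same pattern, learning a statistical model from labelled examples and applying it to new inputs, underpins malware detection as well, just with file features in place of words.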
From the attacker's point of view, there are many incentives to use AI when trying to penetrate vulnerable systems. These include the speed and low cost of AI-driven attacks, combined with the likelihood that the target's defences are understaffed (a consequence of the same skills shortage). These factors add up to create an attractive environment for bad actors.
Current research in the public domain is largely limited to white-hat work: machine learning employed to identify vulnerabilities and suggest fixes. At the speed AI is developing, however, it won't be long before attackers use the same capabilities at scale, if they aren't already.
How do we know for sure? Admittedly, it is quite hard to attribute a botnet or a phishing campaign to AI rather than to a human. Industry practitioners, however, believe that we will see an AI-powered cyber-attack within a year: 62% of surveyed Black Hat conference participants are convinced of this possibility.
Many believe that AI is already being deployed for malicious purposes by highly motivated and sophisticated attackers. That is hardly surprising: AI systems make an adversary's job much easier.