“For hackers and their targets, it’s now a game of cat and mouse.” According to Matthieu Bonenfant, Marketing Director at Stormshield, it is impossible to predict whether big data and artificial intelligence (AI) will have a lasting impact on the power dynamics of cybersecurity. Both sides are striving to gain the upper hand in the use of data – and the game has only just begun.
Hacking, marketing – the same principle?
In many cases of malicious hacking, conventional techniques have been refined through the use of big data. “Hackers think like marketers when they set the price of a ransom via ransomware,” says Matthieu Bonenfant. A hacker will often try to maximise their profits, but if the price of the ransom is too high, the victim may refuse to pay or in some cases will call the police.
“The more data the hacker has on their target, the better their chances of success,” explains the marketing director. To gather such data, hackers often resort to social engineering techniques, whereby dedicated software is used to analyse the target’s ‘open source’ information. Social networking sites thus become gold mines, and personal data a vulnerability. Who can honestly say they’ve never used a birthday, or the name of a cat or a loved one, as inspiration for a password?
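The weakness described above can be made concrete. Below is a minimal sketch of a defensive check that flags passwords reusing details an attacker could scrape from a social profile; the profile fields, the three-character cut-off and the matching rules are illustrative assumptions, not any particular tool’s method.

```python
# Illustrative sketch: flag passwords built from personal details that
# could be harvested from social networks. Field names and rules are
# assumptions for the example, not a real product's logic.

def password_uses_personal_data(password, profile):
    """Return True if the password contains any profile detail."""
    candidates = []
    for value in profile.values():
        value = str(value).lower()
        candidates.append(value)
        # Also check date fragments, e.g. "1990" out of "12/04/1990".
        candidates.extend(value.replace("/", " ").split())
    lowered = password.lower()
    # Ignore fragments shorter than 3 characters to limit false positives.
    return any(c in lowered for c in candidates if len(c) >= 3)

profile = {"pet": "Whiskers", "birthday": "12/04/1990", "partner": "Alex"}
print(password_uses_personal_data("whiskers1990", profile))  # True
print(password_uses_personal_data("x7#qT!v9zL", profile))    # False
```

Even this crude check catches the “cat’s name plus birth year” pattern mentioned above, which is precisely what automated social-engineering tooling exploits at scale.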
Social engineering is also used for spear phishing. Using their personal data, hackers are able to profile users and send them personalised messages – just like a good marketing software programme. “AI can spot links in data that an ordinary human could not, and test combinations at a much faster rate,” explains Matthieu Bonenfant.
And now, with the rise of chatbots in the marketing sphere, experts have predicted a new malicious purpose for AI: “Malicious chatbots could be invented and, through them, a fictional conversation automatically set up with the target so as to reduce their alertness to potential threats.” Not to mention the risk of legitimate chatbots being hacked and their conversations intercepted. To avoid this, messages should be encrypted and stored securely before being deleted within a set time frame.
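The “deleted within a set time frame” idea above can be sketched as a store that attaches an expiry to each message and purges anything past it. The class, the 60-second TTL and the injected clock are illustrative assumptions; the encryption step itself is out of scope here.

```python
# Minimal sketch of time-limited message retention: each (already
# encrypted) message is stored with an expiry and purged once it passes.
# Names and the default TTL are illustrative, not a real chatbot's API.
import time

class MessageStore:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._messages = []  # list of (expiry_timestamp, ciphertext)

    def add(self, ciphertext, now=None):
        now = time.time() if now is None else now
        self._messages.append((now + self.ttl, ciphertext))

    def purge_expired(self, now=None):
        now = time.time() if now is None else now
        self._messages = [(e, m) for e, m in self._messages if e > now]

    def messages(self):
        return [m for _, m in self._messages]

store = MessageStore(ttl_seconds=60)
store.add(b"<encrypted blob>", now=0)
store.purge_expired(now=30)   # still within the time frame
print(len(store.messages()))  # 1
store.purge_expired(now=120)  # past the 60-second TTL
print(len(store.messages()))  # 0
```

Passing `now` explicitly keeps the example deterministic; a real system would rely on the actual clock and run the purge on a schedule.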
Artificial intelligence: buzzword or genuine opportunity?
In addition to these uses – apparently inspired by the world of marketing – AI is also a source of great interest and discussion throughout the Internet. But it is time to debunk a certain myth: that of the ultra-powerful malware, driven purely by artificial intelligence and able to thwart any computer protection system that opposes it. “At the moment, and to the best of our knowledge, we have never seen malware directly based on AI,” assures Paul Fariello, Security Researcher at Stormshield.
The only known cases of artificial intelligence programmes attacking information systems on their own are within the realm of academia. This was the case in Las Vegas in 2016, where, for the first time, seven artificial intelligence systems competed in the DARPA Cyber Grand Challenge.
For now, “conventional tactics are working very well indeed,” laments Paul Fariello. “People are still fooled by basic techniques, such as phishing – a process similar to teleshopping, which needs only one or two ‘buyers’ to turn a profit. So why would hackers develop expensive programmes based entirely on AI?” In this regard, the best defence strategy remains the training of specialised teams. And maybe one day, there will be a massive government investment in an attack programme powered by artificial intelligence – only time will tell.
AI plays better in defence
In terms of computer defence tools, however, artificial intelligence has more direct uses. Combined with big data, AI can be used to analyse large numbers of files exhibiting suspicious behaviour, and clearly identify those that are truly malicious in nature. “The goal is to have a number of analysts continuously processing an exponential amount of data,” explains Paul Fariello. These detailed analyses can contribute to patches and updates that leave hackers less able to take a chief information security officer (CISO) completely by surprise, and reduce the risk of a zero-day attack.
To capitalise on this opportunity, Stormshield has developed the cloud sandboxing programme Breach Fighter. Its open-access portal allows a suspicious file to be submitted for further analysis. Through this software, information on all detected malware is thus automatically centralised. “This gives us a much better view of the threat,” adds Matthieu Bonenfant. “As new malware is identified, better systems of protection are developed – unlike previously, when information on malware often remained with users.” Yet AI cannot operate within computer defence without the help of humans. This creates a paradox: AI’s usefulness to computer security depends heavily on its interaction with humans. Elsewhere, in contrast (e.g. driverless cars, computer Go), artificial intelligence is far more autonomous.
“AI does not guarantee a result,” explains Paul Fariello. “It does not produce anything of substance by itself. It simply calculates the probability that a file’s behaviour defines it as benign or malicious, according to criteria pre-defined by humans, and it sometimes has trouble telling the difference. Take the example of a malware that steals a computer’s processing power in order to mine cryptocurrencies. Its behaviour might be very similar to that of a legitimate cryptocurrency-mining software programme, created by humans for this purpose.” In a different context altogether, but just as important to understanding the limited discernment capabilities of AI, Amazon has been forced to put an end to an internal programme developed to facilitate recruitment, after reported cases of gender discrimination.
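The scoring logic Paul Fariello describes – a probability computed from human-defined criteria – can be sketched in a few lines. The behaviour names, weights and logistic scoring below are illustrative assumptions, not Stormshield’s actual model; the point is how a legitimate miner and cryptojacking malware produce close scores because they share most behaviours.

```python
# Illustrative sketch: score a file's observed behaviour against
# human-defined criteria. Feature names and weights are assumptions
# made up for this example.
import math

# Positive weights push towards "malicious", negative towards "benign".
WEIGHTS = {
    "sustained_high_cpu": 1.2,        # heavy computation, e.g. hashing
    "connects_to_mining_pool": 1.5,
    "hides_own_process": 2.5,         # evasion is rarely legitimate
    "user_installed_intentionally": -3.0,
}

def malicious_probability(behaviours):
    """Logistic probability over the criteria observed for a file."""
    score = sum(WEIGHTS[b] for b in behaviours if b in WEIGHTS)
    return 1 / (1 + math.exp(-score))

# A legitimate miner and cryptojacking malware share most behaviours;
# only intent-related criteria separate them.
legit = {"sustained_high_cpu", "connects_to_mining_pool",
         "user_installed_intentionally"}
malware = {"sustained_high_cpu", "connects_to_mining_pool",
           "hides_own_process"}
print(round(malicious_probability(legit), 2))    # 0.43
print(round(malicious_probability(malware), 2))  # 0.99
```

Remove the single “hides_own_process” observation and the two files become nearly indistinguishable to the model – which is exactly the discernment limit the quote describes, and why human analysts still define and review the criteria.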
Similarly to the tools used by hackers, collecting large volumes of information is thus essential to the protection of potential victims. For both cat and mouse, the race for data has only just begun.