The Downside of AI
Artificial Intelligence (AI) is gradually turning the trivial, inanimate objects around us into sophisticated systems with human-like interfaces. These systems can behave like humans, can learn all by themselves, and, theoretically, can make infinite progress, always becoming something more than they were a year, a day, or even a second ago. Much like humans, in other words – and just as unpredictable.
AI-based programs owe this flexibility to their Machine Learning (ML) nature, which, in essence, means that they are able to gain new knowledge without being explicitly programmed. You may get a better picture of what this means if you imagine your word processing program, say Microsoft Word, suddenly realizing that you prefer Arial to Times New Roman and modifying its Normal template accordingly, all by itself. Now, multiply this by a hundred and you’ll get an idea of what some of the greatest minds in the world are working on at the moment.
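To make the idea concrete, here is a toy sketch of "learning without being explicitly programmed". The class, the font names, and the 70% threshold are all invented for illustration – nothing here resembles how Word actually works – but the principle is the point: nobody writes a rule that says "switch to Arial"; the program infers it from observed usage.

```python
from collections import Counter

class AdaptiveDefaults:
    """Toy illustration of learning from usage instead of explicit
    programming: the default font follows the user's observed choices.
    (Hypothetical example - not how any real word processor works.)"""

    def __init__(self, default="Times New Roman", threshold=0.7):
        self.default = default
        self.threshold = threshold   # share of uses needed to switch
        self.usage = Counter()

    def observe(self, font):
        """Record one document written in the given font and adapt."""
        self.usage[font] += 1
        top_font, count = self.usage.most_common(1)[0]
        if count / sum(self.usage.values()) >= self.threshold:
            self.default = top_font  # adapt without being reprogrammed

editor = AdaptiveDefaults()
for _ in range(8):
    editor.observe("Arial")
editor.observe("Times New Roman")
print(editor.default)  # → Arial: the learned default, never hard-coded
```

The same pattern – observe, count, update a model, change behaviour – scaled up by orders of magnitude is what modern ML systems do.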
History has taught us that with every great discovery comes a great way to misuse it. And just as ML can turn strange-looking puppets into self-evolving androids, it can also modify programs into advanced tools for illegal and malicious activities, or cybercrimes.
We’ve written quite a few articles about the light side of the ML/AI force. This one is about the dark side. And what the light side does – and should do – to control it.
The Cruelty of Powerful Algorithms
ML-based applications are already advanced enough to make your life much less complicated. Just think of a virtual assistant that can schedule your meetings or purchase the thing you like at the cheapest possible price, or smart household devices which will gradually adapt to your personal sense of comfort, or even self-learning programs that can analyse piles of data and, rather accurately, predict the near future.
But, as with every coin, this one has a flip side: ML-based applications are also already advanced enough to make your life much more complicated.
Because, if they fall into criminal hands, they can just as easily develop into a real danger to society. Theoretically, one can “teach” an ML-based program to perform any kind of activity, regardless of whether it is legal. Almost any human task can be made easier with the appropriate ML algorithm – and since digital criminals are humans too, that holds for their tasks as well.
You’re still wondering how? Well, remember natural language processing (NLP) and image recognition? A while ago, they were the two most talked-about fields of computer science. They still are, in one way or another, and it just so happens that they are what AI is all about. Teaching a program to understand natural language, and to recognize images the way a human does, means teaching a program to simulate human behaviour.
And a program which simulates human behaviour can also be used to steal personal data.
Misusing Machine Learning
ML-based programs have been used regularly to perform malicious activities for years now, and you are probably already familiar with the two most popular criminal uses of the technology: imitating writing styles (in spam emails) and mimicking human behaviour to bypass CAPTCHA. Both may sound quite trivial today – but remember that they weren’t trivial just a while ago.
Meaning – it took quite some time for the light side to find a way to tackle them.
As of April 2016, AI-powered email scams had cost businesses more than $2.3 billion in losses over the period between 2014 and 2016. To put this into perspective, that amounts to almost $800 million a year, roughly $66 million a month, or a little more than $2 million a day. The dramatic increase in business email compromise scams (or “CEO frauds”, as they are more commonly called) owes much to advances in ML/AI technology. It is thanks to these advances that computer criminals have found a way to turn the more suspicious and less successful generic phishing attacks (sending the same letter to a large random selection of recipients) into a more personalized form of phishing (emails directed at specific individuals), or even into cleverly disguised attempts at “whaling” – so called because it goes after high-profile individuals, the “big fish” of an organization.
Just a few years ago, when ML was still too expensive, it was difficult to automate the sending of natural-sounding, targeted, personalized emails. But today, thanks to a few advances in NLP, smart programs are already able to learn the structure of a particular individual’s business emails quite successfully (picking up the specific phrases that make his or her writing style recognizable) and, subsequently, mimic that individual convincingly.
“If I were emailing someone outside the company, I’d probably be polite and formal, but if I was emailing a close colleague, I’d be more jokey as I email them all the time,” says Dave Palmer, director of technology at Darktrace, a cybersecurity company interested in ML. “Maybe I’d sign off my emails to them in a certain way. That would all be easily replicated by machine learning and it’s not hard to envision an email mimicking my style with a malicious attachment.”
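How easily replicated? The crudest possible version of such a "style model" is a word-bigram table: for every word a person writes, record which words tend to follow it, then sample from those statistics. The two-email corpus and the names below are invented for illustration, and real tooling uses vastly larger models, but the mechanism – learn which phrases follow which – is the same.

```python
import random
from collections import defaultdict

def learn_style(emails):
    """Build a word-bigram table from a person's past emails: for each
    word, the words observed to follow it. A minimal 'language model'
    of one individual's writing style."""
    follows = defaultdict(list)
    for text in emails:
        words = text.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def mimic(follows, start, length=8, seed=0):
    """Generate text that statistically resembles the learned style."""
    rng = random.Random(seed)  # seeded only to make the demo repeatable
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Invented two-message corpus standing in for someone's sent folder.
emails = [
    "Hi team, please review the attached report. Cheers, Dave",
    "Hi team, the attached numbers look good. Cheers, Dave",
]
style = learn_style(emails)
print(mimic(style, "Hi"))  # opens and signs off the way "Dave" does
```

Swap the toy bigram table for a modern neural language model trained on a stolen mailbox and the output becomes hard to distinguish from the real person – which is exactly the threat described above.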
As of June 2017, business email compromise (BEC) and email account compromise (EAC) are the two most common cybercrimes worldwide. The only difference between the two is that, whereas EAC is more general, BEC attacks target business accounts exclusively. The idea behind both crimes is the same: to steal sensitive data from accounts that regularly perform wire transfer payments.
One of the most high-profile BEC cases in recent memory is Olympic Vision, an information-stealing keylogger which, using spear phishing (emails targeted at a specific company) and social engineering techniques, managed to intrude upon the business transactions of at least 18 large companies in the US, the Middle East and Asia.
CAPTCHA and reCAPTCHA
You may have screamed at websites for including it more than a few times, but you should know that CAPTCHA had been the best and most secure way to prevent bots from scraping sensitive data ever since it was first invented in 1997. Then, in April 2014, Google reported that its data scientists had developed an ML algorithm able to crack CAPTCHA with 99.8% accuracy.
Consequently, the distorted text of the regular CAPTCHA had to be replaced by the “I’m not a robot” checkbox known as reCAPTCHA, leading many people around the world to wonder how on Earth that was an improvement – how can a simple checkbox prevent anything? The concept, however, is much more complicated than it seems at first sight.
Because in reCAPTCHA, the main thing isn’t the input itself, but the user’s behaviour leading up to it. In other words, what matters is not that you check the box, but how you get to it – how quickly you scroll down (and whether you scroll at all) before checking it. As it turns out, creating AI software able to mimic such human behaviour is quite a challenging task.
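A heavily simplified sketch of that idea – judge the interaction trail, not the checkbox value – might look like the function below. The features (time to first click, amount of mouse movement) and the thresholds are invented for illustration; Google's actual risk analysis is proprietary and far richer.

```python
def looks_human(events):
    """Toy behavioural check in the spirit of (but far simpler than)
    reCAPTCHA's risk analysis: inspect HOW the box got checked, not
    whether it was checked. `events` is a list of (seconds, kind)
    tuples recorded before the click. Features and thresholds here
    are invented for illustration only."""
    moves = [t for t, kind in events if kind == "move"]
    clicks = [t for t, kind in events if kind == "click"]
    if not clicks:
        return False
    # Naive bots click instantly with no meandering mouse trail;
    # humans wander a bit and take measurable time to reach the box.
    took_time = clicks[0] > 0.3
    wandered = len(moves) >= 3
    return took_time and wandered

bot = [(0.01, "click")]
human = [(0.2, "move"), (0.5, "move"), (0.9, "move"), (1.4, "click")]
print(looks_human(bot), looks_human(human))  # → False True
```

Defeating even this toy check requires the attacker to simulate plausible timing and motion – which is precisely why behaviour-based checks raised the bar so sharply.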
Using AI To Tackle Cybersecurity Issues
According to the Bot Traffic Report, one third of all Internet traffic comes from potentially malicious programs. However, as the most recent ransomware outbreaks have proved, this is only part of the problem.
WannaCry and Petya took the world by storm, with the majority of cybersecurity specialists being forced to admit that internet security is experiencing a severe crisis at the moment. And as humans become more and more helpless to tackle the issues, turning to ML and AI for help may be the only viable solution.
Just so you understand how serious things are at the moment: Microsoft recently admitted that it may need ML algorithms to keep cyber criminals from attacking its OS. Using large amounts of data and numerous ML algorithms, Microsoft intends to add new AI-driven features that make Windows Defender Advanced Threat Protection far smarter than before at recognizing malicious threats.
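What does "using large amounts of data and ML algorithms" mean in practice? At its simplest, a defender trains a classifier on labelled telemetry – signals from known-benign and known-malicious software – and lets it score new samples. The sketch below uses a minimal naive Bayes model; the telemetry tokens and training samples are invented, and this is in no way Microsoft's actual system, just the general shape of the technique.

```python
import math
from collections import Counter

class NaiveBayesDetector:
    """Minimal naive Bayes classifier over telemetry tokens - a sketch
    of the KIND of model an ML-driven defence layer trains on labelled
    corpora (benign vs malicious). All tokens below are invented;
    production systems learn from millions of real signals."""

    def __init__(self):
        self.counts = {"benign": Counter(), "malicious": Counter()}
        self.totals = {"benign": 0, "malicious": 0}

    def train(self, tokens, label):
        self.counts[label].update(tokens)
        self.totals[label] += 1

    def score(self, tokens, label):
        # Log-probability with add-one (Laplace) smoothing.
        vocab = len(set(self.counts["benign"]) | set(self.counts["malicious"]))
        n = sum(self.counts[label].values())
        s = math.log(self.totals[label] / sum(self.totals.values()))
        for t in tokens:
            s += math.log((self.counts[label][t] + 1) / (n + vocab))
        return s

    def classify(self, tokens):
        return max(("benign", "malicious"), key=lambda l: self.score(tokens, l))

det = NaiveBayesDetector()
det.train(["signed_binary", "known_publisher", "reads_docs"], "benign")
det.train(["signed_binary", "known_publisher"], "benign")
det.train(["encrypts_files", "deletes_backups", "unknown_publisher"], "malicious")
det.train(["encrypts_files", "unknown_publisher"], "malicious")
print(det.classify(["encrypts_files", "deletes_backups"]))  # → malicious
```

The appeal of this approach is that the model generalizes: a ransomware variant never seen before still trips the detector if its behaviour resembles what the model learned, which is exactly what signature-based antivirus cannot do.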
“The stack will be powered by our cloud-based security intelligence, which moves us from a world of isolated defenses to a smart, interconnected, and coordinated defense grid that is more intelligent, simple to manage, and ever-evolving,” Microsoft commented on its plans.
And this is merely the first step. Very soon, AI should become an integral part of cybersecurity, identifying software flaws and potential threats with unprecedented accuracy and at a mind-blowing tempo. The war between white hats and black hats will soon evolve into a war between ethical and malicious ML algorithms. And the DARPA Cyber Grand Challenge is merely the beginning.
Welcome, people: cybersecurity has entered an AI-powered era.
It’s official: ML and AI will probably revolutionize everything, from cryptocurrencies to app development to power systems. And, despite certain limitations, they will probably very soon blur the line between being a machine and being a living person, revolutionizing human existence itself. It’s only natural to expect that AI will transform cybercrime as well.
However, even though it has made it possible for computer criminals to mimic human behaviour ever more accurately and steal sensitive data, AI has also proved to be the most reliable way to tackle this very behaviour. Large companies, such as Microsoft, have already started developing advanced algorithms to strengthen their defence mechanisms and become less susceptible to malware and ransomware attacks.
To you, it may seem like just another episode of the never-ending war between good and evil. To people in the industry, however, it’s also something of a necessary step forward – a challenging competition that will eventually push the limits of AI.
And, subsequently, of what’s possible.