By Iffy Kukkoo
20 Jul, 2017
There are literally hundreds of articles about machine learning out there! And probably each of them is worth both your time and your attention. Is there some special reason why you should read this one? I may be able to offer you one.
First of all, you may have already noticed that reading too many articles on the same subject will probably leave you with little more than red eyes, a headache, and a few hazily remembered concepts you set out to understand clearly in the first place. Reading too much is sometimes as tricky as reading too little: just when you think you’ve grasped something, the slightly different explanation in the next article leaves you as bemused as before.
Which brings me to my second point. Most of the articles about machine learning on the internet make the mistake of offering either too little or too much information. There are the ones which want to introduce you to a field baby-steps style; their problem is that they expect you to spend hours and hours reading article after article to get the whole picture. The other category – the crash-course-styled ones – has the opposite problem: they expect you to understand each and every sentence from the outset. Few of us have the time for the former, and we are not computers, able to process that much information at once.
That’s why this article is a bit different. Instead of showing you miniature parts of the picture bit by bit, or all the parts at once, it opts for a compromise: providing a strong frame for all your future knowledge. Jigsaw-puzzle style: see the whole image now, and you’ll easily find the place for each piece afterwards.
You may be surprised how many people confuse Artificial Intelligence (AI) with Machine Learning (ML). So, defining them is our first assignment.
Admittedly, this is something you could easily do yourself by simply googling the terms. But it’s a common human weakness: we find the basic concepts too basic to google, and we hope that, as we learn new things about them, they will naturally become clearer. It’s not so, however: the basic concepts only grow more complicated as we learn new things, which, in turn, are obscured by the very absence of proper elementary definitions.
The internet has mostly changed our lives for the better, but it has also dulled some of our learning abilities. We think that memorizing is wrong: why should we, when we can google anything at almost any moment? Why should we, indeed – about trivial matters. But when you want to learn something – and not just entertain yourself – memorizing is essential. It’s the hard way, but it’s also the right way.
Because learning means storing information in your long-term memory; googling means forgetting new information on the go. You’re not here to forget; you’re here to learn. So, get rid of your old habits and start memorizing a few definitions. We’re here to provide them.
As I said, our first objective is to distinguish AI from ML; in other words, to define what each of them is and in what ways the two are connected.
Surprisingly, not many people know that, strictly speaking, AI is not a rigorous scientific term, but rather a name we give to our expectations. It’s a kind of umbrella designation which encompasses concepts such as ML, NLP, image and speech recognition, text processing, and so on.
Building machine intelligence is usually, though not necessarily, connected with ML. AI, on the other hand, always comes down to one thing: when we talk about artificial intelligence, we talk about reproducing the way humans think in inanimate objects.
Scientists usually speak about three different types of AI: weak (narrow) AI, strong (general) AI, and superintelligent AI.
Let’s have a look at each one.
Weak AI is where we still are. We’ve managed to build computers with the intelligence of – well, for illustrative purposes – monkeys.
Yes: even though we have perfected image recognition, revolutionized speech recognition and natural language processing, and learned to do text mining pretty well, it’s still weak AI. The name is self-explanatory.
Consider chatbots with built-in NLP services. How many functions do you think they are intended to have? In most cases, no more than one. No matter how fascinating it is to watch a chatbot excelling at its job, remember that it was never built to do anything else. You could even build one yourself if you wanted to, as the sketch below shows. Nobody can build, however, a chatbot able to talk spontaneously about everything.
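To make “no more than one function” concrete, here is a minimal sketch (in Python) of what such a single-purpose bot boils down to. Everything in it – the intents, the trigger words, the canned answers – is invented for illustration; a production chatbot would delegate the language understanding to an NLP service, but it would be just as narrow:

# Minimal sketch of a single-purpose ("narrow AI") chatbot.
# All intents, trigger words and answers here are invented for
# illustration; a real bot would call an NLP service instead of
# matching keywords, but it would be just as narrow.

RESPONSES = {
    "hours": "We're open 9 a.m. to 5 p.m., Monday to Friday.",
    "location": "You can find us at 123 Example Street.",
}

TRIGGERS = {
    "hours": {"open", "close", "hours", "when"},
    "location": {"where", "address", "location", "find"},
}

def classify(message):
    """Map a message to one of the bot's few known intents, or None."""
    words = set(message.lower().replace("?", "").replace(".", "").split())
    for intent, keywords in TRIGGERS.items():
        if words & keywords:
            return intent
    return None

def reply(message):
    intent = classify(message)
    if intent is None:
        # Anything outside the bot's single job is simply out of reach.
        return "Sorry, I can only answer questions about our store."
    return RESPONSES[intent]

print(reply("When do you open?"))  # answers with the opening hours
print(reply("Tell me a joke."))    # answers with the fallback line

Swap the keyword matcher for a trained intent classifier and you have, in essence, most commercial chatbots: impressive within their one task, helpless outside it.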
Given how the media present the field, I can’t blame you for expecting sci-fi scenarios within the next decade. True, scientists are building more and more powerful supercomputers and are always working on projects that sound like science fiction, but there’s a long way to go. Understanding this is crucial if you want to be able to separate fact from fiction. The future of AI looks extremely promising, but let’s not get ahead of ourselves just yet. How promising, you ask?
Well, Andrew Ng is a name you should remember. He’s one of the leading ML scientists in the world: co-founder of Coursera, adjunct professor at Stanford University, founder and one-time leader of the Google Brain Deep Learning Project… You get the idea – you can trust Ng when it comes to AI. And this is what he expects from the field in the near future: