By Iffy Kukkoo
20 Jul, 2017
The Reason Why You Should Read This Article
There are literally hundreds of articles about machine learning out there! And probably each of them is worth both your time and your attention. Is there some special reason why you should read this one? I may be able to offer you one.
First of all, you may have already noticed that reading too many articles on the same subject usually leaves you with little more than red eyes and a headache, plus a few hazily remembered concepts you set out to understand clearly in the first place. Reading too much can be as tricky as reading too little: just when you think you've grasped something, the slightly different explanation in the next article leaves you as bemused as before.
Which brings me to my second point. Most of the articles about machine learning you can find on the internet make the mistake of offering either too little or too much information. Some want to introduce you to the field baby-steps style; their problem is that they expect you to spend hours and hours reading article after article to get the whole picture. The other category, the crash-course-styled ones, has the opposite problem: they expect you to understand each and every sentence from the outset. We don't have the time for the former, and we are not computers able to process so much information at once.
That’s why this article is a bit different. Instead of showing you miniature parts of the whole picture bit by bit or all the parts of the picture at once, it opts for the compromise solution: providing a strong frame for all your future knowledge. Jigsaw-puzzle style: see the whole image now, and you’ll easily find a place for each puzzle piece afterwards.
Get Oriented
You may be surprised by how many people confuse Artificial Intelligence (AI) with Machine Learning (ML). So, defining them is our first assignment.
Unfortunately, it's not something you can do easily just by googling the terms. It's a common human weakness: we find the basic concepts too basic to google, hoping that as we learn new things, they will naturally become clearer. It doesn't work that way, however: the basic concepts become ever more complicated as we learn new things, which, in turn, are obscured by the very absence of proper elementary definitions.
The internet has mostly changed our lives for the better, but it has also dulled some of our learning abilities. We think memorizing is pointless: why should we, when we can google anything at almost any moment? Why should we, indeed, for trivial matters. But when you want to learn something, and not just entertain yourself, memorizing is essential. It's the hard way, but it's also the right way.
Learning means storing information in your long-term memory; googling means forgetting new information on the go. You're not here to forget; you're here to learn. So get rid of your old habits and start memorizing a few definitions. We're here to provide them.
Artificial Intelligence
As I said, our first objective is to distinguish AI from ML; in other words, to define what each of them is and in what ways the two are connected.
Surprisingly, not many people know that, strictly speaking, AI is not a precise scientific term but rather a name we give to our expectations. It's an umbrella designation which encompasses concepts such as ML, natural language processing (NLP), image and speech recognition, text processing, and so on.
Building machine intelligence usually, but not necessarily, involves ML. What it always involves is this: when we talk about artificial intelligence, we talk about reproducing the way humans think in inanimate objects.
Scientists usually speak about three different types of AI.
Let’s have a look at each one.
1) Narrow (Weak) AI
We're still at this first stage. For illustrative purposes, you could say we've managed to build computers with the intelligence of monkeys.
Yes, even though we have perfected image recognition, revolutionized speech recognition and natural language processing, and learned to do text mining pretty well, it's still weak AI. The name is self-explanatory.
Consider chatbots with built-in NLP services. How many functions do you think they are intended to have? In most cases, no more than one. No matter how fascinating it is to see a chatbot excelling at its job, remember that it was never built to do anything else. You could even build one yourself if you wanted to. Nobody, however, can build a chatbot able to spontaneously talk about everything.
Based on how the media presents the information, I can't blame you for expecting sci-fi scenarios within the next decade. True, scientists are building ever more powerful supercomputers and are always working on sci-fi-sounding projects, but there's a long way to go. Understanding this is crucial if you want to be able to separate facts from fiction. The future of AI looks extremely promising, but let's not get ahead of ourselves just yet. How promising, you ask?
Well, Andrew Ng is a name you should remember. He's one of the leading ML scientists in the world: a co-founder of Coursera, an adjunct professor at Stanford University, and the founder and one-time leader of the Google Brain Deep Learning Project… You get the idea; you can trust Ng when it comes to AI. And this is what he expects from the field in the near future:
In the next decade, AI will transform society. It will change what we do vs. what we get computers to do for us. Perhaps a few decades from now, there'll be a Quora post where someone asks, 'What would your life be like if you had to drive your own car?'
Andrew Ng
2) General (Strong) AI
Expect scientists to get here within the next few decades. And you'll know when it happens, because computers with general AI will be just as smart as humans.
It’s certainly not an easy task, but scientists say that we’re on the right track. There are still some serious limitations nobody has found a way to transcend. But many are working on that.
The biggest hurdle is translating our surroundings into the language of 1s and 0s. Sure, you can translate any fact into the numerical language computers understand, but how can you translate abstract ideas such as Truth or Liberty? Even trickier: how do you translate Love into numbers? We may have to think of a completely different way to communicate with machines in order to create general AI, because general AI is supposed to go hand in hand with "emotional AI", and emotions, so far, seem all but untranslatable into numbers.
We need quite a few things to get to a point where general AI is a reality.
First of all, we need computing capacity equal to that of a human brain (we may already have it, but not in a form suitable for further AI development). Secondly, we have to better understand how human brains work and how consciousness evolved. And finally, we need to build computers able to become smarter on their own (self-learning machines).
According to the results of a survey conducted in 2013 and published by Vincent C. Müller and Nick Bostrom (the latter a very influential Swedish philosopher and author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies), general AI should arrive in about 25 years.
Brace yourself.
3) Super AI
According to Nick Bostrom, super AI is
An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
Nick Bostrom
Due to the expected self-learning abilities of general AI, it's believed that super AI will come almost immediately after general AI. As opposed to the decades necessary to evolve from weak to strong AI, the period between the last two stages of the expected AI evolution may last no more than a few days. And it may be the most crucial period in the history of mankind.
Because the moment AI-driven machines become a thousand times smarter than humans, they will certainly get beyond our control. That event will decide the fate of humanity.
Knowing this, the most prominent minds in the AI field are currently working on devising ways to make AI safer. In other words: to either stop general AI from evolving into super AI all by itself, or find a way to regulate super AI’s behaviour.
Machine Learning
Now that you know enough about AI, it’s time you learned something about ML.
Machine learning is the basis of all other smart computer services, whether NLP, image recognition, or text analytics. Arthur Samuel, an American AI pioneer and the person who coined the term “machine learning”, described the concept as a subfield of computer science focused on giving
computers the ability to learn without being explicitly programmed.
Arthur Samuel
Explicit programming is, basically, the programming you are already familiar with: writing as many "if-then-else" lines as there are conditions.
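To make the contrast concrete, here's a minimal sketch of explicit programming; the rules and threshold values are invented for illustration. Every decision the program can make has to be spelled out by hand:

```python
# Explicit programming: every decision rule is hand-coded.
# The rules and thresholds below are invented for illustration.

def label_email(num_links: int, sender_known: bool) -> str:
    """Label an email as 'spam' or 'ham' using hand-written if-else rules."""
    if num_links > 10:
        return "spam"
    elif not sender_known and num_links > 3:
        return "spam"
    else:
        return "ham"

print(label_email(num_links=12, sender_known=True))  # -> spam
```

An ML program, by contrast, would be shown many emails already labelled "spam" or "ham" and would work out rules like these on its own.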
In ML, however, scientists create programs which learn from inputs, either by "looking" at example input-output pairs or by finding rules on their own in the raw data available to them.
The difference is what separates unsupervised ML from supervised ML.
Supervised ML
Supervised ML means feeding a program with example inputs and corresponding outputs, thus teaching it how it should map the pieces of information. There are two subcategories of supervised ML problems: regression and classification.
Regression analysis estimates the relationship between variables. It's used when we expect an ML algorithm to predict a new output for a given input, based on the underlying pattern of the input/output training dataset. You can read much more about practical applications of regression here.
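To give you a feel for this, here's a minimal sketch, assuming the scikit-learn library is available and using a tiny invented dataset of flat sizes and prices. The model learns the pattern from the examples; nobody writes the pricing rule by hand:

```python
# A minimal supervised regression sketch using scikit-learn.
# The dataset below (flat size in m^2 -> price) is invented for illustration.
from sklearn.linear_model import LinearRegression

sizes = [[30], [45], [60], [80], [100]]                 # inputs (features)
prices = [90_000, 135_000, 180_000, 240_000, 300_000]   # known outputs (labels)

model = LinearRegression()
model.fit(sizes, prices)          # learn the input -> output mapping from examples

print(model.predict([[70]]))      # predict the price of an unseen 70 m^2 flat
```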
Classification, on the other hand, means identifying to which of an already existing set of categories a new input belongs, on the basis of a dataset containing inputs whose categories are known. For an interesting new discovery concerning a controversial classification problem, read here.
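Classification looks almost identical in code. Here's a minimal sketch, again assuming scikit-learn and using a tiny invented flower dataset; the only real difference from regression is that the outputs are categories rather than numbers:

```python
# A minimal supervised classification sketch using scikit-learn.
# The dataset (petal length, petal width -> species) is invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

features = [[1.4, 0.2], [1.3, 0.2], [4.7, 1.4], [4.5, 1.5]]
labels = ["setosa", "setosa", "versicolor", "versicolor"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(features, labels)           # learn from examples with known categories

print(clf.predict([[1.5, 0.3]]))    # assign a new input to a category: ['setosa']
```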
Unsupervised ML
As opposed to supervised ML, unsupervised ML is expected to work all by itself. It means feeding your program "unlabelled" data (the inputs are uncategorized and have no corresponding outputs) and hoping that it will find the hidden patterns. It's used, for example, in internet security to prevent unknown and previously undetected types of attack.
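Here's a minimal sketch of the idea, assuming scikit-learn and a handful of invented, unlabelled points. The k-means algorithm is told only how many groups to look for, never which point belongs where:

```python
# A minimal unsupervised learning sketch: clustering unlabelled data with k-means.
# The points below are invented; no labels are provided, so the algorithm
# must discover the grouping on its own.
from sklearn.cluster import KMeans

points = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(points)

print(kmeans.labels_)   # e.g. [0 0 0 1 1 1]: two hidden groups discovered
```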
Reinforcement Learning
There's also one more area of ML, called reinforcement learning (RL), which is inspired by behaviourist psychology and responsible for the success of self-driving cars and game-playing computers. It differs from supervised learning in that it doesn't present a program with correct input/output pairs; instead, it teaches the program to take actions so as to maximize rewards in a given environment. In other words, an RL program acts in accordance with a simple motto: the bigger the reward, the better the action leading up to it must have been.
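Here's a minimal, self-contained sketch of that motto: tabular Q-learning (one classic RL algorithm) on an invented one-dimensional corridor, where the agent is rewarded only for reaching the rightmost cell and gradually learns that moving right pays off. All the numbers are arbitrary illustrative choices:

```python
# A minimal tabular Q-learning sketch on an invented 1-D corridor.
# States 0..4; reaching the rightmost state yields reward 1, everything else 0.
import random

N_STATES = 5
ACTIONS = [-1, +1]                      # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate towards reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: should print [1, 1, 1, 1], i.e. always move right.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```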
Deep Learning
Deep learning (DL) is a sort of "transcategory" in ML: a group of ML algorithms which can be either supervised or unsupervised. DL is concerned with algorithms inspired by the communication patterns of biological nervous systems and is the basis on which artificial neural networks are developed. DL architectures have been applied to many different fields, ranging from speech recognition to NLP and machine translation. (In case this got you interested, here's a thorough overview of DL by Andrew Ng).
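To take some of the mystery out of the term, here's a minimal sketch of an artificial neural network written from scratch with numpy: a tiny two-layer network learning the XOR function. The architecture, seed, and learning rate are arbitrary illustrative choices:

```python
# A minimal neural-network sketch: a two-layer net learning XOR with numpy.
# All hyperparameters below are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer: 2 -> 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer: 4 -> 1 unit
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```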
Conclusion
Even though we are still at a very early stage of the AI evolution, the technologies we have come up with are already promising enough to point towards a near future in which computers will be just as smart as humans.
However, filtering the hype is a key skill in the digital era, and AI has been overhyped ever since its inception. So, be careful out there: don't believe everything you read, and check every piece of information twice. There are trusted sources amid all the uncertainty and distorted reality propagated by the bulk of the internet.
And we work hard to turn this blog into an oasis of this kind.
Posted By: Iffy Kukkoo
Resident Editor-In-Chief
Iffy is our exclusive resident technology newshound editor, relentlessly exploring the beauties of the world from a 4th dimensional viewpoint. When not crafting, editing or publishing our IT content, she spends most of her time helping people understand life and its basic principles. You know, the little things around you that you've failed to grasp each day.