You can easily find hundreds of articles on Machine Learning that are worth reading. So why read this one? I have a good reason for you. Let me explain. Reading a pile of articles can leave you with nothing but red eyes. It doesn’t matter how much you have read, because the only thing that makes a difference is understanding the underlying concepts.
What is wrong with the other sources? Here’s a hint: it’s the way you get to know something new. When you jump straight into the field and start exploring it from some remote corner, you see only a small point. Then you do it again and get another point. But you don’t know how to connect them.
By contrast, if you want solid knowledge, you need a systematic approach that starts from grasping how the smaller parts are organised inside the topic you want to learn. In this article, we will build a strong frame so that every following piece of information goes to its shelf.
The first thing worth figuring out is how loud terms like Artificial Intelligence (AI), Machine Learning (ML), and their relatives relate to each other. People easily get confused by things they frequently hear but never pay enough attention to.
You may think of this part (if you’re not a complete beginner) as something elementary that takes a few seconds to google. However, this thought usually prevents people from building a sufficient base. We all tend to assume that every new talk of the town is simple enough to google anytime.
Our learning has become superficial because most information sources are just a few clicks away. We feel there’s no need to memorize much when we can google it. That’s wrong. At least when you want to learn something, not just entertain yourself.
Get rid of such thoughts and grow a habit of learning the essential basics first, and only then allow yourself to move deeper into the field. It’s the only way to fix acquired knowledge in your long-term memory.
The first and trickiest question on this road is what kind of connection exists between AI and ML. Surprisingly many people think that AI is a strictly scientific term while, in fact, it’s rather a reflection of our expectations than of what we have now. It’s also an umbrella term that holds together ML, NLP, image and speech recognition, text processing, and so on.
So why say “AI” when we can say “ML”? Because ML tasks don’t necessarily have to be connected with building machine intelligence. But when we say “AI”, it already means that we aim at reproducing, at the very least, the human way of thinking.
AI scientists have come up with three stages of AI evolution: narrow AI, general AI, and super AI.
The first stage, narrow AI, is where current AI stands. Though it’s an elementary stage, it took time to grow computing power and to invent ML, natural language processing, image and speech recognition, text mining, and other cognitive services for computers.
The name of the stage speaks for itself. Consider chatbots with built-in NLP services. How wide do you think their range of operation is? Usually it covers just one field. You may sometimes be fascinated by a chatbot that excels at solving issues within a certain business domain. But keep in mind that the most difficult challenge is to create a chatbot that can freely talk about anything.
It may be a bit disappointing if you thought that humanity had already built smart computers. But understanding this classification and our current location on the scale is crucial for everyone who doesn’t want to get lost in the hype. Moreover, the future of AI looks extremely promising.
Andrew Ng is an outstanding figure in the ML field. He’s a co-founder of Coursera, a professor at Stanford University, the founding lead of Google Brain… and a lot more, but you get the idea: he’s someone you can trust when it comes to AI. This is how he commented on what we should expect from AI development:
“In the next decade, AI will transform society. It will change what we do vs. what we get computers to do for us. Perhaps a few decades from now, there’ll be a Quora post where someone asks ‘what would your life be like if you had to drive your own car?’”
Humanity expects to reach the next stage, general AI, in the near future. For a computer, possessing general AI would mean becoming just as capable as humans. This stage is hard to achieve because the distance between narrow AI and general AI is much longer than it may seem. It’s not just about adding more fields of knowledge but, perhaps, about changing the way artificial intelligence functions altogether.
The main unsolved problem of AI is that any input has to be translated into the language of numbers. However, many things that surround us simply can’t be translated into strict numbers. This may be the biggest obstacle separating us from the invention of emotional AI, which general AI is supposed to have.
What do we need to get there? First, computing capacity equal to that of the human brain (even where raw capacity exists, we should admit it’s not yet what we expect when talking about AI). Second, we have to make the computer work like the human brain, or let it grow smarter on its own, following the idea of a self-learning machine.
According to the results of a 2013 survey by Nick Bostrom (the Swedish philosopher and author of the book “Superintelligence: Paths, Dangers, Strategies”), we will most likely have general AI within 25 years.
Nick Bostrom explains super AI as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”
Due to the self-learning ability of general AI, it’s believed that the distance between general AI and super AI will be extremely short (perhaps counted in days) but crucial. At the moment AI becomes a thousand times smarter than humans, it will slip out of human control. How that happens could decide the fate of humanity. That’s why the most prominent minds in the field are working on secure ways of handling AI at the advanced stages.
Now you know that AI is a generalization that helps scientists and developers maintain a certain direction and put it into words. Let’s get back to the key technologies we can work with at the current stage of AI.
Machine Learning is the modern basis of every other smart computer service, like NLP, image recognition, text analytics, and so on. Arthur Samuel, an American pioneer of the AI field and the one who coined the term ML, described it as
“the field of study that gives computers the ability to learn without being explicitly programmed.”
To understand what explicit programming is, think about a program that you want to follow basic logic rules. Without ML, you would need to write as many “if-then-else” lines as needed to set all the conditions and, therefore, cover as many inputs as possible. ML allows us to create programs that learn from the inputs, either by “looking” at example input-output pairs or by finding rules in raw data. This division gives rise to two types of ML: supervised and unsupervised.
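The contrast can be sketched in a few lines. Below, the explicit version hard-codes the Celsius-to-Fahrenheit formula, while the “learning” version recovers the same rule from a handful of toy input-output pairs (the data points are invented for this sketch) using an ordinary least-squares fit:

```python
# Explicit programming: the programmer supplies the rule directly.
def fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# Learning from examples: infer y = w*x + b from input-output pairs.
xs = [0, 10, 20, 30, 40]            # toy Celsius inputs
ys = [32.0, 50.0, 68.0, 86.0, 104.0]  # matching Fahrenheit outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Least-squares slope and intercept, computed by hand.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(w, b)  # the program "learned" 1.8 and 32.0 without being told the formula
```

Of course, real ML problems have no exact formula hiding in the data, which is exactly why learning from examples beats hand-writing rules.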
When you feed the program example inputs and their corresponding outputs, you are using supervised ML. In other words, you show the program how it should map one piece of information to another. There are two subcategories: regression and classification.
Regression is applied when you want your program to predict a new output for a given input based on the underlying pattern. Classification problems imply that you have categories and you want the program to put an input into one of them.
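Here’s a minimal classification sketch: a one-nearest-neighbour classifier that assigns a new point the label of its closest labeled example. The points and the “small”/“large” labels are made up for illustration:

```python
# Labeled training examples: ((x, y) coordinates, category).
training_data = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((5.0, 5.5), "large"),
    ((5.5, 5.0), "large"),
]

def classify(point):
    """Assign `point` the label of its nearest training example."""
    def dist2(p, q):  # squared Euclidean distance is enough for comparison
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(training_data, key=lambda pair: dist2(pair[0], point))
    return label

print(classify((1.1, 0.9)))  # "small"
print(classify((5.2, 5.3)))  # "large"
```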
Unsupervised ML comes in handy when you don’t have any labeled examples to show your program, only the data itself. In that case, you want the program to examine the data and come up with a pattern on its own.
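A classic unsupervised example is clustering. The toy one-dimensional k-means sketch below (data values invented for illustration) is given six unlabeled numbers and discovers on its own that they form two groups:

```python
# Unlabeled data: no categories are given, yet two groups clearly exist.
data = [1.0, 1.5, 1.2, 8.0, 8.3, 7.9]
centers = [data[0], data[3]]  # naive initialisation from two data points

for _ in range(10):  # a few refinement passes suffice for this toy data
    clusters = [[], []]
    for x in data:
        # assign each point to the nearest current center
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # move each center to the mean of its assigned points
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(centers))  # two group centers emerge, roughly 1.23 and 8.07
```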
In fact, apart from the supervised and unsupervised types, there’s one more subfield of ML called Reinforcement Learning (RL). It’s used in self-driving cars and in programs that beat people at games. The program relies on a reward system and the responses it receives from the environment. When the reward turns out high, it means the action taken by the program was right.
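The reward-driven loop can be shown in miniature with a two-armed bandit: the agent repeatedly picks an action, observes a reward, and gradually learns which action pays off. The reward probabilities and the exploration rate below are illustrative assumptions, not a production RL setup:

```python
import random

random.seed(0)
true_reward = {"A": 0.2, "B": 0.8}   # the environment; hidden from the agent
estimates = {"A": 0.0, "B": 0.0}     # the agent's learned value of each action
counts = {"A": 0, "B": 0}

for step in range(1000):
    # epsilon-greedy: explore occasionally, otherwise exploit the best estimate
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    # the environment responds with a reward of 1 or 0
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent should settle on "B"
```

No one ever tells the agent that “B” is better; it discovers this purely from accumulated rewards, which is the essence of RL.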
Deep Learning (DL) is one more great thing you may have heard of. It’s a group of ML algorithms that can be either supervised or unsupervised. DL is famous for artificial neural networks, algorithms inspired by the way neural tissue is organised in the human brain. Nowadays, DL approaches are the most efficient for a variety of tasks. (If you got interested, here’s a thorough overview of DL by Andrew Ng.)