While companies are hiring more data scientists and laying off call-center staff, we still don’t know how to make machines understand information the way humans do. And that is only one of the numerous limitations of Artificial Intelligence that keep data scientists and machine learning experts awake at night.
Different limitations of Artificial Intelligence lead to different types of failure: Microsoft’s racist chatbot Tay, road accidents involving self-driving cars, insufficient filtering of inappropriate content. Some of these failures carry ethical, financial, and even life-threatening consequences.
If we looked at failures alone, we probably wouldn’t have invented most of the things that surround us today. Let’s admit it: everything has its limits, but that is no reason to abandon exploration. On the contrary, the better we understand what limits us, the better our chances of getting beyond those borders.
Even people outside the tech field know that Artificial Intelligence (AI) and Machine Learning (ML) are two peas in a pod. While Artificial Intelligence is an umbrella term for a wide range of techniques applied to data, ML is the core of the whole AI field.
The most advanced ML techniques require building complex Artificial Neural Networks (ANNs) on the basis of different ML algorithms. By feeding an ANN with data, scientists expect highly precise predictive models as the outcome. Those predictive models are later used for a wide range of purposes: from filtering spam to detecting cancer tumors.
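To make the pipeline concrete, here is a minimal sketch of the idea: feed data into a tiny neural network and let it adjust its parameters until its predictions improve. This is a toy illustration with made-up numbers (a two-input network learning XOR), not a production model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the XOR function, a classic target a single neuron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units: the weights and biases are the model's parameters.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass: data flows through the network to produce predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every parameter to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())  # predictions should approach [0, 1, 1, 0]
```

Real-world models follow the same loop, only with far more data, far more layers, and far more parameters, which is exactly where the limitations discussed below come from.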
Despite all the accomplishments in Deep Learning (the collective name for the ML techniques built around ANNs), we are still quite far from solving the main problem of AI – the lack of true consciousness. Since existing ML approaches don’t seem capable of solving this problem, there is solid ground for expecting the arrival of new (perhaps competing with ML) techniques in the AI field.
Let’s take a closer look at the limitations of AI, from the top down:
The problem is that even the smartest programs don’t learn the way humans do. The information flow inside the computer “brain” relies on an architecture that vastly differs from the one the human brain is based on.
Just think about how many examples it takes to teach a human being to recognize an object. One or, perhaps, a few, right? It’s completely different when it comes to teaching a computer program. If you want to make it recognize, say, an object, you will need large datasets.
You may think: wait, but image recognition doesn’t seem to be a problem anymore. You are right! Conventional image recognition isn’t a problem if you are armed with the data and you don’t need the program to understand the context. The latter is the real problem – the one we don’t know how to solve.
From text analytics to filtering inappropriate video content, we face the same problem: even the most advanced AI-powered programs are unable to grasp the underlying context. You can train them to recognize visual patterns (in image and emotion recognition) or audio patterns (in speech and voice recognition), but they still won’t be able to get the full picture.
Don’t get me wrong, none of the above means that modern AI is useless. ML approaches are extremely useful when it comes to processing a lot of data. They can yield insights you wouldn’t be able to get by yourself, simply because computing speed isn’t the strongest side of the human brain. It’s just that modern AI still lacks the main thing that makes us human – consciousness.
As I mentioned above, the most advanced approach in AI is Deep Learning (DL), a subfield of ML. In DL, scientists work specifically with ANNs. It typically takes a lot of data to train an accurate predictive model, and, consequently, ANN architectures keep growing more complex.
Here’s the problem: the more complex ANNs become, the more difficult it becomes to understand how they produce their predictions. This is how MIT professor Tommi Jaakkola explains the problem:
“If you had a very small neural network, you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
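To put rough numbers on the scale Jaakkola describes, here is a back-of-the-envelope calculation. It assumes a plain fully connected network (real architectures are rarely this simple), so treat the figures as illustrative only.

```python
def dense_param_count(layer_sizes):
    """Count weights + biases in a fully connected network of the given widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A tiny, inspectable network: 2 inputs, 4 hidden units, 1 output.
print(dense_param_count([2, 4, 1]))     # 17 parameters

# The scale from the quote: ~100 layers of 1,000 units each.
print(dense_param_count([1000] * 100))  # 99,099,000 parameters
```

Seventeen parameters can be read off by hand; tens of millions cannot, which is the transparency problem in a nutshell.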
Some scientists consider this a natural stage in how human-like intelligence may evolve; after all, there are still parts of human intelligence we can’t explain either. On the other hand, the lack of transparency in ML predictions holds other specialists back from using the results, however precise they are.
Since the first ML algorithms were invented in the 1950s, AI progress was long measured by computing power. The doubling of transistor counts every two years (the law formulated by Gordon Moore in 1965) eventually led to the blooming of Data Science in the 2000s. However, even under the most optimistic scenario, Moore’s law is expected to expire within about five years. It is time for us to find new ways of increasing computing power.
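As a quick illustration of what “doubling every two years” means in practice (simple arithmetic, not a hardware forecast):

```python
def projected_growth(years, doubling_period=2):
    """Growth factor after `years` if capacity doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(projected_growth(10))         # 32.0 -- a 32x jump per decade
print(round(projected_growth(50)))  # ~33.5 million-fold over 50 years
```

That exponential curve is what made the data boom possible; losing it is why alternative architectures matter.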
Horst Simon, Berkeley Lab Deputy Director, says that over the last three years supercomputers have already felt the consequences of the slowdown in computing power growth. They are simply not getting much better, which makes scientists and engineers focus on paving the way for supercomputers with alternative architectures.
There is much for us to do: make smart computer programs perceive the world the way humans do, “translate” the work of ANNs into understandable language, create supercomputers that can outlive Moore’s law, and more. Meanwhile, we can let AI-powered programs do what they are currently able to: process data and retrieve insights. Significantly more is yet to come!