When it comes to Artificial Intelligence (AI) and its possibilities, Machine Learning (ML) seems to be the most promising field. ML is extremely useful for processing data, finding patterns, and making predictions. However, progress in AI as a whole is still fairly modest, which inevitably raises questions about what role ML will play in the field's future.
If you have ever asked Google about the prospects of ML, you probably found yourself surrounded by many people like you in forums and Quora discussions. At a time when the buzz around virtual and augmented reality is being replaced by the new talk of the town – the brain-computer interface that both Mark Zuckerberg and Elon Musk are working on – it's quite natural to be interested in such questions.
The question is how to approach the topic critically and not get lost in the hype. The first and foremost condition, as always, is having a clear picture of the current problems that need to be solved before jumping to the next level. In this article, we will look at the main problems of ML and see where we can go from there.
Programs don't learn the way humans do. If you were teaching a kid something new – for instance, to tell dogs from cats – you would have the kid look at them once or twice, and the kid wouldn't confuse them next time. ML doesn't work this way: it takes huge datasets to make a program recognize cats and dogs.
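The data hunger can be seen in miniature with a toy experiment. The sketch below uses made-up 2D features (imagine hypothetical "ear shape" and "snout length" scores – these numbers are invented for illustration, not a real dataset) and a simple nearest-neighbor classifier in pure Python: with one example per class, accuracy is shaky; with a hundred, it climbs.

```python
import random

random.seed(0)

def sample(label, n):
    # Hypothetical 2D features for each animal; "cat" and "dog" clusters
    # are centered apart but overlap, so a single example is not enough.
    cx, cy = (0.0, 0.0) if label == "cat" else (1.0, 1.0)
    return [((random.gauss(cx, 0.5), random.gauss(cy, 0.5)), label)
            for _ in range(n)]

def nearest_neighbor(train, point):
    # Predict the label of the closest training example (1-NN).
    def sq_dist(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(train, key=sq_dist)[1]

def accuracy(n_train):
    # Train on n_train examples per class, evaluate on a fresh test set.
    train = sample("cat", n_train) + sample("dog", n_train)
    test = sample("cat", 200) + sample("dog", 200)
    hits = sum(nearest_neighbor(train, p) == label for p, label in test)
    return hits / len(test)

for n in (1, 10, 100):
    print(n, round(accuracy(n), 2))
```

A child needs one or two looks; the classifier's accuracy depends heavily on how many labeled examples it is given.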
Ever since the main ML algorithms were invented, data was the main obstacle – until computers became powerful enough to process as much of it as learning requires. The remaining problem is that humanity still doesn't know how to bring machine learning closer to the way humans learn.
If you think about it for a moment, you may come up with the suggestion to imitate the way our brain works. Sounds pretty logical, doesn't it? Logical – yes; easy – no.
In the scientific community, this strategy is called “whole brain emulation”. The main technique for achieving it is scanning the human brain and modeling every functional node, so that together the models would work exactly as the human brain does. To make this possible, however, we need significantly more powerful computers.
Even though emulation is going quite slowly – so far it has been possible to emulate only a small slice of a flatworm's brain – the strategy is seen as quite promising. Whole brain emulation is even expected to help us reach immortality.
At the same time, it is still unclear whether this strategy can help build machines with consciousness.
One of the main reasons the current stage of AI is considered weak is the narrow range of issues it can operate within. If you are familiar with AI-powered chatbots, you may know that they can chat in a pretty human-like fashion – but only as long as the conversation stays within a narrow field. The most challenging task is to build a chatbot that can maintain an ordinary conversation about “nothing and everything”.
The major impediment to building AI agents of a broader character is the lack of memory that current ML approaches provide. To overcome this, AI researchers are trying to take Artificial Neural Networks (ANNs) to the next level. A lot of changes are certainly yet to come, but some great examples have already appeared, among them End-To-End Memory Networks, Attention-Based Recurrent Neural Networks, and Reinforcement Learning Neural Turing Machines.
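The common thread in these architectures is attention over an external memory: the model scores each stored memory slot against the current query and reads back a weighted blend. A minimal sketch of that read step, with toy hand-picked vectors and no learned parameters (the real networks learn the embeddings end to end):

```python
import math

def softmax(scores):
    # Turn raw similarity scores into attention weights that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(memory, query):
    # Score every memory slot by its dot product with the query,
    # then return the attention-weighted sum of the slots.
    scores = [sum(m * q for m, q in zip(slot, query)) for slot in memory]
    weights = softmax(scores)
    dim = len(query)
    return [sum(w * slot[i] for w, slot in zip(weights, memory))
            for i in range(dim)]

# Three memory slots standing in for embedded facts the model has stored.
memory = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
query = [1.0, 0.0]  # a query resembling the 1st and 3rd slots
print(attend(memory, query))
```

Because the read is a soft blend rather than a hard lookup, the whole operation is differentiable, which is what lets these memory-augmented networks be trained with ordinary gradient descent.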
To a certain extent, self-learning is already present in some advanced ML algorithms, most of which belong to a subfield of ML called Reinforcement Learning (RL). Thanks to RL algorithms, self-driving cars are arguably becoming safer than cars with human drivers, and programs such as AlphaGo can beat the world's top Go masters.
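The core RL idea is that the agent learns from reward signals rather than labeled examples. A minimal illustration is tabular Q-learning on a toy one-dimensional corridor (an invented example, far simpler than anything driving a car or playing Go): the agent starts knowing nothing, and trial-and-error alone teaches it that moving right leads to the reward.

```python
import random

random.seed(1)

# A 1-D corridor: states 0..4, with a reward only at the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                        # training episodes
    state = 0
    while state != GOAL:
        if random.random() < epsilon:        # explore a random action
            action = random.choice(ACTIONS)
        else:                                # exploit the best known action
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = nxt

# After training, the greedy policy should be "go right" in every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody labels any state as "good"; the value of early moves is inferred backwards from the delayed reward, which is the same principle, at toy scale, behind systems like AlphaGo.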
However, self-learning is still far from its conventional “human” meaning. It is certainly a complex problem that entails the two problems discussed above (just as all these problems relate to each other), and on top of that, its biggest obstacle is a major gap in our knowledge.
The problem is that AI scientists still don't understand how exactly learning happens. In other words, it's relatively easy to build the models, yet impossible to understand how they work. Because of this significant lack of transparency, even the creators of ML algorithms aren't aware of how AI reaches its conclusions.
Tommi Jaakkola, an MIT professor of electrical engineering and computer science, calls it the “black-box” problem of ANNs. He says:
“You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”
At MIT, a group of scientists is working on making the rationalization of the ML process possible. In this study, you can see how the approach works on the example of NLP problems.
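One simple, widely used way to peek inside a black box (not the MIT method itself, just an illustration of the general idea) is perturbation: hide each input feature in turn and measure how much the prediction moves. The sketch below uses a hypothetical linear "model" whose weights the explainer never sees – it only calls `predict`:

```python
def model(features):
    # Stand-in black box: the explainer gets predictions, not internals.
    weights = [0.1, 2.0, 0.0, -0.5]  # hidden from the explanation code
    return sum(w * f for w, f in zip(weights, features))

def occlusion_importance(predict, features, baseline=0.0):
    # Importance of feature i = how much the prediction changes when
    # that feature is replaced by a neutral baseline value.
    full = predict(features)
    scores = []
    for i in range(len(features)):
        perturbed = features[:i] + [baseline] + features[i + 1:]
        scores.append(abs(full - predict(perturbed)))
    return scores

x = [1.0, 1.0, 1.0, 1.0]
print(occlusion_importance(model, x))  # feature 1 dominates the prediction
```

The explanation is built entirely from input-output behavior, which is exactly why such techniques matter: they work even when nobody, including the model's creators, can read the internals.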
The next level of AI, which ML is expected to bring us to in the near future, is Artificial General Intelligence (AGI). AGI implies that machines will become as intelligent as humans. Much of the scientific community expects us to get there between 2030 and 2045.
In ten years, we will most likely experience a great leap in tech, which will inevitably influence other fields – and it is most likely ML that will take us there. However, it's far from certain that current ML approaches will still be relevant by then.