Where Will Machine Learning Be In Ten Years?

By Iffy Kukkoo

06 Jul, 2017

Are We Heading in the Right Direction?

If we want to push the limits and start exploring higher forms of AI, we may eventually need a completely new way to build self-learning computers. For the time being, though, machine learning (ML) looks like the most promising application of artificial intelligence (AI) we have invented.

ML is extremely useful for processing data, finding patterns, and making predictions. However, machines beating humans at chess is still quite far from where we would like to be, which inevitably leads to the question: are we heading in the right direction?

If you have typed a similar question into Google, you have probably already gone through at least a few of the hundreds of Quora discussions on the subject and read quite a few conflicting predictions about ML's prospects. The interest is only natural: times change quickly, and the buzz is no longer about virtual and augmented reality but about brain-computer interfaces and quite a few similar approaches to brain emulation.

Your problem: how to separate fact from fiction and how not to lose yourself in the hype. As always, the first and foremost requirement is a clear picture of what the current problems and issues are and what needs to be done to solve them.

Which, not that coincidentally, is the objective of this article.

 

ML: Problems and Possible Solutions

1) The Way a Machine Learns

Humans learn in quite a different manner from machines. If, for example, you want to teach a child to tell dogs from cats, all you need to do is show the child a dog and a cat once or twice, and the child will never confuse them again. ML doesn't work this way. It would take far more than a single dog/cat pair to make a program recognize cats and dogs; in fact, it would take thousands of them.
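
To make the scale of the problem concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset as a stand-in for labelled cat/dog images (none of this comes from the article itself), of how a simple classifier's test accuracy typically climbs only as the training set grows from a handful of examples into the thousands:

```python
# A minimal sketch of the data-hunger point above, using scikit-learn on a
# synthetic dataset as a stand-in for "cat vs. dog" images. The dataset, the
# model and the sample sizes are illustrative assumptions, not a real benchmark.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Pretend each row is an image feature vector labelled cat (0) or dog (1).
X, y = make_classification(n_samples=20_000, n_features=50,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

for n in (20, 200, 2_000, 10_000):        # from a handful of examples up to thousands
    clf = LogisticRegression(max_iter=1_000)
    clf.fit(X_train[:n], y_train[:n])     # train on only the first n examples
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{n:>6} training examples -> test accuracy {acc:.2f}")
```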

In the early days of ML algorithms, the problem was that such huge datasets demanded more computing power than we could build. Nowadays, computers are powerful enough to process as much data as necessary. The new problem is that, even so, machines are not getting as smart as humans, since they seem to learn in a much more complicated way.

Then why not teach them to think like humans? Simple solution, right? In theory, of course; in practice, not so much.

Scientists have been trying to emulate the human brain for a while now, and the process goes the way you would expect it to: first scan the brain, then model each and every functional node so that, combined, the models form a manufactured brain that is a complete replica of the original. And if it is a complete replica, why should it work any differently?

Well, why should it? We just don't know yet. And no matter how well we do, consciousness is still something we can only hope the emulated brain will achieve all by itself, because we have no idea where we would even start. We don't even know how much more powerful our computers need to become before something like whole-brain emulation is feasible!

But, on the bright side, the common roundworm has a much smaller and less complicated nervous system: 302 neurons and 95 muscle cells, all of which have been researched thoroughly over the past century. So, was that enough? Did scientists manage to recreate the worm's brain?

Yes, they did! And they did the first thing you would want them to do: they uploaded it onto a Lego robot, and it worked almost as expected. Wait, it gets even more fascinating: now they want to recreate every single cell of the worm.

Immortality anyone?


2) The Narrowness of ML

One of the main reasons AI is still considered to be in its first stage of evolution is its narrowness. We have already talked about AI-powered chatbots and explained why they are unable to converse in a human-like manner: they can only simulate being human on a small scale, i.e. when the conversation stays within a topic they understand. Few things are more challenging than developing a chatbot able to discuss “everything and nothing” at the same time.

The problem is something you wouldn't expect to be a problem: computers don't memorize things as well as people do. You wouldn't discuss the same thing twice with the same person, but a computer might. To overcome this, AI researchers are working on next-generation artificial neural networks (ANNs). Many promising ideas are being explored at the moment, and you can already read a thing or two about some of them: end-to-end memory networks, attention-based recurrent neural networks, reinforcement-learning neural Turing machines, and so on.
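
To give a flavour of one of those ideas, here is a minimal NumPy sketch of the attention mechanism that attention-based networks (and, in spirit, memory networks) are built around; the array shapes and random values are assumptions chosen purely for illustration, not taken from any of the papers mentioned above:

```python
# A minimal NumPy sketch of (scaled dot-product) attention, the building block
# behind the attention-based networks mentioned above. Shapes and values are
# illustrative assumptions, not taken from any particular paper.
import numpy as np

def attention(queries, keys, values):
    """Return a weighted mix of `values`, weighted by query/key similarity."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the stored items
    return weights @ values                           # attended "memory" read-out

rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 8))      # 5 stored items ("memories"), 8 features each
query = rng.normal(size=(1, 8))       # what the model is currently "thinking about"
print(attention(query, memory, memory).shape)   # -> (1, 8)
```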

3) Self-Learning and the “Black-Box” Problem

To a certain extent, self-learning is already part of advanced ML systems. Much of it comes from a special type of machine learning called reinforcement learning (RL). Thanks to RL algorithms, self-driving cars are becoming safer than cars driven by humans, and programs such as AlphaGo are able to beat the best Go players in the world.
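
For a rough feel of how RL works in general, here is a tiny tabular Q-learning sketch on a made-up one-dimensional “corridor” world. To be clear, this is a generic illustration, not AlphaGo's actual algorithm (which combines deep neural networks with tree search), and the environment and hyper-parameters are assumptions made purely for this example:

```python
# A tiny tabular Q-learning sketch: the general idea of learning from reward,
# not AlphaGo's actual algorithm (which adds deep networks and tree search).
# The "corridor" environment and the hyper-parameters are assumptions made
# purely for illustration.
import random

N_STATES = 6          # states 0..5; the reward sits at state 5
ACTIONS = (-1, +1)    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for _ in range(500):                     # 500 training episodes
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy should step right (+1) from every state.
print({s: greedy(s) for s in range(N_STATES - 1)})
```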

However, this is still a far cry from what humans conventionally understand as “self-learning”. It is a complex issue, so much so, in fact, that even the people who develop self-learning machines are unable to understand it completely.

In other words, even though it is becoming relatively easy to build some of these self-learning AI models, it is becoming nearly impossible to comprehend their inner workings. It's not something anyone likes to hear, but it's true: even the creators of the most advanced ML algorithms can't always explain how their machines reach a given conclusion.

Tommi Jaakkola, an MIT professor of electrical engineering and computer science whom we have already quoted on this subject, calls this the “black-box” problem of ANNs. He explains:

 

You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense, it opens up a different way of communicating with the model.

Tommi Jaakkola

At MIT, Jaakkola and his colleagues are working hard to rationalize ML processes, and their research offers more insight into how ML algorithms behave when dealing with natural language processing (NLP) problems.
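
To make the “black-box” idea a little more concrete, here is one common and much simpler trick for peeking inside an opaque model: train a small, interpretable surrogate to mimic the opaque model's predictions and read the surrogate instead. This is a generic illustration, not the MIT group's method; the dataset and both models are assumptions chosen for the example:

```python
# A generic illustration of one way to peek inside a "black box": train an
# interpretable surrogate (a shallow decision tree) to mimic a more opaque
# model's predictions. This is NOT the MIT group's method, just a common,
# simple idea; the data and models are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5_000, n_features=8, n_informative=4,
                           random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's *outputs*, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Surrogate agrees with the black box on",
      f"{(surrogate.predict(X) == black_box.predict(X)).mean():.0%} of inputs")
print(export_text(surrogate))   # a human-readable approximation of the model's logic
```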

 

Conclusion

If all goes well, ML should help us reach artificial general intelligence (AGI), the point at which machines become as intelligent as humans, within a fairly short period of time. Many in the scientific community expect that we may get there in 20 to 30 years.

No more than a decade should pass before the next great leap in technology; we can't know its exact nature, but it will most likely be AI-related. Whether that leap will be achieved through ML techniques already available to us and further developed in the meantime, or whether current ML approaches will have become irrelevant by then, is something few would want to hazard a guess about at the moment.

One thing’s for sure, though: even the least optimistic predictions mean that some of us could still be alive when the technology capable of making us digitally immortal appears for the first time. A pretty nice thing to look forward to.

Posted By: Iffy Kukkoo
Resident Editor-In-Chief

Iffy is our exclusive resident technology newshound editor, relentlessly exploring the beauties of the world from a 4th dimensional viewpoint. When not crafting, editing or publishing our IT content, she spends most of her time helping people understand life and its basic principles. You know, the little things around you, that you've failed to grasp each day.
