By Iffy Kukkoo
19 Nov, 2017
If you read us on a regular basis, you are probably already aware that we’re less interested in the talk and much more in the walk. Technology is thriving and, in the rush to keep up with it, many people understandably get a step or two ahead of what the facts justify. This, however, results in a factual mess of chaotic proportions.
Because, before you notice, fiction becomes fact, and people start convincing you that a robot invasion is already taking place somewhere in a Japanese factory (even though humans are still far smarter than robots and we’re decades away from general AI), or that China is developing the ultimate doomsday device as we speak (even though China is working on quite a few things that are much more important).
But there’s a difference between the potential of something and the actual realization of that potential. Tech writers tend to confuse the two more often than not. It is our goal not to. So, whether we’re writing about VR or ML, we’re always interested in juxtaposing the current state of the art in a field with what the future may hold for it.
And there’s no reason why we shouldn’t do the same with quantum computing. Because, as you will find out in a while, even if you don’t know much about quantum computing yet, you should learn as much as you can – as soon as possible.
Because, in a few years’ time, your Cortana or Siri will most probably be powered by a quantum computer.
It’s too early to speak of traditional computing in the past tense, but there are two strong reasons to believe that our children’s children – or maybe even our children, if born during the past few years – will see traditional computers as remnants of a previous age, much as today’s children see VHS tapes or Walkman players.
First of all, traditional computing was conceived as a way to further expand our brain power, and, in more than one field, it successfully has. However, since it works in a manner quite different from our brains, it may never be able to grow as smart as us. What we’re currently doing with all these supercomputers is essentially amassing more computing power by linking machines together; the underlying technology stays virtually the same and promises less and less with every passing year – especially in the most advanced and promising fields, such as AI.
Secondly, Moore’s law may still be alive, but it is dying nevertheless. Traditional computers developed at such a fast rate that few have stopped to consider what we should do when there is no more room left for further development. It’s simple physics. The smallest transistors of today (10nm) are roughly a thousand times smaller than a red blood cell, but any smaller than 5nm and they stop functioning as transistors: at that scale, electrons start experiencing quantum tunnelling – in other words, they can flow continuously from one gate to the next, virtually eliminating the off state of the circuit, i.e. the 0 value of a bit. True, some researchers are working on workarounds, but the point remains: at some point in the future, it will be physically impossible to make smaller transistors, not to mention economically unviable.
Tech experts are virtually scraping the bottom of the barrel.
In a nutshell, traditional computing has all but reached its limits due to physical barriers: the laws of quantum physics stand as an insurmountable obstacle before its future. And if traditional computers won’t behave as we expect them to once we reach the physical barrier of atom-sized transistors, our best bet may be to develop a radically new type of computer – one that takes the new physical laws into consideration.
We’ve won the game at this level. Let’s see how well we do at the next one.
A bit – as you know – is the basic unit of information in traditional computing, and it holds exactly one of two values. The basic unit of information in quantum computing, however, is the qubit (quantum bit), which, in addition to taking either of those two values, may also hold both simultaneously. As far as you’re concerned, once it lands on the ground, a coin is either heads or tails; as far as quantum physicists are concerned, it may be heads, tails, or any blend of heads and tails.
This is called quantum superposition and is one of the two fundamental principles of quantum mechanics which make quantum computing both more complex and more promising than traditional computing.
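To make superposition slightly more tangible, here is a deliberately simplified sketch of a single qubit in Python – a pair of amplitudes whose squared magnitudes give the probabilities of reading 0 or 1. The class and its names are our own illustration, not the API of any quantum library:

```python
import math
import random

# A single qubit as a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Purely illustrative -- not any library's API.
class Qubit:
    def __init__(self, alpha, beta):
        norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
        self.alpha = alpha / norm  # amplitude of the 0 state
        self.beta = beta / norm    # amplitude of the 1 state

    def measure(self):
        # Measurement collapses the superposition: 0 with probability
        # |alpha|^2, otherwise 1.
        return 0 if random.random() < abs(self.alpha) ** 2 else 1

# The quantum "coin": an equal superposition of 0 and 1.
q = Qubit(1, 1)
print(q.measure())  # 0 or 1, each with probability 0.5
```

Until it is measured, `Qubit(1, 1)` genuinely carries both values at once; the act of looking forces it into one of them.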
The second is called quantum entanglement and we already outlined it here in an attempt to bring you closer to China’s successful QKDN experiments. However, it’s hard to use analogies to bring the phenomenon closer to the general public, so we may be excused for borrowing a paragraph or two from the researchers at IBM Q to paint it a bit more vividly:
Entanglement is a property of many quantum superpositions and does not have a classical analog. In an entangled state, the whole system can be described definitively, even though the parts cannot. Observing one of two entangled qubits causes it to behave randomly, but tells the observer exactly how the other qubit would act if observed in a similar manner. Entanglement involves a correlation between individually random behaviors of the two qubits, so it cannot be used to send a message. Some people call it “instantaneous action at a distance,” but this is a misnomer. There is no action, but rather correlation; the correlation between the two qubits’ outcomes is detected only after the two measurements when the observations are compared. The ability of quantum computers to exist in entangled states is responsible for much of their extra computing power, as well as many other feats of quantum information processing that cannot be performed, or even described, classically.
Did you get it? Don’t worry if you didn’t. Even Einstein had problems with it – in fact, he didn’t even try to sound scientific. For all intents and purposes, to him, quantum entanglement was “spooky action at a distance”. Let’s just leave it at that for now.
If explanations and analogies won’t do, practical implications may make things clearer.
Consider, with IBM’s Dr. Talia Gershon, a simple problem: work out all the possible arrangements of ten guests at one of your wedding tables. If you know anything about mathematics, you have probably already given up, because you know that 10! = 3,628,800 – and that’s how many arrangements there are.
This is an exponential scaling problem, and one that classical computers struggle with. 10 is a small number, yet the number of arrangements a computer must index and search through is already huge. If you had, say, 100 guests at your wedding and wanted a computer to find the perfect arrangement for you, you’d have to look for a supercomputer, because 100 factorial is a number with 158 digits. And a traditional computer would have to go through each and every one of those arrangements individually and compare them afterwards!
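Both numbers above are easy to verify with a couple of lines of Python:

```python
import math

# 10 guests can be seated in 10! different orders...
print(math.factorial(10))             # 3628800
# ...and 100! is a number with 158 digits.
print(len(str(math.factorial(100))))  # 158
```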
What’s different with quantum computers? Well, thanks to the phenomena we mentioned above, a quantum computer doesn’t have to go through the arrangements one by one – it can process them all simultaneously. You get your solution by applying a phase to each of the states when encoding the combinations, then amplifying some answers and nullifying others via interference. The difference is time.
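To make the phase-and-interference description concrete, here is a tiny, purely classical simulation of Grover’s search – the canonical quantum search algorithm – over eight states. Everything in it (the search-space size, the marked entry, the plain amplitude lists) is our own toy illustration, not code from any project mentioned in this article:

```python
import math

# Grover's search over N = 8 states, simulated classically with a
# plain list of amplitudes. (A real quantum computer never stores
# these amplitudes explicitly -- that cost is the classical penalty.)
N = 8          # search space of 8 entries (3 qubits)
target = 5     # index of the one "marked" entry we want to find

# Step 1: start in an equal superposition of all N states.
amps = [1 / math.sqrt(N)] * N

# Roughly (pi/4) * sqrt(N) iterations are optimal -- 2 here.
for _ in range(int(round(math.pi / 4 * math.sqrt(N)))):
    # Step 2: the oracle flips the phase of the marked state.
    amps[target] = -amps[target]
    # Step 3: "inversion about the mean" -- interference amplifies
    # the marked amplitude and suppresses all the others.
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]

probs = [a * a for a in amps]
print(round(probs[target], 3))  # 0.945 after two iterations
```

After just two oracle calls, about 95% of the probability sits on the marked entry – roughly √N steps, where a classical search would need on the order of N.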
A better illustration may be a labyrinth of a billion paths, only one of which leads to the exit. A traditional computer has to walk down each path, one after another, and check whether it’s a dead end before it can tell you where to go. A quantum computer explores all of the paths at once, and you’ll be able to check which path leads to the exit in a minute.
Basically, this means that quantum computing is far better suited to database searching than traditional computing. But it’s also better at molecular modelling, at deep learning, and – which is really frightening – at code breaking.
In fact, it speeds up processes so much that, potentially, it can be better at everything – something John Preskill has recently named quantum supremacy, and something scientists expect to be achieved once quantum computers with 50-qubit processors finally appear. Just imagine the number: if 1 qubit can be in a superposition of 2 states, and 2 qubits in a superposition of 4, then 50 qubits can be in a superposition of 2^50 = 1,125,899,906,842,624 states. At once.
Cori, the fifth-fastest supercomputer in the world, recently simulated the output of a 45-qubit circuit, but the paper documenting the achievement clearly states that going “beyond these capabilities means entering the domain of Quantum Supremacy”. Classical machines will have no way to keep up with quantum technology once 50-qubit processors are shown to be possible.
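The wall Cori ran into is easy to see with back-of-the-envelope arithmetic: simulating n qubits classically means storing 2^n complex amplitudes. Assuming 16 bytes per amplitude (two double-precision floats – a common choice, though simulators vary):

```python
# Memory needed to hold the full state vector of n qubits,
# at 16 bytes per complex amplitude.
for n in (30, 45, 50):
    gibibytes = 2 ** n * 16 // 2 ** 30
    print(f"{n} qubits: {gibibytes:,} GiB")
```

At 45 qubits that is already 524,288 GiB – half a pebibyte – and every extra qubit doubles it, which is why going past roughly 50 qubits leaves classical simulation behind.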
And Google intends to do exactly this by the end of 2017. John Martinis, a professor at the University of California, Santa Barbara, who has been a member of the Google Quantum team since 2014, explains the raison d’être of the experiment in the following manner:
“We’ve been talking about, for many years now, how a quantum processor could be powerful because of the way that quantum mechanics works, but we want to specifically demonstrate it.”
And Google is not alone.
Ever since Richard Feynman proposed the idea of a quantum computer back in May 1981 (“Simulating physics with computers”), people around the world have been working on viable quantum computing solutions relentlessly and with varying success.
D-Wave Systems, a Canadian quantum computing company, has been considered both a pioneer and a leader in the field ever since its inception in 1999. Over the two decades of its existence, however, it has garnered a fair share of controversy as well: a January 2014 article went so far as to question the very “quanticity” of D-Wave’s computers.
The problem is that D-Wave’s quantum computers don’t use the “gate-based” method favoured by IBM and other players in the field; they use the “quantum annealing” approach instead, which offers far less control over the individual qubits and merely takes advantage of the fact that all physical systems tend towards a minimum-energy state. This, however, means that D-Wave’s computers don’t use the qubits’ full potential – raising many eyebrows over whether their computers are really “quantum” and whether a D-Wave chip really produces “quantum speedup”.
Even though most of these studies have returned negative answers, D-Wave Systems has nonetheless manufactured the D-Wave One, “the first commercially available quantum computer”, in 2011, sold its successor, the D-Wave Two, for $15 million in 2013, and announced in 2015 that its first 1000+ qubit quantum computer, the D-Wave 2X, had been installed at the Quantum Artificial Intelligence Lab at NASA’s Ames Research Center.
It was in 2015 that Google’s researchers authored an article claiming that D-Wave’s quantum computers can run certain algorithms up to 100 million times faster than a single-core CPU, which led to Google teaming up with D-Wave and actively buying and using its technology.
In January 2017, D-Wave released its newest quantum computer, the D-Wave 2000Q, along with Qbsolv, a piece of open-source software that solves quadratic unconstrained binary optimization (QUBO) problems on quantum processors.
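For a feel of what a QUBO problem looks like, here is a minimal, made-up instance solved by brute force in plain Python. Qbsolv and D-Wave’s annealers exist precisely because realistic instances have far too many variables for this kind of exhaustive search:

```python
from itertools import product

# QUBO: find the binary vector x minimizing the sum of Q[i,j] * x[i] * x[j].
# This tiny Q is invented for illustration only.
Q = {
    (0, 0): -1, (1, 1): -1, (2, 2): -1,  # linear terms (diagonal)
    (0, 1): 2, (1, 2): 2,                # quadratic couplings
}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2**3 candidate vectors. An annealer instead lets
# the physical system relax towards this minimum-energy configuration.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2
```

The couplings penalize neighbouring variables being on together, so the lowest-energy assignment switches on the two non-adjacent variables – exactly the kind of trade-off annealing hardware settles physically.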
A few months after the D-Wave 2000Q, IBM announced that it had “successfully built and tested its most powerful universal quantum computing processor”, a 17-qubit machine that should form the basis for the first IBM Q early-access commercial systems. IBM Q itself is a relatively new division – established in March 2017 – but it represents the pinnacle of 35 years of quantum research at IBM and the beginning of a new era of commercial quantum computers.
Further evidence of this is the fact that many other companies have joined in. Microsoft, for example, has established Microsoft Station Q, a research lab focused on topological quantum computing, located on the campus of the University of California, Santa Barbara. A year ago, Intel started exploring the possibility of turning silicon into quantum computing material – adapting the existing silicon transistor for quantum computing tasks – hoping to gain an advantage in the field this way.
Proving that quantum computing isn’t just a North American affair, Alibaba signed a memorandum of understanding with the Chinese Academy of Sciences (CAS) and started building a new quantum laboratory in Shanghai. According to Jianwei Pan, professor and executive vice president of University of Science and Technology of China, the CAS/Alibaba Quantum Computing Laboratory should “undertake frontier research on systems that appear the most promising in realizing the practical applications of quantum computing”. Its current objective: to build an international “dream team” for quantum computing research.
PQC is short for Personal Quantum Computer, and it’s an abbreviation that will never catch on – because it’s simply impossible that it will ever become a reality.
Quantum computers are built in a specific way (qubits are basically artificially engineered atoms controlled by a microwave resonator) and work only in extreme conditions – at 0.015 kelvin, i.e. close to absolute zero. That’s colder than outer space, let alone your living room (unless you are a tardigrade). But it is no colder than the conditions already being created artificially inside some of the world’s largest data centres – which is where the future of quantum computers should be envisioned.
So, how would you ever use one? Well, believe it or not, you already can.
In May 2016, IBM announced that it had made one of its 5-qubit quantum computers available online. Many people – 40,000 of them, in fact – from around the world have already used it, most of them coming from Europe (the UK, Switzerland), the USA, Canada, South Africa, and Australia, jointly running about 300,000 experiments. And you can use it too: read the manuals here, and start playing with it here – once you understand how it works.
And soon enough, you’ll be able to do the same with Google’s quantum processors. If you don’t believe us, you can certainly believe the guys at Google, who recently published an article – suggestively titled “Commercialize quantum technologies in five years” – outlining Google’s vision for the future in a manner which offers much promise and even more to look forward to: