Sunday, October 29, 2017

Scott Aaronson at the Austin Quantum Computing meetup

We have heard that quantum computers are supposed to be much faster than classical ones. But what exactly does that mean, and why couldn't computers based on other exotic physical models of computation be just as fast? Scott Aaronson examined such models and dispelled some popular myths about quantum computing at the Austin Quantum Computing meetup in May of 2017.

His talk might not have had a whole lot of new material for those who read his blog. It was a tour de force of the topics he often addresses there, and a synopsis of what a layperson needs to know about quantum computing: what makes it so fascinating, but also how to keep your expectations of it realistic.

Scott started his talk with a look at some hypothetical alternative forms of computing. What physical models of computation would be alternative enough to violate the Extended Church-Turing thesis?

The Church-Turing Thesis is closely related to the definition of computability: it says that the computable functions are exactly those that can be computed by a Turing machine. The Extended Church-Turing Thesis says that a Turing machine can simulate any other physically realizable computer with at most polynomial overhead. Are there forms of computing that challenge the Extended Church-Turing Thesis? What kind of physics could a computer possibly be based on to overturn it? Scott examined hypothetical computers based on exotic conditions where the everyday laws of physics break down.

Relativity Computer

Scott's first example was a relativity computer. How come no one talks about relativity computers, he asked. The idea is simple: you start your computer working on some really hard problem -- maybe an NP-hard problem -- and leave it on Earth while you take off into space at relativistic speed. From the computer's perspective, billions of years have passed and civilization has collapsed. But from your perspective, thanks to relativistic time dilation, only ~20 years have passed. You come back, miraculously find your computer in the rubble, still connected to a power source, and read out the answer to your hard computational problem. So why hasn't anyone tried it? If you're worried that your friends will be dead in the distant future, just bring them on the spaceship with you.

Humor aside, the question is: would such a "relativity computer" (well, in this case the computer is ordinary; it's you travelling at relativistic speed that makes the answer appear quickly) provide a fast solution to NP-complete problems? Would the problem be solved in polynomial time from your perspective?

The answer to that, Scott says, has to do with the amount of energy it would take to accelerate to relativistic speed. If you want an exponential speedup in only a polynomial amount of time as experienced by you, you would have to accelerate so close to the speed of light that it would take an exponential amount of energy. So before your spaceship even takes off, just fueling it up would take exponential time.
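Here is a back-of-the-envelope sketch of that energy argument in Python (the ship's mass is a made-up number; the point is only the scaling). The kinetic energy grows at least linearly with the time-dilation factor gamma, so an exponential dilation factor costs exponential energy:

    import math

    c = 299_792_458.0   # speed of light, m/s
    m = 1e6             # ship's rest mass in kg -- a made-up number

    # To experience T years while 2**n * T years pass on Earth, you need
    # Lorentz factor gamma = 2**n.  Kinetic energy E = (gamma - 1) * m * c**2
    # grows linearly with gamma, so exponential dilation costs exponential energy.
    for n in (10, 20, 40, 80):
        gamma = 2.0 ** n
        energy_joules = (gamma - 1) * m * c ** 2
        print(f"2^{n}x time dilation -> gamma = {gamma:.3g}, "
              f"kinetic energy ~ {energy_joules:.3g} J")

(And this is optimistic: carrying your own fuel makes the real cost even worse, by the rocket equation.)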

It can sometimes seem like you could achieve an exponential speedup for some problem by exploiting a particular physical process, but to really evaluate any such proposal you have to account for everything we know about physics -- such as the energy the computation would require.

Zeno's computer

Zeno's computer is a hypothetical computer in which each operation takes only half the time of the previous operation. This scenario could occur at the so-called Omega Point. The Omega Point is a theological notion, but according to some physicists, it could occur under some highly debatable conditions. (I don't think Scott mentioned the Omega Point in his talk; I just remembered this trope because it seemed fitting.) For example, the quantum computing pioneer David Deutsch envisions the Omega Point happening near the time of the Big Crunch, as the universe oscillates faster and faster. By the time the universe collapses, the computations have achieved nearly infinite speed, and if you simulate the whole universe on the computational power of those oscillations, the world will never end for you. The closer the real thing comes to its end, the faster the oscillations go, and the longer the "subjective" time you'll be able to simulate. Here is a quote from David Deutsch's book "The Fabric of Reality" where he discusses the Omega Point; scroll down to the paragraph "The key discovery in the omega-point theory...".

On the Omega Point computational substrate you would be able to solve NP-hard problems fast. But according to our best cosmological models, the universe is not going to end in a Big Crunch. And even aside from this particular scenario, Scott points out a fundamental problem with Zeno's computer: it relies on an infinite amount of computational capacity, but any space-time region has only a finite information storage capacity. He says so in a comment on his blog:

"Rahul #50:

Does Nature (or an existing proof) prevent undecidability in statements involving finite times / spaces / lattices & precision?

Mathematically, if we know that a problem only requires enumerating a finite list of possibilities, then essentially by definition that problem is computable. This is why, any time you see a new problem that's been proved at least as hard as the halting problem, that problem will always involve some element that goes to infinity (if the title and abstract aren't forthcoming about this, read the main text and see!).

Physically, it's conceivable that we could have lived in a universe where an infinite amount of stuff could get done in finite time (e.g., by what I call the "Zeno Computer," that does one step in 1 second, the next in 1/2 second, the next in 1/4 second, and so on). In such a universe, of course the halting problem could be solvable in finite time.

But the current conjecture in theoretical physics -- primarily because of the work of Jakob Bekenstein on black hole thermodynamics, which I blogged about before -- is that we don't live in such a universe. Rather, we seem to live in a universe that can be modeled as a quantum computer where each finite region of space stores at most ~10^69 qubits per square meter of enclosing surface area (with the bound saturated only by black holes), and where those qubits are operated on at most ~10^43 times per second. In such a universe, of course the halting problem would not be solvable."

Also, he says in an excerpt of his talk Big Numbers:

While no one has tested this directly, it appears from current physics that there is a fundamental limit to speed, and that it's about 10^43 operations per second, or one operation per Planck time. Likewise, it appears that there's a fundamental limit to the density with which information can be stored, and that it's about 10^69 bits per square meter, or one bit per Planck area. (Surprisingly, the latter limit scales only with the surface area of a region, not with its volume.)

What would happen if you tried to build a faster computer than that, or a denser hard drive? The answer is: cycling through that many different states per second, or storing that many bits, would involve concentrating so much energy in so small a region, that the region would exceed what's called its Schwarzschild radius. If you don't know what that means, it's just a fancy way of saying that your computer would collapse to a black hole. I've always liked that as Nature's way of telling you not to do something!

One of the few things about quantum gravity that everybody agrees on, says Scott Aaronson, is that it places fundamental limits on how much computation you can pack into a bounded region.
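Those two limits follow directly from the Planck scale. A quick sanity check in Python (using standard CODATA constants, and assuming the holographic bound of one bit per 4·ln(2) Planck areas) reproduces the numbers Scott quotes:

    import math

    hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
    G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
    c    = 299_792_458.0       # speed of light, m/s

    t_planck = math.sqrt(hbar * G / c**5)   # Planck time, ~5.39e-44 s
    l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.62e-35 m

    ops_per_second = 1.0 / t_planck                      # one op per Planck time
    bits_per_m2 = 1.0 / (4 * l_planck**2 * math.log(2))  # holographic bound

    print(f"max ops/sec  ~ {ops_per_second:.2e}")   # ~1.9e43, i.e. ~10^43
    print(f"max bits/m^2 ~ {bits_per_m2:.2e}")      # ~1.4e69, i.e. ~10^69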

Other models of computation based on the physical world

Sometimes people suggest that soap bubbles could solve NP-hard problems in polynomial time just by finding the minimum surface when they are connected. Finding the minimum surface for connected soap bubbles is an NP-hard problem, yet people have observed that soap bubbles settle into a minimum surface very quickly, prompting the idea that a polynomial-time algorithm for finding the minimum surface must exist. And if we could encode an NP-hard problem as a soap-bubble problem -- in effect "feeding" it to a soap bubble computer -- then we could solve it in polynomial time.

Scott Aaronson giving a presentation at the Austin Quantum Computing meetup

The problem is that bubbles do get stuck in local optima, says Scott. In other words, the surface they settle into is not guaranteed to be the absolute minimum, only some local minimum. Scott discusses this in more detail in his paper "NP-complete Problems and Physical Reality" (PDF).

In his teens, Scott experimented with the surfaces soap bubbles form -- the closest he has ever come to being an experimentalist. And in his experiments the bubbles got stuck in local optima. "But I haven't tried every possible brand of soap," Scott quipped.

The same is true of using protein folding to solve NP-complete problems. Finding the optimal configuration of a protein can be modeled as an NP-complete problem, and yet every cell in our bodies does it constantly. But despite enormous selection pressure not to get trapped in local optima, proteins still sometimes do, and that can lead to prion- and amyloid-related illnesses.

That's the basic problem with all these physics-based computations: systems in nature do get stuck in local optima. "This may sound silly," says Scott, "but every few months I get calls from popular science writers who ask me: what about this? Doesn't this violate the Church-Turing thesis? But the things they suggest usually have the problems I mentioned."
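As a toy illustration of my own (not an example Scott gave), here is the local-optima trap in miniature: a greedy "downhill" search on a bumpy one-dimensional landscape halts at the nearest dip, which is usually not the global minimum -- exactly what soap films and folding proteins do:

    import math

    def energy(x):
        # a bumpy landscape: a parabola with cosine ripples on top,
        # so it has several local minima besides the global one
        return (x - 4) ** 2 + 6 * math.cos(3 * x)

    def local_search(x, step=0.01):
        # greedily move downhill until no neighboring point is lower
        while True:
            best = min((x - step, x, x + step), key=energy)
            if best == x:
                return x
            x = best

    stuck = local_search(0.0)    # halts at the first dip it reaches
    lucky = local_search(3.5)    # started near the deepest well
    print(f"from 0.0: x={stuck:.2f}, energy={energy(stuck):.2f}")
    print(f"from 3.5: x={lucky:.2f}, energy={energy(lucky):.2f}  (much lower)")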

When Scott Aaronson first heard about quantum computing as a teenager, he was similarly skeptical. He thought some physicist must not have understood the Church-Turing thesis. So he had to find out about quantum mechanics for himself.

Does quantum computing bypass the Extended Church-Turing Thesis?

Does it? Skipping ahead, the smart people of the internet think it very likely does.

But back to Scott Aaronson's story. Setting out to learn quantum mechanics, Scott found that it is not as hard to learn as commonly assumed. As he says, quantum mechanics turns out to be incredibly simple once you take the physics out of it. And no, he doesn't mean that in a woo-woo, New Agey sense. He adds: "The way I see it, it's a level below physics. It's an operating system that the rest of physics runs on as application software. And what that OS is, is probability theory with minus signs."

Scott Aaronson's slide on quantum mechanics as probability with minus signs. It shows the double-slit experiment, with a smiley-faced photon going through two slits and interfering with itself.

"Probability with minus signs" refers to the concept of probability amplitude, which Scott has written about in more detail in this Quantum Computing Since Democritus series article. Here, Scott showed how the concept of probability amplitude (a quantity that can be negative) explains the phenomenon of quantum interference. Interference is what causes some "paths" leading to an observed outcome (e.g. the photon going through either slit in the famous double-slit experiment) to cancel each other out, because one has a plus sign and the other has a minus sign.

"Probabilities as complex numbers come up in the double slit experiment," says Scott. "If I close off one of the slits, the photon appears in places where it doesn't want to appear before. By decreasing the number of choices, I can increase the likehood of an event. Physicists were eventually forced to say: the photon has some amplitude for going through the first slit, and some amplitude for going through the second slit. And amplitudes are complex numbers. You have to add the amplitudes of all the slots, and then take a square. That's a positive number."

This approach lets you learn enough quantum mechanics, from the "right angle", to understand quantum computing -- and you don't need differential equations and Schrödinger's wave equation to get there. The latter is the traditional approach to teaching quantum mechanics, and, according to Scott, it scares people away. "Richard Feynman in his book "QED" doesn't even mention complex numbers. He says that each path has an arrow attached, and to compute how likely that path was, you stack those arrows together. So that's a perfect example of how to boil it down to the bare minimum, but not lower than that," says Scott. "I've given talks about this stuff at high schools. The smarter high school students can understand a lot of it without much difficulty, partly because there is less they have to unlearn. To get into this field you don't have to take years and years of physics; you don't have to know how to do integrals or solve differential equations. But you do have to understand vectors and matrices and complex numbers. That's the bare minimum. But that bare minimum is accessible to a much larger population than the people traditionally thought of as able to understand quantum mechanics."

(To be fair, not all quantum physicists agree that quantum mechanics can be taught as "probability theory with minus signs" without first introducing the wave equation and such. Here is a discussion on Quora about what is lost when you approach it that way. -- E.)

Understanding interference will also inoculate you against the biggest fallacy thrown around in the popular press when explaining quantum computing. A quantum computer does not "try every answer in parallel" to arrive at the right answer. If there is one thing Scott would like you to take away from this or any of his quantum computing presentations, it is this: the quantum computer does not try every answer in parallel. It is not a massively parallel classical computer.

It is interference that lets you get the right answer out of a quantum computer. You rely on the fact that amplitudes, unlike probabilities, can cancel each other out. For every wrong answer, you hope the paths leading to it have positive and negative amplitudes in equal measure, so that they cancel. For the right answer, you want all the paths leading to it to have either all positive or all negative amplitudes. We need to do something to boost the probability of the right answer, and quantum interference does that. That's the one new tool quantum mechanics puts in our toolbox.
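The smallest concrete example of this cancellation I know of is a Hadamard gate applied twice to a qubit starting in |0>. There are two computational "paths" to each final state; the paths to |1> carry opposite signs and cancel, while the paths to |0> reinforce. A sketch using numpy:

    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)   # the Hadamard gate
    state = np.array([1.0, 0.0])           # start in |0>

    state = H @ state   # equal amplitudes: [0.707, 0.707]
    state = H @ state   # interference: paths to |1> cancel, paths to |0> add

    print(np.round(state, 6))                # [1. 0.]
    print(np.round(np.abs(state) ** 2, 6))   # measurement probabilities: [1. 0.]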

This is also his bare-minimum standard for popular science articles explaining quantum computing, also known as the "minus sign test": the article needs to mention interference.

Applications of quantum computing

Though we are used to thinking of quantum computers as something that would let us solve hard problems fast, the first application its pioneers conceived wasn't that at all: their initial reason for building quantum computers was to simulate quantum systems. "In the 1980s, Feynman, Deutsch, and others noticed that a system of n qubits seems to take ~2^n steps to simulate on a classical computer, because of the phenomenon of entanglement between the qubits. They had the amazing idea of building a quantum computer to overcome that problem," Scott says. And now, decades after Deutsch proposed that idea, Scott Aaronson still thinks quantum simulation is a major -- perhaps the biggest and most promising -- application of quantum computing. It could have huge applications for energy.
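To see why the classical cost explodes, note that a general state of n qubits requires 2^n complex amplitudes. A few lines of Python show how quickly that gets out of hand -- at 300 qubits the amplitudes already outnumber the atoms in the observable universe:

    # A general n-qubit state needs 2**n complex amplitudes
    # (16 bytes each as complex128) to store classically.
    for n in (10, 30, 50, 300):
        amplitudes = 2 ** n
        print(f"{n:4d} qubits -> {amplitudes:.3e} amplitudes, "
              f"~{amplitudes * 16:.3e} bytes")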

For Scott personally, the number one application is to disprove people who come to his blog and argue that quantum computing is impossible.

What about cryptography?

When people say "quantum cryptography", they typically mean one of two things: (1) Quantum Key Distribution (QKD), a protocol that lets two parties exchange cryptographic keys while guaranteeing the keys can't be intercepted by an attacker in transit, or (2) quantum-safe cryptography, that is, classical encryption algorithms that quantum computers can't break any faster than classical computers can. I'm not sure which one Scott meant when he said that quantum cryptography already exists today but has almost no market, because the problem it solves is already solved by private-key encryption.

As for public-key encryption such as RSA, it relies on the computational difficulty of factoring large numbers, and that is indeed vulnerable to Shor's factoring algorithm. But there are many symmetric encryption algorithms (ones that use the same key for encryption and decryption) that don't rely on factoring and thus can't be cracked by quantum computers.
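Here is a toy illustration of why fast factoring breaks RSA-style public-key encryption (textbook-sized numbers, absolutely not for real use): anyone who can factor the public modulus can reconstruct the private key.

    # Toy RSA with tiny numbers, to show why fast factoring (Shor) breaks it.
    p, q = 61, 53                 # the secret primes
    n = p * q                     # public modulus: 3233
    e = 17                        # public exponent
    message = 65

    ciphertext = pow(message, e, n)

    # An attacker who can factor n recovers the private key:
    phi = (p - 1) * (q - 1)       # computable only if you know p and q
    d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)
    print(pow(ciphertext, d, n))  # 65 -- message recovered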

(Here is a more detailed overview of quantum-safe encryption algorithms and QKD.)

But when it comes to using quantum computing to solve any other kind of problem faster, as anyone who reads Scott's blog already knows, he is very doubtful that there will be real, practical applications -- at least not soon.

Can quantum computers actually achieve exponential speedup? What about any other kind of significant speedup?

"If we were physicists, we would have declared P != NP to be a law of nature," Scott says. "But we use different terminology. Where physics have laws, we have conjectures."

The popular press sometimes states that quantum computers can solve NP-complete problems in polynomial time, but anyone who reads Scott's blog knows that's not true. The integer factorization problem that Shor's algorithm solves is not known to be NP-complete, and is suspected not to be. But could quantum computers solve NP-complete problems at some point?

Scott Aaronson's slide of the BQP problem class

There is a complexity class called BQP, or bounded-error quantum polynomial time. These are the problems for which there exists a bounded-error polynomial-time algorithm for a quantum computer. "Bounded error" means the algorithm may give a wrong answer, but only with some fixed probability (conventionally 1/3) that can be driven as low as you like for practical purposes. This class is believed to be bigger than P, the class of problems solvable classically in polynomial time, but it is not believed to include the NP-complete problems -- nobody at present thinks that it does, and there is a lot of evidence that it doesn't. So, most likely, polynomial-time algorithms for NP-complete problems don't exist even on quantum computers.
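Why is bounded error harmless in practice? Repeat the algorithm k times and take a majority vote, and the failure probability falls off exponentially in k. A quick simulation (the 1/3 error rate is just the conventional constant in the definition):

    import random

    def noisy_run(err=1/3):
        # one run of an algorithm whose answer is correct with probability 2/3
        return random.random() > err

    def majority_error_rate(k, trials=20_000):
        wrong = 0
        for _ in range(trials):
            correct_votes = sum(noisy_run() for _ in range(k))
            wrong += correct_votes <= k // 2   # the majority was wrong
        return wrong / trials

    for k in (1, 5, 15, 45):   # odd k, so there is always a strict majority
        print(f"{k:3d} repetitions -> error rate ~ {majority_error_rate(k):.5f}")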

But it's not yet known where the boundaries of BQP lie. It includes some problems in the intermediate zone of NP -- problems not known to be in P, but not known to be NP-complete either -- though these tend to be very special problems. We don't even know whether BQP is contained in NP. So there may be problems that a quantum computer can efficiently solve but whose answers a classical computer can't even verify; for that you would need another quantum computer.

What about D-Wave's claims of exponential speedup?

D-Wave is a company that builds adiabatic quantum computers and that has claimed to achieve quantum supremacy, that is, to have built a quantum computer that achieves a significant speedup over a classical computer for some type of problem. Scott has long been skeptical of their claims. Even if you were able to build a perfect adiabatic quantum computer, we don't know how well it would do. The quantum adiabatic algorithm was described in a 2000 paper by Farhi and coauthors, and the general idea is shown in this slide by Scott. Farhi suggested that maybe it solves NP-complete problems in polynomial time, which would put NP inside BQP. The iffy part is that the amount of time you need to run the adiabatic algorithm depends on something called the eigenvalue gap: as you vary the Hamiltonian, what is the smallest gap between its first and second eigenvalues? If that gap becomes exponentially small, you need to run the algorithm for an exponential amount of time. "And some people were able to construct problems for which this gap becomes exponentially small," says Scott.
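Here is a sketch of that gap issue with numpy (toy Hamiltonians of my choosing, not Farhi's construction): interpolate from an easy Hamiltonian H0 to a "problem" Hamiltonian H1 and track the gap between the two lowest eigenvalues along the way; by the adiabatic theorem, the required run time grows roughly like 1/gap^2.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16
    H0 = np.diag(np.arange(dim, dtype=float))   # easy starting Hamiltonian (toy)
    A = rng.standard_normal((dim, dim))
    H1 = (A + A.T) / 2                          # random symmetric "problem" Hamiltonian (toy)

    gaps = []
    for s in np.linspace(0, 1, 201):
        w = np.linalg.eigvalsh((1 - s) * H0 + s * H1)   # eigenvalues, ascending
        gaps.append(w[1] - w[0])                        # gap between the lowest two

    print(f"minimum spectral gap along the path: {min(gaps):.4f}")
    # adiabatic run time scales roughly like 1 / min_gap**2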

"After you've seen problems with NP-completeness, you are tempted to elevated [computational] hardness as a fundamental physics problem. You would ask, what implication it would have for physics? And now we know some examples: for example, in condensed matter systems spectral gaps would have to become exponentially small," says Scott Aaraonson.

For the last several years D-Wave has been trying to find problems for which the adiabatic algorithm would be exponentially faster than any classical algorithm. Scott is quite skeptical that they have found any. At the previous quantum computing meetup, though, Brian La Cour from UT Austin said he thought it wasn't so clear-cut, because recent D-Wave advancements make it less apparent that there is a classical algorithm that would do just as well. But it's not that Scott is skeptical of adiabatic algorithms altogether. "We may not understand the potential of the adiabatic algorithm until we have a real quantum computer," he says.

Scott Aaronson's areas of research

Scott's own research area is quantum supremacy: finding problems for which quantum algorithms provide a definite speedup over any classical algorithm. (And by the way, he got a lot of comedic mileage out of that phrase during the Trump campaign year.)

A year ago Scott wrote this paper connecting quantum computation to the black-hole firewall problem. He mentioned it briefly and said there wasn't enough time to discuss it in this presentation. That left me and some other people in the audience hanging: just when we got to something we hadn't heard before, the lecture was over! Some of us asked Scott about the paper after the talk, and he pointed us to more information about it on his blog.

Questions from the audience

Unfortunately, the questions were nearly impossible to hear, because the audience members didn't have a microphone. So I could only guess the general topic they asked about.

Q1. About quantum entanglement.

Scott Aaronson. I didn't say much about entanglement in this talk. In quantum mechanics, entanglement is not a separate rule you have to postulate; it just comes along for the ride. Entanglement means that you can't write the state of two or more qubits as a product of single-qubit states.

If qubits are not entangled, you don't have a quantum computer -- you can simulate it easily with a classical computer. Entanglement is a necessary but not sufficient condition for quantum speedup.
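"Can't be written as a product state" is something you can check numerically. A short numpy sketch (my illustration, not Scott's): reshape a two-qubit state's four amplitudes into a 2x2 matrix; the state is entangled exactly when that matrix has more than one nonzero singular value (Schmidt rank > 1):

    import numpy as np

    def schmidt_rank(state):
        # state: 4 amplitudes for |00>, |01>, |10>, |11>
        singular_values = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
        return int(np.sum(singular_values > 1e-12))

    product = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])  # |0> tensor |+>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)                   # (|00> + |11>)/sqrt(2)

    print(schmidt_rank(product))  # 1 -> a product state, not entangled
    print(schmidt_rank(bell))     # 2 -> entangled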

Q2. About decoherence, the problem that makes quantum computing so hard to implement in practical terms.

Scott Aaronson. A crucial discovery of the nineties was that you don't have to get your decoherence rate down to zero; you just need to get it down to a very, very low rate. Then, even if a small amount of quantum information leaks out into the environment, the quantum information you care about is still there.
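A classical analogy gives the flavor of that discovery (the actual result is the quantum fault-tolerance threshold theorem, which this only gestures at): encode a bit as three copies and decode by majority vote. If each copy flips with probability p, the decoded bit is wrong with probability 3p^2 - 2p^3, which is below p whenever p < 1/2 -- so a nonzero but small error rate is survivable:

    import random

    def logical_error_rate(p, trials=200_000):
        wrong = 0
        for _ in range(trials):
            flips = sum(random.random() < p for _ in range(3))
            wrong += flips >= 2          # majority of the copies corrupted
        return wrong / trials

    for p in (0.2, 0.1, 0.01):
        print(f"physical error {p}: logical error ~ {logical_error_rate(p):.5f} "
              f"(theory {3 * p**2 - 2 * p**3:.5f})")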

Q3. About machine learning and quantum computing.

Scott acknowledged that machine learning might seem like a natural fit for quantum computing. "In a way, machine learning is all about linear algebra in high-dimensional spaces, and so is quantum computing. But you always have to ask: if someone can simulate a quantum machine learning algorithm with a classical approach, would they really get a speedup from the quantum algorithm?"
