Google announced today that it has demonstrated “quantum supremacy,” i.e. that it has solved a problem on a quantum computer that could not be solved on a classical computer. Google says:
Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.
IBM disputes this claim. They don’t dispute that Google has computed something with a quantum computer that would take a lot of conventional computing power, only that it “would take the world’s fastest supercomputer 10,000 years” to solve. IBM says it would take 2.5 days.
Scott Aaronson gets into the technical details of the disagreement between Google and IBM. He explains that the supercomputer in question is Oak Ridge National Laboratory’s Summit machine. It covers the area of two basketball courts and has 250 petabytes of disk storage. By exploiting its enormous disk capacity, Summit could simulate Google’s quantum calculation on classical hardware in “only” two and a half days. In a nutshell, it seems Google didn’t consider that you could trade a gargantuan amount of storage for processing power. But as Aaronson points out, if Google’s machine added just a couple more qubits, even Summit couldn’t keep up on this particular problem.
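A back-of-envelope calculation shows why a couple more qubits would matter. Storing the full state vector of an n-qubit system takes 2^n complex amplitudes; assuming 8 bytes per amplitude (my assumption for a naive bound, not a description of IBM’s actual simulation scheme), the storage needed doubles with each qubit and blows past Summit’s 250 petabytes in the mid-50s.

    # Naive cost of storing a full n-qubit state vector classically.
    # Each of the 2**n complex amplitudes is assumed to take 8 bytes
    # (single-precision complex); this is a back-of-envelope bound,
    # not IBM's actual simulation scheme.

    SUMMIT_STORAGE_PB = 250  # Summit's disk capacity in petabytes

    for n in range(50, 57):
        petabytes = 2**n * 8 / 1e15
        verdict = "fits" if petabytes <= SUMMIT_STORAGE_PB else "does not fit"
        print(f"{n} qubits: {petabytes:7.1f} PB ({verdict})")

At 53 qubits the naive state vector takes roughly 72 PB, at 55 qubits roughly 288 PB, so under these assumptions two more qubits would already exceed Summit’s disk.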
So does this mean that all the world’s encryption systems are going to fail by the end of the week?
Google selected a problem that is ideal for a quantum computer. And why wouldn’t they? This is the natural thing to do. But they’re not on the verge of rendering public key encryption useless.
Google’s Sycamore processor used 54 qubits. According to this paper, it would take 20,000,000 qubits to factor 2048-bit semiprimes such as those used in RSA encryption. So while Google has achieved a major milestone in quantum computing, public key encryption isn’t dead yet.
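For a rough idea of what all those qubits would be doing, here is a toy sketch of the classical number theory behind Shor’s algorithm. Factoring N reduces to finding the period of a^x mod N; the quantum computer’s only job is to find that period efficiently at 2048-bit scale, something the brute-force loop below can only do for tiny numbers.

    from math import gcd

    # The number-theoretic core of Shor's algorithm, on a toy modulus.
    # Factoring N reduces to finding the period r of f(x) = a**x mod N;
    # a quantum computer finds r efficiently at scale. Here we find it
    # by brute force, which only works for tiny N.

    def find_period(a, N):
        x, value = 1, a % N
        while value != 1:
            x += 1
            value = (value * a) % N
        return x

    N = 15          # toy semiprime (3 * 5)
    a = 7           # chosen coprime to N
    r = find_period(a, N)            # r = 4
    assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
    p = gcd(pow(a, r // 2) - 1, N)   # p = 3
    print(p, N // p)                 # prints "3 5"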
If and when large-scale quantum computing does become practical, encryption algorithms that depend on the difficulty of factoring integers will be broken. So will algorithms that depend on discrete logarithms, whether over integers or over elliptic curves.
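As a concrete (and deliberately tiny) example of the discrete logarithm case, here is toy Diffie-Hellman key agreement with a brute-force attack on the secret exponent. The parameters are illustrative only; real deployments use 2048-bit primes or elliptic curve groups, where recovering the exponent classically is infeasible but Shor’s algorithm would make it routine.

    # Toy Diffie-Hellman over a tiny prime field, plus a brute-force
    # discrete log attack. Real deployments use 2048-bit primes or
    # elliptic curves; these parameters are only for illustration.

    p, g = 23, 5                  # public prime modulus and generator

    alice_secret = 6
    bob_secret = 15
    A = pow(g, alice_secret, p)   # Alice sends A = 8
    B = pow(g, bob_secret, p)     # Bob sends B = 19
    shared = pow(B, alice_secret, p)
    assert shared == pow(A, bob_secret, p)

    # Security rests on the discrete log problem: recovering alice_secret
    # from A. Brute force is trivial here, and Shor's algorithm makes it
    # feasible at cryptographic sizes.
    recovered = next(x for x in range(p) if pow(g, x, p) == A)
    assert pow(B, recovered, p) == shared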
Post-quantum encryption methods, methods that will remain secure even in a world of quantum computing (as far as we know), have been developed but not widely deployed. There’s a push to develop post-quantum methods now so that they’ll be ready by the time they’re needed. Once a new method has been proposed, it takes a long time for researchers to have confidence in it. It also takes a long time to develop efficient implementations that don’t introduce vulnerabilities.
The NSA recommends using existing methods with longer keys for now, then moving to quantum-resistant methods, i.e. not putting any more effort into developing new algorithms that are not quantum-resistant.
Here are some posts I’ve written about post-quantum encryption methods.
It seems that IBM have kept the provisioning of CPU cores, local volatile memory, further non-volatile storage, and the bandwidth of the pipes connecting them “Amdahl balanced”:
http://www-inst.eecs.berkeley.edu/~n252/paper/Amdahl.pdf
John, I’ve been playing with Microsoft’s Quantum Development Kit. It started me thinking about what context switching would look like with even a relatively small 100-qubit core. It seems we might not be able to switch all 100 qubits to a new context at the same time. I’m such a rank beginner with this, but I’m wondering how well a quantum computer could time-slice between various workloads.