IBM Casts Doubt on Google’s Claims of Quantum Supremacy
October 23, 2019
A study from Google claiming quantum supremacy, accidentally leaked online last month, has now been published in Nature. The Google group reiterates its claim that its 53-qubit computer performed, in 200 seconds, an arcane task that would take 10,000 years for Summit, a supercomputer IBM built for the Department of Energy that is currently the world’s fastest. But IBM appears to have already rebutted Google’s claim. On 21 October, it announced that, by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days. IBM says the threshold for quantum supremacy—doing something a classical computer can’t—has thus still not been met. The race continues. Read our 23 September story below:
The age of quantum computing may have begun not with a flashy press conference, but with an internet leak. According to a paper posted briefly—and presumably mistakenly—to a lab site, physicists at Google have used a quantum computer to perform a calculation that would overwhelm the world’s best conventional supercomputer. Although the specific computation has no known use, the result means scientists have passed a milestone known as “quantum supremacy.”
“It’s a great scientific achievement,” says physicist Chad Rigetti, founder and CEO of Rigetti Computing in Berkeley and Fremont, California, which is developing its own quantum computers. “Google called their shot,” he adds, noting that the company detailed exactly how it would demonstrate quantum supremacy a couple of years ago. Greg Kuperberg, a mathematician at the University of California, Davis, calls the advance “a big step toward kicking away any plausible argument that making a quantum computer is impossible.”
According to the Financial Times, which broke the story, the paper appeared last week on the website of NASA’s Ames Research Center in Moffett Field, California; some of the researchers there are paper authors. Readers downloaded the manuscript before it vanished, and it is circulating online. John Martinis, the physicist who leads Google’s quantum computing effort in Santa Barbara, California, declined to comment on the paper, but others in the field think it is legitimate.
A quantum computer aims to exploit the strange aspects of quantum mechanics to perform types of calculations that would swamp a classical computer. Whereas a classical computer depends on “bits” of information that can be set as either zero or one, a quantum computer employs qubits, which can be set to zero, one, or—thanks to quantum mechanics—any combination of zero and one at the same time. That enables a quantum computer to process a multitude of inputs simultaneously. For example, a 10-qubit quantum computer could process 2¹⁰, or 1024, possible inputs at once instead of analyzing them one at a time.
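That exponential growth is the crux of the supremacy argument: a classical simulator must track one complex amplitude per basis state, so memory doubles with every added qubit. A back-of-the-envelope sketch in Python (not how a real quantum computer stores state):

```python
# Each added qubit doubles the number of basis states a classical
# simulator must track simultaneously.
for n_qubits in (10, 20, 53):
    n_states = 2 ** n_qubits
    print(f"{n_qubits} qubits -> {n_states:,} basis states")

# 10 qubits already give 2**10 = 1024 inputs at once; at 53 qubits the
# count reaches roughly 9 quadrillion (9.0e15).
```

At 53 qubits, simply writing down all the amplitudes strains even a machine like Summit, which is why Google chose a device of roughly that size.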
But such a computer’s real power comes from other quantum phenomena. For certain computational problems, all potential solutions can be thought of as quantum waves simultaneously sloshing among the qubits. Set things up right and those waves interfere with one another so that incorrect answers cancel one another and the right answer pops out. Such interference should enable a full-fledged quantum computer to hack current internet encryption schemes by factoring the huge numbers that underlie them.
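The cancellation of wrong answers can be seen in the simplest possible case: a single qubit passed through two Hadamard gates. This toy NumPy illustration (not the Google experiment) shows the two paths to the “one” state destructively interfering, leaving the qubit back at “zero” with certainty:

```python
import numpy as np

# Hadamard gate: sends |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

zero = np.array([1.0, 0.0])   # the state |0>
superposed = H @ zero         # amplitudes (1/sqrt(2), 1/sqrt(2))
back = H @ superposed         # second gate: the |1> amplitudes cancel

print(np.round(superposed, 3))  # [0.707 0.707]
print(np.round(back, 3))        # [1. 0.] -- destructive interference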
That feat would require thousands of qubits, so Martinis and colleagues conceived a problem on which a quantum computer with just dozens of qubits could best any conventional rival. The 53 qubits in their device consist of tiny circuits of superconducting metal that can be in a low-energy state to denote zero, a high-energy state to denote one, or both at the same time—at least until measured when such two-way states collapse one way or the other. The researchers then made pairs of qubits interact in various ways through a fixed but random set of operations.
Taken as a group, the qubits output any number between zero and 2⁵³. Thanks to quantum interference caused by the operations, some numbers should show up more often than others. And as the number of qubits grows, calculating that uneven distribution of outputs would become overwhelmingly difficult for an ordinary computer. So, if experimenters see the telltale unequal pattern of outputted numbers, they have evidence their quantum device calculated something a conventional computer cannot.
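For a handful of qubits, that uneven distribution can be mimicked classically. The sketch below applies a Haar-random unitary (drawn via QR decomposition, standing in for Google’s random sequence of gate operations) to eight qubits starting in the all-zeros state; the resulting output probabilities are visibly lopsided rather than uniform. This is an illustrative stand-in, not a simulation of Google’s actual circuits:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 8
dim = 2 ** n_qubits

# Haar-random unitary via QR decomposition of a complex Gaussian matrix,
# a stand-in for a fixed-but-random sequence of qubit operations.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for Haar measure

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                  # start in |00...0>
amplitudes = u @ state
probs = np.abs(amplitudes) ** 2  # probability of each output number

print(f"uniform prob: {1 / dim:.5f}")
print(f"max prob:     {probs.max():.5f}")  # some outputs far likelier
```

Computing this distribution requires the full 2ⁿ-entry state vector, which is exactly what becomes intractable classically as n approaches 53.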
As with any quantum computing effort, the key was to preserve the qubits’ delicate quantum states throughout the process. If they fuzz out, then all outputs become equally likely. But the Google team reports that it managed to see the telltale pattern in the generated numbers. To prove the pattern wasn’t just noise, researchers compared the results for smaller trials and subgroups of the qubits with supercomputer simulations. They couldn’t do that for the biggest instances of the problem, however. What the quantum computer could do in a little over 3 minutes would take a supercomputer 10,000 years to reproduce, they estimate.
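The comparison works by scoring the device’s output bitstrings against the ideal probabilities computed in simulation. A hedged sketch of the idea, using a made-up ideal distribution and a linear cross-entropy score (the variable names and the toy distribution here are illustrative assumptions, not Google’s data): samples drawn faithfully from the ideal distribution score near 1, while qubits that have fuzzed out into uniform noise score near 0.

```python
import numpy as np

rng = np.random.default_rng(1)
n_qubits = 10
dim = 2 ** n_qubits

# Toy "ideal" distribution: exponentially distributed weights, the
# lopsided shape expected from a random circuit, normalized to sum to 1.
ideal = rng.exponential(size=dim)
ideal /= ideal.sum()

def linear_xeb(samples, ideal_probs):
    """dim * (mean ideal probability of the observed samples) - 1.
    ~1 if samples follow the ideal distribution, ~0 for uniform noise."""
    return len(ideal_probs) * ideal_probs[samples].mean() - 1

good = rng.choice(dim, size=200_000, p=ideal)  # faithful "device"
noise = rng.integers(dim, size=200_000)        # decohered "device"

print(f"score, faithful samples: {linear_xeb(good, ideal):.2f}")
print(f"score, uniform noise:    {linear_xeb(noise, ideal):.2f}")
```

The catch the article notes is that computing `ideal` for the full 53-qubit problem is itself the intractable classical task, so the largest runs were checked only indirectly via smaller trials and qubit subgroups.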
Some researchers say the demonstration isn’t so much computation as an effort to cook up a quantum state that’s hard to simulate. “Quantum computers are not ‘supreme’ against classical computers because of a laboratory experiment designed to essentially … implement one very specific quantum sampling procedure with no practical applications,” says Dario Gil, director of IBM Research in Yorktown Heights, New York, which is also developing machines with superconducting qubits.
The Google computer also lacks the ability to correct errors, which may be key to making a full-fledged quantum computer. That requires encoding a single, more stable “logical” qubit in several less reliable “physical” ones, to enable the machine to maintain quantum states much longer, Kuperberg explains. Rigetti, however, notes that Google’s achievement may put the company in an ideal position to demonstrate such error correction, too.
Gil voices another worry long held by many in the field: that after all the hype surrounding quantum supremacy, quantum computing may experience a letdown like the one that plagued the field of artificial intelligence from the 1970s until the current decade, when technology finally caught up with aspirations. In the leaked paper, however, the 76 authors optimistically conclude: “We are only one creative algorithm away from valuable near-term applications.”