For a long time, the quantum computing industry avoided talking about ‘quantum advantage’ or ‘quantum supremacy,’ the point where quantum computers can solve problems that would simply take too long to solve on classical computers. To some degree, that’s because the industry wanted to avoid the hype that comes with those terms. But IBM today brought quantum advantage back into the conversation by detailing how it plans to use novel error mitigation techniques to chart a path toward running the increasingly large circuits it’ll take to reach this goal, at least for a certain set of algorithms.
It’s no secret that quantum computers hate nothing more than noise. Qubits are fickle things, after all, and the smallest change in temperature or vibration can make them decohere. There’s a reason the current era of quantum computing is associated with ‘noisy intermediate-scale quantum’ (NISQ) technology.
The engineers at IBM and every other quantum computing company are making slow but steady strides toward reducing that noise at the hardware and software levels, with IBM’s 65-qubit systems from 2020 now showing twice the coherence time compared to when they first launched, for example. The coherence time of IBM’s transmon superconducting qubits is now over 1ms. That may not sound like much, and some other companies have shown longer coherence times with different qubit technologies, but it represents steady, measurable progress.
But IBM is also taking another approach by betting on new error mitigation techniques, dubbed probabilistic error cancellation and zero noise extrapolation. At a very basic level, you can almost think of this as the quantum equivalent of the active noise cancellation in your headphones. The system regularly samples itself for noise and then essentially inverts those noisy circuits to create virtually error-free results.
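To make that concrete, here is a toy numerical sketch of zero noise extrapolation. It is not IBM’s implementation; the decay model, scale factors and noise figures are all invented for illustration. The idea is to run the same circuit at deliberately amplified noise levels and extrapolate the measured expectation value back to the zero-noise limit.

```python
# Toy zero noise extrapolation (ZNE) sketch, illustration only, not IBM's code.
# We model a noisy expectation value that decays as the noise is amplified
# by a scale factor lam, measure it at a few amplified levels, and fit the
# curve back to lam = 0. The decay model and numbers are invented.
import numpy as np

rng = np.random.default_rng(seed=7)

def noisy_expectation(lam: float, ideal: float = 1.0, decay: float = 0.15) -> float:
    """Stand-in for running a circuit with its noise amplified by lam."""
    return ideal * np.exp(-decay * lam) + rng.normal(0.0, 0.005)

# Run the "circuit" at noise scale factors 1x, 2x and 3x.
scale_factors = np.array([1.0, 2.0, 3.0])
measured = np.array([noisy_expectation(lam) for lam in scale_factors])

# Fit a quadratic and evaluate it at zero noise (lam = 0).
coeffs = np.polyfit(scale_factors, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(f"measured at 1x noise: {measured[0]:.4f}")
print(f"extrapolated to zero noise: {zne_estimate:.4f} (ideal: 1.0000)")
```

In real systems the noise is amplified physically rather than modeled, for example by ‘folding’ gates: replacing a gate G with the sequence G G† G leaves the circuit’s logic unchanged but increases its exposure to noise.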
IBM has now shown that this isn’t just a theoretical possibility but actually works on its existing systems. One disadvantage is that constantly sampling these noisy circuits incurs quite a bit of overhead, and that overhead grows exponentially with the number of qubits and the circuit depth. But that’s a tradeoff worth making, argues Jerry Chow, the Director of Hardware Development for IBM Quantum.
“Error mitigation is about finding ways to deal with the physical errors in certain ways, by learning about the errors and also just running quantum circuits in such a way that allows us to cancel them,” explained Chow. “In some ways, error correction is like the ultimate error mitigation, but the point is that there are techniques that are more near term with a lot of the hardware that we’re building that already provide this avenue. The one that we’re really excited about is called probabilistic error cancellation. And that one really is a way of trading off runtime — trading off running more circuits in order to learn about the noise that might be inherent to the system that is impacting your calculations.”
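To illustrate the ‘trading off runtime’ point in Chow’s description, here is a minimal classical toy of probabilistic error cancellation. It is not IBM’s code: it models a single bit whose gate flips with an invented probability p, writes the inverse of that bit-flip channel as a quasi-probability mix of operations the hardware could actually apply, and samples from that mix with signs. The signed average recovers the ideal expectation value, at the cost of extra samples.

```python
# Toy probabilistic error cancellation (PEC) on a single classical bit.
# Illustration only, not IBM's implementation; p and the channel are invented.
import random

random.seed(0)
p = 0.1                          # bit-flip probability of the noisy gate
q_i = (1 - p) / (1 - 2 * p)      # quasi-probability weight of the identity term
q_x = -p / (1 - 2 * p)           # negative weight of the bit-flip (X) term
gamma = abs(q_i) + abs(q_x)      # sampling overhead, here 1 / (1 - 2p)

def noisy_gate(bit: int) -> int:
    """Ideally the identity, but flips the bit with probability p."""
    return bit ^ 1 if random.random() < p else bit

def pec_sample() -> float:
    """One signed sample of <Z> with the noise quasi-probabilistically inverted."""
    bit = noisy_gate(0)          # start in |0>, where the ideal <Z> is +1
    # Sample a term of the inverse channel by |weight| and keep its sign.
    if random.random() < abs(q_i) / gamma:
        sign = 1.0               # identity term
    else:
        sign = -1.0              # X term carries a negative weight
        bit ^= 1
    z = 1.0 if bit == 0 else -1.0
    return sign * gamma * z

n = 200_000
noisy = sum(1.0 if noisy_gate(0) == 0 else -1.0 for _ in range(n)) / n
mitigated = sum(pec_sample() for _ in range(n)) / n
print(f"noisy <Z>: {noisy:.3f}  mitigated <Z>: {mitigated:.3f}  ideal: 1.000")
```

The mitigated average is unbiased, but its variance is inflated by a factor of roughly gamma squared, which is exactly the runtime tradeoff Chow describes: more circuit runs buy you a cleaner answer.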
The technique essentially inserts additional gates into existing circuits to learn the noise inherent in the hardware. And while the overhead increases exponentially with the size of the system, the IBM team believes it’s a weaker exponential than that of the best classical methods for estimating those same circuits.
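A back-of-the-envelope calculation shows why that exponent matters. If each noisy gate carries a per-gate cancellation overhead gamma, the number of extra shots needed for a fixed precision grows roughly as gamma raised to twice the gate count; the gamma value below is an invented stand-in for a well-characterized, low-noise gate.

```python
# Rough sketch of how PEC's sampling overhead scales with circuit size.
# gamma = 1.01 is an invented per-gate overhead; shots for a fixed
# precision scale roughly as gamma ** (2 * n_gates).
gamma = 1.01
for n_gates in (10, 100, 1_000, 2_000):
    overhead = gamma ** (2 * n_gates)
    print(f"{n_gates:>5} noisy gates -> ~{overhead:,.1f}x more shots")
```

Even a base that close to 1 eventually explodes, which is why the comparison that matters is whether this exponential stays weaker than the classical one.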
As IBM previously announced, it plans to introduce error mitigation and suppression techniques into its Qiskit Runtime by 2024 or 2025 so developers won’t even have to think about these techniques when writing their code.
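It’s already possible to opt into some of this in Qiskit Runtime today. The sketch below is hedged: the interface has changed across qiskit-ibm-runtime releases, and it assumes a version whose Estimator primitive accepts an Options object with a resilience_level field (higher levels switching on techniques such as ZNE and PEC), plus saved IBM Quantum credentials, so check the current documentation before relying on the details.

```python
# Hedged sketch: requesting built-in error mitigation from Qiskit Runtime.
# Assumes a qiskit-ibm-runtime version with the Options.resilience_level
# field and saved IBM Quantum account credentials; names and defaults may
# differ in current releases.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import Estimator, Options, QiskitRuntimeService, Session

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)                      # a simple Bell-state circuit
observable = SparsePauliOp("ZZ")      # measure <ZZ>

options = Options()
options.resilience_level = 2          # ask the service to apply mitigation

service = QiskitRuntimeService()
with Session(service=service, backend="ibmq_qasm_simulator") as session:
    estimator = Estimator(session=session, options=options)
    result = estimator.run(circuit, observable).result()
    print(result.values)
```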