When will quantum computers finally deliver commercial value?

Let’s face it: quantum computing is all about reducing the time it takes to get a job done. That’s why we also care about the time you spend reading this post, so here is a TL;DR before you jump in:

Predicting the timeline for when quantum computers will deliver commercial value requires both an understanding of hardware progress and a sense of which algorithms we will need the device to run. Because many valuable quantum algorithms rely on estimation, we expect a “quantum advantage machine” will need to run estimation fast. Considering the dichotomy between noisy intermediate-scale quantum (NISQ) computation and fault-tolerant quantum computation (FTQC), we predict that neither FTQC-type nor NISQ-friendly algorithms will be the first to achieve quantum advantage that delivers commercial value. Our latest research reveals a spectrum between these two extremes in which we can maximize the speedup in estimation by taking the quality of the device into account.


 

Part I: The false dichotomy

Predicting when quantum computing will deliver commercial value is possibly the largest “make or break” question in quantum computing today. The answer partly depends on how hardware continues to progress, which means trying to answer questions like: “when will we have hundreds of qubits?” or “at what point will quantum logic gates fail only once per billion uses?”

However, tracking hardware progress is not enough to predict when we will reach “quantum advantage” – we must also know what we will require the hardware to do. That is: “what programs or algorithms will a ‘quantum advantage machine’ run to solve problems that provide the commercial value we’re looking for?” And perhaps more pointedly, “which quantum algorithms will yield value first?”

In the past half-decade or so, a dichotomy has emerged between two types of quantum algorithms:

  • Fault-Tolerant Quantum Computation-type: Quantum algorithms from this branch of the dichotomy are fast but require fault tolerance.

    Think Shor’s factoring algorithm or Kitaev’s phase estimation algorithm. When we try to run these on today’s error-prone devices, the output is nonsensical, like static noise from a radio. Many quantum engineers are working towards building quantum computers that are “fault-tolerant.” Fault-tolerant quantum computation (FTQC) will cut out the static and enable quantum computers to run these fast quantum algorithms.

  • Noisy Intermediate-Scale Quantum-friendly: Quantum algorithms from this branch are slower (or lack performance guarantees) but are robust to errors.

    Examples include the variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA). The upshot is that noisy intermediate-scale quantum (NISQ) devices can run them and yield clear, sensible results. The downside of these “NISQ-friendly” algorithms is that we don’t know whether they will solve problems better or faster than supercomputers. The current issue, referred to in the literature as the “measurement problem,” is that many NISQ-friendly algorithms rely on estimating a quantum state’s properties. These estimates require a staggering number of statistical samples, and hence time. For commercially valuable problems, the number of samples needed for sufficiently accurate estimates may be prohibitively large, which could prevent near-term algorithms from outpacing classical computation methods.

Let’s return then to the question of which quantum algorithms will yield value first. It seems that we face two possible routes.

  1. We wait until the hardware is capable of running fast quantum algorithms in a nearly flawless fashion.
  2. We hope that near-term algorithms will eventually outpace state-of-the-art classical computation methods.

However, we could also be stuck in a world where fault-tolerant quantum computation is many years away, and near-term algorithms won’t outperform classical supercomputers. Then what?

At Zapata, the possibility of such a scenario has led us to investigate: Is the NISQ-FTQC algorithm dichotomy a false one? If so, how might we combine the best of both branches? Is it possible to speed up near-term algorithms, while maintaining their robustness to error? Our latest research suggests that, indeed, there is room for improvement.

Part II: Room for improvement – turning up the amplifier

How can we speed up NISQ-friendly algorithms? To answer this question, we must understand what makes NISQ-friendly algorithms slower than FTQC-type algorithms.

Many quantum algorithms rely on estimating properties of a quantum state. Algorithms involving estimation find broad application in chemistry, materials, finance, logistics, and many other areas. Higher-accuracy estimates require longer runtimes. If we had fault-tolerant quantum computation and could run fast estimation algorithms such as quantum phase estimation and quantum amplitude estimation, we could achieve accurate estimates quickly. Unfortunately, these algorithms cannot be run on NISQ devices. One of the innovations of the first NISQ-friendly algorithm, the variational quantum eigensolver (VQE), was a way to carry out estimation on a NISQ device. We call this estimation method “standard sampling.” The standard approach to VQE can be carried out on NISQ devices precisely because it uses standard sampling instead of quantum amplitude estimation. The downside of standard sampling is that achieving a target estimation accuracy can take a long time, because the statistical error shrinks only as the square root of the number of samples.
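
To make that runtime gap concrete, here is a quick back-of-the-envelope sketch in Python. The scaling laws (roughly 1/ε² samples for standard sampling versus roughly 1/ε oracle calls for quantum amplitude estimation) are the standard textbook asymptotics; the target accuracies are arbitrary illustrative values, not figures from our research.

```python
# Back-of-the-envelope comparison of how many repetitions each estimation
# method needs to reach a target accuracy epsilon. Illustrative only: real
# runtimes also depend on circuit depth and hardware speed.

def standard_sampling_shots(epsilon: float) -> float:
    """Standard sampling: statistical error shrinks as 1/sqrt(N), so the
    number of samples N scales like 1/epsilon**2."""
    return 1.0 / epsilon**2

def amplitude_estimation_queries(epsilon: float) -> float:
    """Quantum amplitude estimation: error shrinks as 1/N in the number of
    oracle calls (Heisenberg-like scaling), so N scales like 1/epsilon."""
    return 1.0 / epsilon

if __name__ == "__main__":
    for epsilon in (1e-2, 1e-3, 1e-4):
        print(f"accuracy {epsilon:.0e}: "
              f"standard sampling ~{standard_sampling_shots(epsilon):.0e} samples, "
              f"amplitude estimation ~{amplitude_estimation_queries(epsilon):.0e} oracle calls")
```

At a target accuracy of 1e-4, that is the difference between on the order of a hundred million repetitions and on the order of ten thousand oracle calls, which is why the choice of estimation method can dominate the overall runtime.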

What makes quantum amplitude estimation faster than estimation with standard sampling? It uses a general quantum algorithmic technique called quantum amplification, the same ingredient that underpins Grover’s search algorithm. Quantum amplitude estimation uses quantum amplification to make measurement data more informative about the parameter of interest. The standard sampling method used in many NISQ-friendly algorithms does not use quantum amplification, which is why these NISQ-friendly algorithms are slower than their FTQC-type alternatives.
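
One way to see what “more informative” means: in idealized, noiseless amplitude estimation, applying the amplification step m times makes the measurement outcome probability oscillate (2m + 1) times faster in the unknown parameter, so each shot carries about (2m + 1)² times more Fisher information. The toy sketch below checks this numerically using the textbook likelihood sin²((2m + 1)θ); the parameter value is arbitrary and noise is ignored.

```python
import numpy as np

def fisher_information_per_shot(theta: float, m: int) -> float:
    """Fisher information about theta carried by one measurement when the
    amplification step is applied m times. Assumes the idealized, noiseless
    likelihood p(1 | theta, m) = sin^2((2m + 1) * theta)."""
    phi = (2 * m + 1) * theta
    p = np.sin(phi) ** 2
    dp_dtheta = (2 * m + 1) * np.sin(2 * phi)   # derivative of p with respect to theta
    return dp_dtheta ** 2 / (p * (1.0 - p))     # binomial Fisher information

if __name__ == "__main__":
    theta = 0.2  # arbitrary illustrative value of the unknown parameter
    for m in (0, 1, 2, 4, 8):
        print(f"m = {m}: information per shot = {fisher_information_per_shot(theta, m):.1f} "
              f"(analytic value 4 * (2m + 1)^2 = {4 * (2 * m + 1) ** 2})")
```

This quadratic growth of information with the amplification degree is exactly the mechanism that lets amplitude estimation trade deeper circuits for far fewer samples.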

Could we use some degree of quantum amplification to speed up NISQ-friendly algorithms? We explain the answer through an analogy.

Imagine that you are throwing an outdoor party. Lots of food, good conversation, and great music! You want to invite as many people as you can, as long as everyone can hear the music. Fortunately, you recently acquired an excellent amplifier to provide the decibels you need.

 

 

Unfortunately, the only speakers you have available are a small pair of headphones.

 

If you tried to run the amplified output into these tiny headphones, you would get static noise (and some busted headphones). The best you could do is turn the amplifier way down and feed the output into the headphones. A pair of headphones gets you a party for one.

To throw the party of your dreams, you would need a massive set of speakers capable of handling the amplifier’s maximum amplification.

 

Unfortunately, such speakers are costly. Saving up enough money to buy them would take a few years. So, it seems that we must choose between two options: throw a party for one now, or throw a huge party at some point in the future.

 

This dichotomy is much like the one we seem to face with quantum computing. Instead of headphones vs. concert speakers, we have today’s quantum computers vs. future fault-tolerant quantum computers. Instead of a powerful amplifier, we have the algorithmic trick called quantum amplification used in quantum amplitude estimation. And the decibels we output correspond to the rate of information gain in the estimation process.

However, in the party example, a practical option quickly presents itself: buy the best speakers you can afford now and turn up the amplifier as much as the speakers can handle. These moderately sized speakers are good enough for a great party!

 

Turning up the amplifier to play music through your new speakers seems simple. Designing quantum algorithms for real quantum computers is a bit more involved. But the approach we took to speed up estimation in NISQ-friendly algorithms follows a similar logic: we developed a way to turn up the degree of quantum amplification to maximize the rate of information gain, given the capabilities of the available hardware.

As quantum devices improve, we can enhance their ability to estimate accordingly. Much like opting for the practical route of moderately sized speakers, it is likely that the first quantum advantage will come from the practical route of moderate-degree quantum amplification. Quantum advantage will likely be achieved in the exciting territory between NISQ-friendly and FTQC-type algorithms. In the meantime, we will be helping to put the algorithmic pieces in place.

Given a noisy quantum device, our research team has proposed a novel approach to optimizing the rate of information gain in estimation algorithms (a toy numerical sketch of the underlying trade-off follows the list below). This is done by:

  • Incorporating the effect of errors in the algorithm design
  • Maximizing the rate of information gain with the available hardware capabilities
  • Designing quantum circuits with fewer qubits and shorter depth
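
To give a flavor of the trade-off behind these bullet points, here is a toy sketch (not the actual method from our paper) that picks the amplification degree maximizing Fisher information per unit of circuit time, assuming the measurement likelihood is damped by an exponential fidelity decay as circuits get deeper. The decay rates, the likelihood model, and the time-cost model are all illustrative assumptions.

```python
import numpy as np

def info_rate(theta: float, m: int, decay: float) -> float:
    """Toy model of Fisher information gained per unit of circuit time when
    the amplification step is applied m times on a noisy device. The
    likelihood oscillation is damped by an exponential fidelity factor
    exp(-decay * L), where L = 2m + 1 is a proxy for circuit depth/time."""
    L = 2 * m + 1
    fidelity = np.exp(-decay * L)
    p = 0.5 - 0.5 * fidelity * np.cos(2 * L * theta)   # damped likelihood of outcome 1
    dp = fidelity * L * np.sin(2 * L * theta)          # d p / d theta
    fisher = dp ** 2 / (p * (1.0 - p))                 # information per shot
    return fisher / L                                   # per unit circuit time

if __name__ == "__main__":
    theta = 0.2    # illustrative value of the parameter being estimated
    for decay in (0.0, 0.02, 0.1):
        best_m = max(range(50), key=lambda m: info_rate(theta, m, decay))
        print(f"decay rate {decay}: best amplification degree m = {best_m}")
```

In this toy model, a noiseless device favors as much amplification as you can afford, while a noisy device has a finite sweet spot – exactly the “turn the amplifier up only as far as the speakers can handle” logic from the analogy above.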

With this breakthrough approach, we are entering a world where it is possible to run faster quantum algorithms on NISQ devices. Areas of impact include chemistry, materials, finance, logistics, and many others. We hope these insights will steer hardware development towards realizing commercial value much sooner than expected!


 

Wanna learn more?

  • Read the research paper here
  • Watch a recording of the live webinar here, in which our research team presents their findings and answers questions from the audience (you can read the answers to a few select questions asked at the webinar here).
  • View a PDF-version of the presentation from the webinar here

 

 

Author

Peter Johnson, Ph.D.

Lead Research Scientist & Co-Founder