Where We Are — and Are Not — With Quantum Computing

Last week, quantum computers crossed a threshold that all game-changing technologies inevitably pass on their way to widespread adoption—gracing the cover of TIME magazine.  

The headline, “Quantum Computers Could Solve Countless Problems—And Create a Lot of New Ones,” is correct. Quantum computing could solve countless problems across industries (see potential use cases for each industry on our solutions page), while threatening the very foundation of cybersecurity.  

But is “the future of computing” really here, as heralded by the magazine cover? Are we on the cusp of a new era of drug discovery and driverless cars? Can this machine really “solve problems in seconds that used to take years?” 

Well…eventually. Let me tell you one of the quantum community’s biggest secrets: for the next few years, the value from quantum computing won’t actually come from quantum computers. It could, however, come from quantum software running on today’s most powerful classical computers.

I don’t say that to diminish optimism for quantum computing (and there is optimism: our recent survey found that 74% of enterprises have already adopted or plan to adopt quantum computing). Nor do I say it to diminish the rapid, accelerating progress we’ve seen in quantum hardware in recent years.  

I say it to set the record straight that there’s a lot you can do right now with quantum techniques running on classical hardware.  

Specifically, the quantum techniques that we believe will deliver the most near-term value are tensor networks. Tensor networks are efficient linear algebraic structures for representing complex correlations between variables. They first rose to prominence among quantum physicists who wanted to simulate quantum many-body states on classical computers.

Tensor networks have a substantial role to play in quantum computing: every quantum circuit can in principle be represented as a tensor network, which can in turn be run on classical computers. For this reason, we often refer to tensor networks as “quantum-inspired models.”  
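
To make this concrete, here is a minimal sketch of the idea (my illustration, not code from the original post): each gate in a circuit is a small tensor, and simulating the circuit amounts to contracting those tensors, here with NumPy’s einsum. The two-qubit example below prepares a Bell state.

```python
import numpy as np

# Single-qubit Hadamard as a 2x2 tensor; CNOT as a 4-index (2,2,2,2) tensor
# with index order (out_control, out_target, in_control, in_target).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.zeros((2, 2, 2, 2))
CNOT[0, 0, 0, 0] = CNOT[0, 1, 0, 1] = 1  # control = 0: target unchanged
CNOT[1, 1, 1, 0] = CNOT[1, 0, 1, 1] = 1  # control = 1: target flipped

# |00> as a rank-2 tensor with one index per qubit.
psi = np.zeros((2, 2))
psi[0, 0] = 1.0

# The circuit "H on qubit 0, then CNOT" as one tensor-network contraction:
# a,b = input qubits; i = qubit 0 after H; x,y = output qubits.
out = np.einsum("ab,ia,xyib->xy", psi, H, CNOT)

print(out.reshape(4))  # Bell state: [0.707, 0, 0, 0.707]
```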

Importantly, tensor networks are forward-compatible with real quantum circuits running on quantum hardware. In other words, you can build valuable applications around tensor networks today, then swap the tensor networks for real quantum circuits in the future era of fault-tolerant quantum computers. We expect this swap to provide a substantial performance boost.

The impact of tensor networks may be felt most acutely in another innovative field riding a wave of momentum right now: generative AI.  


Quantum Generative AI

Generative AI has taken the tech world by storm—for good reason. The results generated by ChatGPT, DALL-E and similar tools speak for themselves. AI is no longer limited to simply classifying data; it can now generate new data.

As it turns out, generative AI (or, to be more scientific, generative modeling) is likely to be the first area where quantum computers can deliver a practical advantage.

You may look at the impressive results generated by classical generative AI and think, how can a quantum computer — let alone the error-prone NISQ devices we have today — improve upon these results?  

The advantage from quantum generative models will come from their expressibility, or their ability to express the full range of possibilities for the data used to train the model. In other words, they will be able to generate a more diverse range of solutions for the generative modeling task. This comes from quantum computers’ ability to encode and sample from complex probability distributions in a way that would not be feasible using classical computers. This has been demonstrated by extensive theoretical research.  

There are two main reasons for this: the quantum phenomenon of entanglement, which can be used to encode distant correlations within a complex dataset, and wave function collapse, which allows for efficient sampling from complex probability distributions. A more detailed discussion of these phenomena and their role in quantum generative modeling can be found on page 12 of our new white paper, The Near-Term Promise of Quantum Generative AI.
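
For a toy picture of the sampling half of this argument, the sketch below (my illustration, not code from the white paper) uses random complex amplitudes as a stand-in for a state prepared by a quantum generative model, then draws bitstrings with Born-rule probabilities |amplitude|², which is exactly what measurement-induced collapse provides on real hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 3
dim = 2 ** n_qubits

# Stand-in for a state prepared by a quantum generative model:
# random complex amplitudes, normalized to a unit vector.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amps /= np.linalg.norm(amps)

# Born rule: the probability of measuring each bitstring is |amplitude|^2.
probs = np.abs(amps) ** 2

# Each measurement collapses the state to one bitstring, i.e. it draws
# one sample from the distribution the state encodes.
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```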

Unfortunately, the limitations of today’s quantum devices put the full potential of quantum generative models out of reach. But you don’t necessarily need real quantum devices to reap the advantages of quantum generative modeling. As our research has shown, tensor networks can already unlock value for generative modeling, particularly when it comes to optimization.

Tensor Networks for Optimization 

For years, it was believed that quantum annealing, or, alternatively, the QAOA algorithm, would be the key to solving combinatorial optimization problems with quantum computing. This would, however, require fault-tolerant quantum computers to work, delaying the time to value by at least half a decade. That was until we discovered you could use generative modeling for optimization problems.

We call this strategy Generator-enhanced Optimization, or GEO. In GEO, we train a generative model on the best solutions found by classical solvers, then use the model to generate novel solutions.

You could use any kind of generative model for this, but we have found that quantum-inspired tensor networks can improve upon classical models’ ability to generalize from the training data. Generalization refers to a generative model’s ability to capture the essential features of the training data. In the context of GEO, a generative model with strong generalization could generate novel optimization solutions rather than simply repeating existing ones.
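
As a rough schematic of the GEO loop described above (my sketch, not Zapata’s implementation), the code below uses a random QUBO-style objective as the optimization problem and an independent-bit model as a stand-in for the tensor-network generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x, Q):
    """Toy QUBO-style objective to minimize: x^T Q x."""
    return x @ Q @ x

n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2

# Seed pool standing in for the output of a classical solver.
pool = rng.integers(0, 2, size=(200, n))

for step in range(10):
    # 1. Keep the best solutions found so far as training data.
    best = pool[np.argsort([cost(x, Q) for x in pool])[:50]]

    # 2. Train a generative model on them. A tensor network (e.g. an MPS)
    #    would go here; an independent-bit model keeps the sketch short.
    p = best.mean(axis=0).clip(0.05, 0.95)

    # 3. Sample novel candidate solutions from the model.
    candidates = (rng.random((200, n)) < p).astype(int)

    # 4. Merge the candidates into the pool and repeat.
    pool = np.vstack([pool, candidates])

print("best cost:", min(cost(x, Q) for x in pool))
```

In the real system, step 2 is where generalization matters: a model that captures the essential features of the training pool proposes novel, low-cost candidates instead of echoing solutions it has already seen.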


Indeed, in our research we used a tensor network known as a matrix product state (MPS) to improve the output of a classical solver for a financial portfolio optimization task and generate previously unconsidered solutions. Using real data from the S&P 500 and other financial stock indexes, we showed how our MPS could generate portfolios with the same level of return but with lower levels of risk than the portfolios generated by the classical solvers alone. 
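
For intuition on the evaluation criterion (equal return at lower risk), here is a toy mean-variance comparison of two candidate portfolios. The returns and covariance below are invented for illustration and bear no relation to the actual S&P 500 experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

n_assets = 8
mu = rng.normal(0.05, 0.02, n_assets)   # toy expected returns
A = rng.normal(size=(n_assets, n_assets))
Sigma = A @ A.T / n_assets              # toy covariance matrix

def evaluate(x, mu, Sigma):
    """Return and risk of an equal-weight portfolio over assets where x=1."""
    w = x / x.sum()
    return w @ mu, np.sqrt(w @ Sigma @ w)

# Two candidate asset selections, e.g. a solver's pick and a model's pick.
for name, x in [("solver", np.array([1, 1, 0, 0, 1, 0, 1, 0])),
                ("model", np.array([1, 0, 1, 0, 1, 0, 1, 0]))]:
    ret, risk = evaluate(x, mu, Sigma)
    print(f"{name}: return={ret:.4f}, risk={risk:.4f}")
```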

Other Zapata work has further proven the superior generalization capabilities of tensor networks. In one study, a tensor network-based generative model was up to 68X better at generating unseen and valid samples compared to GANs (a classical generative model) and generated higher quality samples than those in the training set at a ratio of 61:2. We also showed that real quantum generative models, particularly quantum circuit Born machines (QCBMs), can generate samples with higher quality than the samples in the training set.

In a more recent breakthrough for GEO, we found a way to use tensor networks to encode equality constraints, or constraints that a combinatorial optimization solution must satisfy to be valid. Traditional optimizers typically generate many solutions that do not satisfy these constraints, which makes for expensive and inefficient searches. In fact, equality constraints can exponentially reduce the likelihood of finding valid solutions.  
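
The exponential shrinkage is easy to reproduce in a toy setting (my construction, not an example from the paper): each independent parity (XOR) equality constraint roughly halves the set of valid bitstrings, so k constraints leave only about a 2^-k fraction for a generate-and-reject search to stumble on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# All 2^n bitstrings as rows of a (2^n, n) matrix.
X = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1).astype(int)

# Each random parity (XOR) equality constraint roughly halves the number
# of valid bitstrings, so k constraints leave about 2^(n-k) of them.
valid = np.ones(2 ** n, dtype=bool)
for k in range(1, 9):
    mask = rng.integers(0, 2, n)
    valid &= (X @ mask) % 2 == 0
    print(f"{k} constraints: {valid.mean():.4%} of bitstrings still valid")
```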

In contrast, our solution encodes equality constraints directly into an MPS tensor network in a way that only outputs valid samples. We found that the constrained MPS outperform their standard unconstrained counterparts in finding novel and higher quality solutions to combinatorial optimization problems. 

What’s more, while most traditional methods suffer when constraints are added, our new approach actually benefits from more constraints: the more constraints we have, the more sparsely the tensor network can be parametrized. What this means is that constrained tensor networks deliver better computational performance for optimization problems while using significantly fewer computational resources.


From Quantum-Inspired to Fully Quantum  

You may be wondering at this point: if you can do so much with quantum-inspired techniques such as tensor networks, why bother with quantum computers at all?

Well, as I said before, tensor networks can faithfully model any quantum circuit, but there’s a catch: as the quantum circuit becomes more complicated, the tensor network needs exponentially more parameters to represent it. While tensor networks are easier to train and don’t suffer from the error rates of real quantum circuits, they can quickly exceed the limits of the classical computational resources available.
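
A back-of-envelope count (standard MPS parameter accounting, not a figure from the post) shows the scaling. An n-qubit MPS with bond dimension chi holds roughly 2·n·chi² parameters, and exactly representing an arbitrary, highly entangled state can require chi as large as 2^(n/2), at which point the MPS is no cheaper than storing all 2^n amplitudes directly.

```python
# Rough parameter counts: an n-qubit MPS with bond dimension chi holds
# about n * 2 * chi^2 parameters; a full state vector holds 2^n amplitudes.
for n in (10, 20, 40):
    for chi in (4, 64):
        mps_params = n * 2 * chi ** 2
        print(f"n={n:2d} chi={chi:2d}: MPS ~{mps_params:>9,} parameters "
              f"vs 2^{n} = {2 ** n:,} amplitudes")
```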

Fortunately, tensor networks can be mapped onto quantum circuits. What this means is that those who start building generative modeling applications today using tensor networks (whether for GEO, synthetic data generation, or elsewhere) will be able to swap in quantum circuits when we achieve fault-tolerant quantum computing.

There are other benefits to starting with tensor networks. Our previous research has shown how tensor networks can improve the training of real quantum generative models to outperform both classical and quantum methods on their own.  

Tensor networks may not be as dazzling as the quantum chandelier on the cover of TIME magazine. At the end of the day, it’s just math. But there are advantages to be gained today from this quantum-inspired math — advantages that will be forward compatible with and enhanced by the more powerful quantum devices of tomorrow.  

So no, we may not be in the “future of computing” yet, but those who start building quantum applications with tensor networks today will be the first to reap the benefits of that future when it does arrive — not to mention added value for optimization and other generative modeling use cases today.  


For more details on the research covered here, see our new white paper: The Near-Term Promise of Quantum Generative AI.