Quantum Machine Learning Considerations for Enterprises

Opening Day is today! Better late than never, as the saying goes. As a lifelong Cubs fan, I’m excited to see how my team will do this year. And, as a QML Software Lead at Zapata, I am looking forward to writing code with my favorite team on in the background.  

Allow me to explain… 

We all have our own tried and true ways to boost productivity and focus. I grew up in a house filled with 60s music, video games, and Cubs baseball broadcasts. Perhaps it’s not surprising, in a career of hopping around between computational physics, application development, data science and engineering (but always coding), that I have come to love working with the game on in the background — especially during those Friday afternoon games at Wrigley. 

To me, these things just go together. The same goes for quantum computing and machine learning (ML). 

Motivating Quantum + Machine Learning 

Quantum offers an entirely new way to think about computing. The classical world of computing — that of digital circuits, discrete logic and instruction sets — lends itself very well to business logic and information systems. It is highly unlikely that will change, even with a fault-tolerant quantum computer. But the natural world is not based on discrete logic and if/else statements. 

Classical computing has been the only way to compute for so long that it can be hard to appreciate that fundamentally different approaches exist, let alone that these can make some tasks extremely easy on a quantum computer (and others extremely difficult).  

Machine learning offers a great proving ground: it doesn’t fall within the world of rules engines or discrete logic, and current techniques face many unresolved limitations. There is no denying that machine learning has flourished and seen enormous adoption in industry, particularly in the past 15 years.

Nonetheless, many models have grown so complex and unwieldy that, as a recent IEEE Spectrum article states, ML researchers may be “nearing the frontier of what their tools can achieve.” Why are our brains so much more efficient and robust at image processing than the latest and greatest deep learning models? Is our approach making a relatively simple computation very difficult? 

We are free to explore but bound by our presuppositions. From decision trees to regression, the legacy of classical computing is immediately apparent in machine learning. Even neural networks, whose neurons activate only when their input exceeds a certain threshold, resemble the switching behavior of a transistor. For any new approach, we need to rethink each part of the learning algorithm, from encoding the data to preparing an architecture to training against a data set and decoding the result.
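
To make that concrete, here is a minimal sketch, assuming Qiskit (this article doesn’t name a library), of how those stages might look for a single data point: a classical feature is angle-encoded into a rotation, a trainable rotation and an entangling gate form the architecture, and measurement decodes the result into a distribution over bitstrings.

```python
# A minimal sketch of the QML pipeline stages, assuming Qiskit (the article
# names no library); parameter values and circuit shape are illustrative.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

x = Parameter("x")  # a classical feature to encode
w = Parameter("w")  # a trainable weight

qc = QuantumCircuit(2)
qc.ry(x, 0)       # encoding: map the data point to a rotation angle
qc.ry(w, 0)       # architecture: a trainable rotation
qc.cx(0, 1)       # architecture: an entangling two-qubit gate
qc.measure_all()  # decoding: sample a distribution over bitstrings

# Bind a concrete data point and the current weight before execution.
bound = qc.assign_parameters({x: 0.3, w: 1.2})
print(bound.draw())
```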

All of this makes for a very exciting and unique challenge. The silver lining is there’s actually a lot of overlap between the prerequisites for quantum machine learning (QML) and most modern enterprise data architectures. While the algorithms will change, the way an enterprise manages the life cycles of its data and models (the business logic side of things) is unlikely to change.

Change is constant, in life as well as ML (and QML) 

It’s important to keep in mind ML’s dynamic nature. Research is constantly evolving and improving with new architectures and algorithms, so QML is, in a sense, competing against a moving target. It’s our collective responsibility to stay on top of the research in both fields to understand the pros and cons of each and propose the hybrid architectures that take advantage of both, all while benchmarking against the best-in-class classical algorithms. 

Being a quantum scientist is not a prerequisite

Just as an ML engineer doesn’t need to know the semiconductor physics underpinning the behavior of transistors, QML engineers do not need to know about the quantum physics of a particular hardware implementation. We can instead focus on the types of elementary operations at our disposal, called gates in both classical and quantum computing, to create end-user applications that can be used by anyone.  
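
As a toy illustration of that shared abstraction, here’s a sketch (plain NumPy, no particular hardware implied) putting a classical NOT gate next to its quantum counterparts:

```python
# A toy sketch of the shared "gate" abstraction: a classical NOT beside the
# quantum X and H gates as matrices (plain NumPy, no hardware implied).
import numpy as np

classical_not = {0: 1, 1: 0}                  # flips a bit
X = np.array([[0, 1], [1, 0]])                # quantum X gate: flips |0> and |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: no classical analog

state = np.array([1.0, 0.0])  # the |0> state
print(X @ state)              # [0. 1.] -- acts like NOT
print(H @ state)              # [0.707... 0.707...] -- an equal superposition
```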

QML applications and methods should be as user-friendly as possible. Consider the classical machine learning libraries PyTorch and TensorFlow: one doesn’t need to know how backpropagation works in order to use them effectively and have them translate into actual business insights. The same holds for good QML software.
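
For instance, in a minimal PyTorch training step (the model and data below are illustrative placeholders), all of backpropagation hides behind a single call:

```python
# A minimal PyTorch sketch; the model and data are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x, y = torch.randn(16, 4), torch.randn(16, 1)  # stand-in batch
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()  # all of backpropagation happens here, behind one call
opt.step()       # update weights with the computed gradients
```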

All that said, there are certainly varying degrees of complexity depending on the use case and industry. If you think about the work going on in computational chemistry or some of the more physical sciences, there’s a very different language, a different vocabulary, and a much deeper scientific knowledge base required.

It’s the models that matter 

Outside of data management, machine learning in the enterprise is about finding the models or neural network architectures that align best with your data set and the insights you hope to glean from it. These models could be classical, quantum, or a hybrid of the two. In general, though, the process includes the following steps (a brief code sketch follows the list):

  • Start with a dataset and an objective — What do we hope to learn from this project?
  • Exploration — What’s going on in the dataset? How “clean” is it, and how much munging is needed to map it all to a standard format? (The adage that 95% of a data scientist’s job is munging and cleaning data is absolutely true.)
  • Feature engineering — Which aspects of the dataset are important? Does the data contain features that reasonably map to a desired output or label? Information from other data sources may need to be integrated to train a reliable model.
  • Model creation — Which algorithms or architectures might be most beneficial? This is where hybrid architectures and QML come into play.
  • Training — There are many tools available for hyperparameter tuning to automate this process and make sure you’re getting the most out of your model.
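
Here’s that sketch of the model-creation and training steps, using scikit-learn purely for illustration (this article doesn’t prescribe a library); the dataset and parameter grid are stand-ins:

```python
# A hedged sketch of model creation and training with scikit-learn (the
# article names no library); the dataset and parameter grid are stand-ins.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)  # start: a dataset and an objective
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}

# Training: hyperparameter tuning via cross-validated grid search.
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```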

Once a suitable model is identified, it can be put into production. The model is a living thing, just like software: you’ll always be updating it, versioning it, and tweaking it for new features, new data sets, or new hyperparameters. A battle-tested and automated MLOps pipeline is essential for this life cycle.
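
To sketch what that versioning might look like, assuming MLflow as the tracking tool (my pick for illustration, not a recommendation from this article):

```python
# A hedged sketch of model versioning, assuming MLflow as the tracking tool
# (chosen for illustration only); the names and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

with mlflow.start_run(run_name="rf-baseline-v1"):  # hypothetical run name
    mlflow.log_param("n_estimators", 100)          # record hyperparameters
    mlflow.sklearn.log_model(model, "model")       # store a versioned artifact
```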

The Cubs and Quantum Computing Have More in Common Than You Might Think 

The newness and potential of quantum computing, like ML before it, make for a fascinating research field. As with the graphics and speed of the video games I played as a teenager, it’s hard to believe how much things can change in a short amount of time.

When I was in grad school, quantum computing — and the Cubs winning a World Series — were just pipe dreams. I’d roll my eyes whenever those subjects came up and think they were never going to happen. Today, the Cubs are only a few years removed from winning the World Series and there’s an entire industry dedicated to quantum computing. 

Never before has quantum computing had as much attention from experts in machine learning, numerical optimization, and quantum simulation. 

QML’s impact will grow with our theoretical understanding as much as it will grow with the hardware. We are very much in uncharted territory. Then again, so were the Cubs in Game 7 of the 2016 World Series, and that worked out pretty well. I’m hoping the same for QML. 

For more on what quantum can bring to ML, check out the podcast I recently did with my colleague Luis Serrano, Ph.D. If terms like “probability distributions” and “generative models” get you fired up, then you will love this discussion! 

[note: this article originally appeared in The New Stack] 

Author
Brian Dellabetta, Ph.D.
QML Software Lead