We felt your pain – and built Orquestra®

We built Orquestra® to help accelerate the work of quantum researchers inside Zapata and in our broader community. That means we care about your time spent reading, too. Here is a TL;DR before you jump into this lengthy blog post:

Our goal with Orquestra is to accelerate the quantum researchers who are in high demand today and to provide a smoother road for future researchers entering the field. In addition to easing access to raw computational power (both quantum and classical), we are introducing extensibility, modularity, reproducibility and scalability to multiply the capabilities of quantum researchers and avoid much of the pain that comes with doing this work (read on to see how). We are also building for industry teams working to solve valuable business problems with computationally complex workflows.

Our Current Bottlenecks in the Quantum Revolution

The burgeoning field of quantum computing is in a unique position. Once restricted to a handful of academic labs throughout the world, quantum computing systems are now becoming commercially available through the cloud. Growing interest in near-term algorithms and experiments (such as Google’s quantum supremacy announcement) has excited both the research community and society at large. However, the sudden growth of quantum computing has revealed many bottlenecks that stymie progress. At the top of the list are talent and tools.

  • Talent

I recently attended an NSF workshop on growing the quantum workforce; the unanimous opinion of both academic and industry scientists was that the community is not producing enough researchers with expertise in quantum algorithms to meet demand. This demand is increasing perhaps more rapidly than in any other subfield of quantum computing, as industries expand into the field and hire aggressively. To make matters worse, many of those who do become qualified quantum computing researchers are in high demand by companies outside the field.

  • Tools

In our first year at Zapata, the need for better tools became abundantly clear. The set of tools that quantum algorithm researchers have access to is incommensurate with the complexity of the space. Making improvements to near-term quantum algorithms requires a broad range of expertise and skills: researchers need to be sophisticated coders as well as well-versed in quantum computing approaches and application areas. The most successful progress is generally made by interdisciplinary teams, as the complexity is too great for any individual. The industry has begun providing new tools to make creating and running quantum algorithms easier, but these systems are not interoperable, so coders are forced to learn and glue together an ever-growing set of languages and libraries.

To provide an analogy, quantum computing software looks like pre-Industrial Revolution manufacturing. Manufactured goods were produced by individual experts, and in some cases brought together to make a final product. Manufacturing took a leap with the concept of interchangeable parts, and again with the development of the assembly line. In each case, the burden on the expert was reduced, while simultaneously enabling a better end-product. In the classical computing industry, a similar leap was taken with the development of operating systems.

Orquestra Was Born from Necessity

Orquestra began as an internal tool to accelerate Zapata’s scientists by providing a framework in which we could collaborate and scale our developments to an arbitrary size. After seeing our own scientists produce results in weeks that would previously have taken closer to a year, we realized that the tool could help accelerate the growth of the industry outside of Zapata. We aspire to make Orquestra the ultimate quantum assembly line; today, at the beginning of the Quantum Revolution, that starts with using interchangeable quantum parts in workflows.

A clear benefit of using workflows is that they enable us to exchange and interweave classical and quantum subroutines. This is essential for production-ready, enterprise applications. As with a car, we can drop in a faster, newer engine without having to build an entirely new car. Later, that same car can be improved again by introducing a new transmission. For quantum, modularizing the components of a software pipeline additionally means that benchmarking, testing, and deploying quantum algorithms can be done without building an entirely new system. This is critical for the forward compatibility of the quantum algorithms and workflows being created today: we can “upgrade the machinery” as quantum devices mature.
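
To make the analogy concrete, here is a minimal sketch in Cirq (emphatically not Orquestra’s actual API) of the drop-in pattern: the experiment depends only on an abstract sampler interface, so the backend can be swapped without touching anything else.

```python
import cirq

def run_experiment(sampler: cirq.Sampler, repetitions: int = 1000) -> dict:
    """Build a small circuit and run it on whichever 'engine' is supplied."""
    q0, q1 = cirq.LineQubit.range(2)
    circuit = cirq.Circuit(
        cirq.H(q0),
        cirq.CNOT(q0, q1),
        cirq.measure(q0, q1, key="m"),
    )
    result = sampler.run(circuit, repetitions=repetitions)
    return dict(result.histogram(key="m"))

# Swap the engine without rebuilding the car:
print(run_experiment(cirq.Simulator()))
# Any other cirq.Sampler (e.g., a hardware-backed one) drops in the same way.
```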

This sounds good in theory, but how about in practice? For the rest of this blog, I want to give some examples of how Orquestra addresses the most agonizing challenges of modern quantum algorithm designers.

Introducing extensibility, modularity, reproducibility and scalability to multiply the capabilities of quantum researchers

  • Extensibility

One of the most frustrating experiences in life is meaningless or lost work. With nascent technologies like quantum computing, lost work can mean years of your life. Imagine spending an enormous amount of time testing and integrating a quantum algorithm for one hardware system, only to have a different hardware system emerge as best-in-class. Or suppose you spent a lot of time in graduate school learning to use Cirq, only to get a great job offer from a company focused on IBM’s Qiskit library.

Orquestra can’t mitigate the risks for hardware companies’ technologies, but it does mitigate the risk of writing software and algorithms for them. Workflows in Orquestra are deployed as a set of self-contained environments (containers) that are told how to work together. These environments interact through a simple input-output relationship specified by the workflow, so you don’t need to figure out how to get code that assumes a Python 2 environment to integrate with a Python 3 environment. Plus, Orquestra provides the context, or the “conveyor belt”, that allows integration between different quantum programming languages. It’s simple to write a workflow that expresses a circuit in Cirq, uses a circuit compiler from Qiskit, and then runs on a superconducting processor, a classical simulator, and an ion trap, all in parallel.
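
As a rough illustration of the kind of interop this enables, here is a hand-rolled sketch that uses OpenQASM 2.0 as the interchange format between Cirq and Qiskit. In Orquestra the workflow provides this glue for you, so take this only as a picture of the idea:

```python
import cirq
from qiskit import QuantumCircuit, transpile

# 1. Express the circuit in Cirq.
q0, q1 = cirq.LineQubit.range(2)
bell = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key="m"))

# 2. Hand it to Qiskit's compiler, using OpenQASM 2.0 as the interchange format.
qiskit_circuit = QuantumCircuit.from_qasm_str(bell.to_qasm())
compiled = transpile(
    qiskit_circuit,
    basis_gates=["rz", "sx", "x", "cx"],  # a typical superconducting gate set
    optimization_level=2,
)
print(compiled)

# 3. From here, the compiled circuit could be dispatched to a simulator or a
#    hardware provider; in a workflow, each step would be its own container.
```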

Better yet, it doesn’t even have to be your code. Did a nice paper on arXiv come out that uses a nifty new ansatz for the project that you are working on? If the authors ran it in Orquestra and open-sourced the code, including it in your workflow may take no more than a single copy/paste, even if they used a totally different language than you did. This enables collaboration in a totally new way. For industry users, it provides a means of outsourcing some difficult challenges and the peace of mind that you don’t have to bet on which hardware technologies will win. Best of all, for the whole industry, it means progress on quantum algorithms is made faster and more easily than before.

  • Modularity

A recurring nightmare of my academic life was getting a referee report on a paper that said, more or less, “You need to incorporate noise into your simulations.” I hated this because none of the spaghetti code I’d written over the course of the project ever assumed a noise model, so noise appeared nowhere in my code; I’d have to rewrite the whole project from scratch. I was an expert on quantum algorithms, but sadly not an expert on how to write efficient code that would run on my laptop and return an answer within my lifetime.

Today, an Orquestra workflow provides me with a sensible quantum architecture to start from. If I run a simulation, I can choose which noise models to include and at what strength, and that choice doesn’t affect how I code the rest of the experiment. Now, if I choose a noiseless protocol and get the same referee comments, I can adjust the noise model without touching anything I had already done: I just update the parameters of my simulator or switch to a different simulator backend.
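
For a flavor of what this looks like in a plain Cirq script (Orquestra aside), note how the noise model lives entirely in the simulator configuration while the experiment itself never changes:

```python
import cirq

# The experiment itself never changes.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key="m"))

# Only the simulator configuration changes between revisions.
simulators = {
    "noiseless": cirq.Simulator(),
    "depolarizing, p=0.01": cirq.DensityMatrixSimulator(noise=cirq.depolarize(p=0.01)),
}
for label, sim in simulators.items():
    counts = sim.run(circuit, repetitions=1000).histogram(key="m")
    print(label, dict(counts))
```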

But referees weren’t the only nightmare in my academic life. One of the most tedious and unpleasant tasks was working with collaborators’ code. There was nothing wrong with my collaborators; they did exactly what I did, which was write code in a very expedient way. I don’t know how many libraries were copy/pasted and imported into notebooks only to remain unused, or how many days I spent trying to figure out what each function was doing. This was not because I needed to use their code myself, but because I needed to know how my own code would interface with it. The modularity of Orquestra lets me collaborate at a much higher level without having to worry about the interface between someone else’s code and my own.
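
The underlying idea is a plain input-output contract between steps. Here is a hypothetical sketch (the name `estimate_energy`, the file `energy.json`, and the returned values are all invented for illustration, not Orquestra’s task format):

```python
import json
from typing import List

def estimate_energy(angles: List[float]) -> dict:
    """A collaborator's step: the signature is all I need to know."""
    # ... their implementation, in whatever style they like ...
    return {"energy": -1.0, "angles": angles}  # placeholder output

# My step consumes only the declared artifact, never their internals.
artifact = estimate_energy([0.1, 0.2])
with open("energy.json", "w") as f:
    json.dump(artifact, f)
```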

  • Reproducibility

Remember that awesome arXiv paper whose data you wish you could reproduce so that you could make some great comparisons with your own work? Considering that much of scientific progress is made through incremental improvements, it is telling how much effort we waste reproducing the work of others, often only to fail at it anyway. Even if someone is gracious enough to provide you with their code, the task can be tedious. Academic coders are especially guilty of writing spaghetti code that is expedient and not really intended to be shared. One of the elegant properties of Orquestra workflows is that simulations are reproducible, and thus sharable, by default.

Moreover, Orquestra’s reproducibility is not limited to the ease of sharing code. When I began working in machine learning, I was amazed at how chaotic the performance of systems could be with respect to changes in the parameters of the model. Some of the best and worst examples of performance (which are often the most interesting to study) were lost in early iterations. Had I been a less expedient coder, I might have taken the time to design reproducibility into all of my experiments, but that was hardly my priority. Orquestra’s Data Correlation Service exists to relieve you of this headache. It saves all of the inputs and outputs of every task in your workflow, together with the workflow’s metadata. What generated that pesky outlier? Now you know.
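
To illustrate the concept (this is not the service’s API, just a hand-rolled sketch of the bookkeeping it automates; the task name and values are invented):

```python
import json
import time
import uuid

def record_run(task_name: str, inputs: dict, outputs: dict) -> str:
    """Persist a task's inputs and outputs alongside run metadata."""
    record = {
        "run_id": str(uuid.uuid4()),
        "task": task_name,
        "timestamp": time.time(),
        "inputs": inputs,
        "outputs": outputs,
    }
    path = f"{record['run_id']}.json"
    with open(path, "w") as f:
        json.dump(record, f)
    return path

# Later, a pesky outlier can be traced back to the exact inputs that made it.
path = record_run("vqe_step", {"p": 0.01, "seed": 42}, {"energy": -1.0})
```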

  • Scalability

Have you ever coded up a great experiment only to discover that getting a single data point takes about 10 hours on your laptop? Or that your plot would look a lot better if you could just scale up by two more qubits? If you work in quantum computing, you undoubtedly know what I’m talking about.

If there was one thing we knew about designing software to support quantum researchers, it was that we needed a platform where we could dedicate an arbitrary amount of computing power to a problem. We also knew that this is vital to any company with an appetite for high-performance computing. When we decided to make Orquestra a product, that meant building in support for a wide range of high-performance computing backends: cloud services like AWS and Azure, as well as on-premises hardware for those who already have access to a supercomputing cluster. Many of these resources are available now or are being built as we speak.
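
The underlying pattern is a simple fan-out. As a toy sketch using only Python’s standard library (real scaling happens across clusters, not laptop cores, and the simulation here is a placeholder):

```python
from concurrent.futures import ProcessPoolExecutor

def single_data_point(num_qubits: int) -> float:
    """Stand-in for an expensive simulation; returns a placeholder value."""
    return float(num_qubits)

if __name__ == "__main__":
    # Locally this fans out across CPU cores; a workflow engine applies the
    # same pattern across cloud or on-premises clusters.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(single_data_point, range(2, 12)))
    print(results)
```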

Join us

From early user feedback, we are hearing that Orquestra is indeed helping with these bottlenecks in quantum computing work, and we’ll keep attacking them relentlessly as we improve the product. Our desire is to make quantum computing easier and increasingly accessible so more people can bring their own expertise and join us here. If you’d like to join an upcoming Orquestra demo or get on the waitlist for our private beta, please contact us or DM me at @JonnyQlson. Zapata exists to accelerate your work in quantum and we want to hear from you.

 

Author
Jonathan Olson, Ph.D.

Associate Director of Quantum Science IP & Co-Founder