How HPC Centers Can Start the Move to Quantum Now


by Mark Mattingley-Scott, Quantum Brilliance

Quantum computing and quantum technology promise to touch every industry. However, a quantum computer that can process data better and faster than a classical computer, in a practical manner for solving real-world challenges, is not available. Yet.

A challenge for quantum providers is to assure high-performance computing (HPC) centers, supercomputing centers, enterprises, governments and other potential customers that the time to begin their quantum journey is now, in advance of commercially useful quantum computing.

To get started, it’s best for HPC centers and their customers to pursue a small, incremental quantum strategy. A solution that provides a running start today and scales simply as the technology improves is optimal for clients who aspire to achieve quantum utility before their competitors, or who expect quantum computing to become a highly impactful technology in the near future.

HPC and supercomputing centers should start exploring quantum as soon as possible. Users soon will be – or already are – expecting these centers to begin testing and preparing for the quantum era. What they and their enterprises will want is quantum computing with as little complexity and cost as possible.

Some quantum modalities, such as superconducting qubits, are prohibitively expensive. Because heat causes errors in the qubits, these quantum building blocks must be refrigerated to near absolute zero, which requires massive amounts of energy. The operating costs of these large, mainframe-like systems climb steeply alongside any gains in computing speed.

Customers will likely expect their HPC centers to carry the load of hosting an on-site quantum computer and to put quantum-classical hybrid computing to work on their behalf. That being the case, HPC and supercomputing centers will want to consider a modality capable of offering an easy-to-deploy, on-premises solution.

While not as widely seen or known, rack-mountable quantum systems that operate at room temperature do exist and can scale according to the growth interests of HPC customers. These systems exploit defects in synthetic diamond to control quantum-mechanical spin. Nitrogen-vacancy (NV) centers in these diamonds are well suited for manipulating electron spins to create qubits, and the diamond lattice insulates nuclear-spin qubits from heat and other environmental interference, or “noise,” without any cooling.
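For readers who want a feel for the underlying physics, the NV center’s electronic ground state is commonly modeled with a textbook spin-1 Hamiltonian, shown here in a simplified form (strain and hyperfine terms omitted):

$$ H/h \;\approx\; D\,S_z^2 + \gamma_e B_z S_z, \qquad D \approx 2.87\ \mathrm{GHz}, $$

where $S_z$ is the electron spin operator along the NV axis, $B_z$ is an applied magnetic field and $\gamma_e \approx 28$ MHz/mT is the electron gyromagnetic ratio. The large zero-field splitting $D$ gives the spin a microwave-addressable transition near 2.87 GHz even with no applied field, which is part of what makes control without cryogenics practical.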

Synthetic-diamond quantum accelerators offer the longest coherence time of any room-temperature quantum state, and the qubits can operate anywhere a classical computer can. This approach enables cloudless quantum computing at the point of need – no extreme refrigeration, complex lasers, vacuum systems or physical separation from classical computers, and none of those factors stand in the way when it comes time to scale.

Diamond quantum sets the table for hybrid quantum-classical supercomputing centers in which quantum accelerators sit in immediate proximity to classical processors. As the technology progresses, these centers can plan to scale toward deployment of massively parallelized quantum accelerators and prepare to provide powerful co-computing across several types of chip architectures.
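To make the hybrid pattern concrete, here is a minimal sketch of the kind of loop an HPC job could run against a co-located accelerator, written in Python. It uses Qiskit’s Aer simulator as a stand-in for local quantum hardware; the circuit, the parameter sweep and the function names are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal hybrid classical-quantum loop: the classical side sweeps a
# parameter; the "accelerator" (here, Qiskit's Aer simulator standing in
# for a local, room-temperature QPU) evaluates each candidate circuit.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

backend = AerSimulator()  # placeholder for an on-premises quantum device

def run_trial(theta: float, shots: int = 1024) -> float:
    """Run one small circuit locally and return the measured p('1')."""
    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)   # single-parameter rotation
    qc.measure(0, 0)
    counts = backend.run(transpile(qc, backend), shots=shots).result().get_counts()
    return counts.get("1", 0) / shots

# Classical outer loop: a naive grid search over the rotation angle.
best_p, best_theta = max((run_trial(k * 0.1), k * 0.1) for k in range(32))
print(f"best p(1) = {best_p:.3f} at theta = {best_theta:.2f}")
```

In production, the grid search would be replaced by a classical optimizer running on the HPC nodes, with each circuit evaluated on the adjacent accelerator rather than queued through a remote cloud service – exactly the low-latency, co-located pattern described above.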

Companies in fields long considered initial adopters of quantum – including logistics, mobility, manufacturing, pharma and materials science – will soon be looking to their HPC providers to offer hybrid classical-quantum solutions. It’s critical for HPC centers to conduct their due diligence now and understand the best way forward to meet their customers’ needs.

Dr. Mark Mattingley-Scott, Chief Revenue Officer for Quantum Brilliance, teaches human and machine learning at the Institute for Cognitive Science at the University of Osnabrück, is a director of the Frankfurt Institute for New Media and a senior member of the IEEE, and has 30+ years of experience in commercial technology and research, many of them with IBM.