Quantum Computing – and the (possible) end of IT security as we know it

Photo of the IBM Quantum Computer Q

Q, the 50 Qubit IBM Quantum Computer

Apologies for the apocalyptic title, but I’m writing for people who know to read beyond the headline. Don’t worry, we are not about to easily factor the product of two large primes on a Quantum Computer (a computer based on quantum mechanical principles such as superposition and entanglement) just yet. Unless, that is, someone finds a better quantum algorithm to do it with than we currently have (and someone could just as easily come up with a better classical algorithm for this task, although there is nothing on the horizon there, either). When we do, much current cryptography (notably public/private key cryptography) will become breakable in a useful timeframe.

Here is a latest generation IBM Quantum Computer Q, with 50 Qubits (Quantum Bits) and very pretty it is. This machine (there are other approaches) is the basis for the discussion below.

A Quantum Computer uses “Qubits”, with quantum behaviours, just as a classical computer (like your laptop or smartphone) uses bits. The advantage of Qubits is that, while a classical bit can be in only one of 2 states (0 or 1), a single Qubit can exist in a superposition of both at once, and this moves calculations that are infeasibly large for the best classical computers into feasibility. In other words, problems which need more classical bits than can fit on the average planet can run on a Quantum Computer that fits in a normal computer centre. The power of a classical computer grows roughly in proportion to the number of processors one adds; but the number of states a Quantum Computer can represent goes up exponentially, doubling with each Qubit you add.
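As a back-of-envelope illustration of that exponential growth, here is the memory a classical computer would need just to store the full state of n Qubits (this sketch, and its assumption of 16 bytes per complex amplitude, are mine, purely for illustration):

```python
# Simulating n qubits classically needs 2**n complex amplitudes,
# so the memory required doubles with every qubit added.
def classical_memory_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store the full state vector of n qubits
    (16 bytes = one double-precision complex number)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, classical_memory_bytes(n))
```

Ten Qubits need about 16 KB and 30 need about 16 GB, but exactly simulating the 50 Qubit machine pictured would take roughly 2^54 bytes (tens of petabytes), which is why approximate classical simulation is already straining to keep up.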

The devil, of course, is in the detail (see a later blog): keeping Qubits around long enough to be useful is tricky, and they sometimes give “poor quality” answers. One tends to run the quantum calculation many times and plot the results, looking for a peak corresponding to the right answer (a Quantum Computer is fast enough that the extra “experiments” don’t matter too much).
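A toy model of that “run it many times and look for the peak” procedure (an entirely classical simulation; the function names and probabilities here are made up for illustration):

```python
import random
from collections import Counter

def noisy_quantum_run(correct=3, n_outcomes=8, p_correct=0.5, rng=random):
    """Toy model of one noisy run: returns the correct outcome with
    probability p_correct, otherwise a uniformly random outcome."""
    if rng.random() < p_correct:
        return correct
    return rng.randrange(n_outcomes)

def most_likely_answer(shots=1000, **kwargs):
    """Repeat the noisy 'experiment' many times and take the
    histogram peak as the answer."""
    counts = Counter(noisy_quantum_run(**kwargs) for _ in range(shots))
    return counts.most_common(1)[0][0]

random.seed(0)
print(most_likely_answer())
```

Even though any single run is wrong half the time, the histogram over a thousand runs shows an unmistakable peak at the right answer.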

Reducing errors in Quantum Computers is vital and most of what you see in my picture of Q is just liquid helium plumbing, used to get the Qubit chips as close to absolute zero (0 kelvin) as possible, so they can do useful work. Noise is the very devil in Quantum Computing, and there is some noise (zero point energy) even at absolute zero. What this all means is that building an effective Quantum Computer is cutting-edge engineering and hackers aren’t going to be building them (well, not using this architecture anyway) in their bedrooms.

Older generations of Quantum Computer are available to play with, as services, over the Web, but aren’t powerful enough to do anything damaging. The very latest generations of Quantum Computers are also available on the Web, but (depending on vendor) often only to a restricted community, and even they aren’t all that powerful yet (although the 50 Qubit Q in my picture is, now, somewhat more powerful than its approximate simulation on a classical computer). The 16 Qubit version of IBM’s Q is available, free but with registration, to anyone.

To program these machines, you used to have to understand quantum mechanics and program in a fairly abstruse assembler language – but high-level languages for Quantum Computing have been available for some time (see here). Now, IBM has even introduced an SDK – QISKit (see here) – for programming a Quantum Computer in Python.
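Purely to give a flavour of what such a program manipulates, here is a plain-Python sketch of a single qubit put into equal superposition by a Hadamard gate (this uses only the standard library, not QISKit itself, and the function names are my own):

```python
def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1):
    maps |0> -> (|0>+|1>)/sqrt(2) and |1> -> (|0>-|1>)/sqrt(2)."""
    a0, a1 = state
    s = 1 / 2 ** 0.5
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities are the squared amplitude magnitudes."""
    return tuple(abs(a) ** 2 for a in state)

state = hadamard((1, 0))     # start in |0>, apply the Hadamard gate
print(probabilities(state))  # equal chance of measuring 0 or 1
```

Measuring this state many times would return 0 about half the time and 1 the other half; an SDK such as QISKit wraps gates and measurement like these in higher-level calls, on real hardware as well as simulators.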

Think of a Quantum Computer as a sort of graphics co-processor but for a very specific set of highly parallelisable operations. It isn’t going to power your word processor or do general-purpose computing but, potentially, when you ask your software to find an optimised solution to a problem, it will have the option of calling in (over the web) a subroutine running on a quantum computer that returns the optimised solution quickly enough for you to still find it useful. Nevertheless, Quantum Computers aren’t magic, they have limits. They can’t, for example, solve NP-complete (see here) problems as far as we know, which means that (rather to my surprise) they probably can’t help much with the Travelling Salesman Problem (see here).

In short, Quantum Computing is a thing that you can play with now, and it is well on course to do some very useful things for a very specific subset of computing problems in fashionable fields (usually given as AI, Analytics, and Materials Science (chemistry); as well as cryptography). Currently, however, we are at the stage where real quantum computers are only just starting to deliver results somewhat more effectively than an approximate simulation of a quantum computer on a classical digital computer can.

There is an algorithm today (Shor’s algorithm) for factoring large numbers on a quantum computer, but attacking the cryptography currently used in the business world would require far more fault-tolerant (“high quality”) Qubits than we are likely to have for many years. Even so, that could always change with breakthroughs in technology – there is a real and significant risk. The days of current public key cryptography, used to secure everything from e-Commerce and Digital Signatures to internet communications, are definitely numbered. It’s just that we aren’t sure how numbered they are (and exciting modern schemes like Elliptic Curve cryptography, which use shorter keys, will be at risk sooner than boring old schemes like RSA, which use longer keys).
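To see why this is a quantum threat at all, here is the classical half of Shor’s approach in miniature: factoring reduces to finding the “order” of a number, and the brute-force order() loop below is exactly the step a quantum computer does exponentially faster (a toy sketch with my own function names, only workable for tiny numbers):

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force; finding this
    order is the step a quantum computer speeds up exponentially)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    """Shor's classical post-processing: given the order r of a mod n,
    gcd(a**(r//2) - 1, n) often yields a nontrivial factor of n."""
    r = order(a, n)
    if r % 2:
        return None            # odd order: try a different a
    f = gcd(pow(a, r // 2) - 1, n)
    return f if 1 < f < n else None

print(factor_via_order(15, 7))
```

For n = 15 and a = 7 the order is 4, and gcd(7² − 1, 15) = 3 pops out a factor. With a 2048-bit RSA modulus the order-finding loop is hopeless classically, which is precisely the gap Shor’s quantum subroutine closes.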

This matters, even today, because we store some encrypted information for a long time, often for legal reasons. Years or decades into the future, this information, if encrypted by public key encryption, will not be protected any longer – and remember that even if the data itself is protected by quantum-resistant secret key encryption (as it often is, for performance reasons) the secret keys will usually be protected by vulnerable public key encryption. Once you have decrypted the key, the archive is open. There are well-established safe ways to apply quantum-resistant encryption to vulnerable encrypted data you already have, without decrypting it, but doing this might be a logistics nightmare, and it also introduces the opportunity to do it wrong. So, ideally, we need to begin to address the impending cryptography issue now, before Quantum Computers become powerful enough to break our current public key cryptography algorithms.

Lattice Cryptography


Luckily, there are encryption algorithms today, such as lattice-based cryptography (being developed by IBM), which rely on areas of mathematics that a Quantum Computer isn’t especially good at. These algorithms, often called “quantum-resistant algorithms”, run perfectly well on classical, non-quantum computers, and we can start moving over to them now, before Quantum Computers render current cryptography obsolescent, especially for data we expect to store well into the future. According to IBM, the problems behind lattice-based cryptography, for example, “have been widely studied since the 1980s and have not succumbed to any algorithmic attacks, either classical or quantum”; and Christopher Sciacca, Communications Manager, EMEA, IBM Research, says that IBM is “already open to pilot lattices with clients today”.
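To give a rough feel for the “noisy inner products” idea that lattice schemes build on, here is a toy, wildly insecure, Regev-style learning-with-errors sketch (the parameters and names are mine and purely illustrative; real schemes use much larger dimensions and carefully chosen noise):

```python
import random

# Toy LWE encryption of a single bit. Security rests on the hardness of
# recovering the secret from noisy inner products; these tiny parameters
# only illustrate the mechanics.
Q, N, SAMPLES = 257, 8, 10   # modulus, secret dimension, public samples

def keygen(rng):
    secret = [rng.randrange(Q) for _ in range(N)]
    public = []
    for _ in range(SAMPLES):
        a = [rng.randrange(Q) for _ in range(N)]
        e = rng.choice((-1, 0, 1))                       # small noise
        b = (sum(x * s for x, s in zip(a, secret)) + e) % Q
        public.append((a, b))
    return secret, public

def encrypt(public, bit, rng):
    """Sum a random subset of samples; encode the bit as 0 or Q//2."""
    subset = [p for p in public if rng.random() < 0.5] or [public[0]]
    ca = [sum(a[i] for a, _ in subset) % Q for i in range(N)]
    cb = (sum(b for _, b in subset) + bit * (Q // 2)) % Q
    return ca, cb

def decrypt(secret, ct):
    ca, cb = ct
    d = (cb - sum(x * s for x, s in zip(ca, secret))) % Q
    # d is (small noise) for bit 0, or (Q//2 + small noise) for bit 1
    return 0 if min(d, Q - d) < Q // 4 else 1

rng = random.Random(42)
sk, pk = keygen(rng)
print(decrypt(sk, encrypt(pk, 1, rng)))
```

Notice that everything here runs on an ordinary classical computer; the quantum resistance comes from the underlying mathematical problem, not from any quantum hardware.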

So, there really is no need to panic. We have time to keep encryption secure, as long as we start around now and remain aware of the development of Quantum Computing generally. The US National Institute of Standards and Technology, NIST, has already initiated a process to solicit, evaluate, and standardise one or more quantum-resistant public-key cryptographic algorithms, and a decision is expected within the next few years. There is also the SAFECrypto project, under the European Horizon 2020 initiative.

Quantum Computers using different Qubits/technologies are being developed by Intel, Google, Microsoft, D-Wave, Rigetti and probably others; all with differing degrees of hype (from pretty low to outrageous – vapourware is always perfect), and one of these technologies might suddenly get lucky. Nevertheless, although IBM plausibly claims: “Today, quantum computing is a researcher’s playground. In five years, it will be mainstream”, I rather like a quote from Jim Clarke, director of quantum hardware at Intel Labs, from a Scientific American article in May 2018: “We and others have been saying [commercially effective Quantum Computing is] 10 years away. Some are saying it’s just three years away, and I would argue that they don’t have an understanding of how complex the technology is”. The point is, predicting the future is easy: mainstream Quantum Computing will happen; but saying exactly when it will happen is hard.

In subsequent blogs, I will look at questions such as “What is Quantum Computing?”, “How do the different approaches differ?” and “How does quantum-resistant encryption work?”. I hope to look at more positive uses of Quantum Computers than the breaking of encryption, but I don’t anticipate any very easy reading!

David Norfolk

My current main client is Bloor Research International, where I am Practice Leader with responsibility for Development and Governance. I am also Executive Editor (on a freelance basis) for Croner's IT Policy and Procedures (a part-work on IT policies). I am also on the committee of the BCS Configuration Management Specialist Group (BCS-CMSG). I became Associate Editor with The Register online magazine – a courtesy title as I write on a freelance basis – in 2005. Register Developer, a spin-off title, started at the end of 2005, and I was launch editor for this (with Martin Banks). I helped plan, document and photograph the CMMI Made Practical conference at the IoD, London in 2005 (http://ww.cmminews.com). I have also written many research reports, including one on IT Governance for Thorogood. I was freelance Co-Editor (and part owner) of Application Development Advisor (a magazine, www.appdevadvisor.co.uk, now defunct) for several years. Before I became a journalist in 1992, I worked for Swiss Bank Corporation (SBC). At various times I was responsible for Systems Development Method for the London operation, the Technical Risk Management framework in Internal Control, and was Network Manager for Corporate group. I carried out a major risk evaluation for PC systems connecting across the Bank’s perimeter to external systems and prioritised major security issues for resolution by the Bank’s top management in London. I also formulated a Security Policy for London Branch and designed a secure NetWare network for the Personnel Dept. Before 1988 I was an Advisory Systems Engineer at Bank of America, Croydon, in database administration (DBA) on COBOL-based IMS business systems. Before 1982, I worked in the Australian Public Service, first as a DBA in the Dept of Health (responsible for IMS mainframe systems) and latterly as a Senior Research Officer 2 in the Bureau of Transport Economics.
Specialties: I have the ability to extract the essence of significant technical developments and present it for general consumption, at various levels, without compromising the underlying technical truth.

Have Your Say: