Less trust, more randomness

June 3, 2019

Continuing my quest to catch up on explaining our research results, here is another paper we put out back in April, in collaboration with colleagues in Geneva and Brussels: https://arxiv.org/abs/1904.04819

Like the one I wrote about last week, it is about how to use quantum physics to generate strong random numbers. In that work, we put a general bound on how much randomness one could possibly generate in any setup where the measurements are not trusted. Here, we go the other way and present a specific scheme which generates randomness with untrusted devices – and we implement it experimentally.

Quoting from my last post, random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I have practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to have to assume as little as possible about what this ‘anyone’ might know about the devices used to make the numbers. The less we need to assume, the smaller the risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness unless we make assumptions about how much an observer knows. It is one of the big surprises of quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcomes of certain measurements even when you know all that can possibly be known about the devices used.
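To make that last point concrete, here is a minimal sketch (my own illustration, not code from the paper) of the textbook Born rule: even with complete knowledge of a qubit prepared in an equal superposition, the best anyone can do is assign 50/50 odds to the measurement outcome.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit in the equal superposition (|0> + |1>)/sqrt(2), written as amplitudes.
# These amplitudes encode complete knowledge of the state; nothing is hidden.
state = np.array([1, 1]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2          # -> [0.5, 0.5]

# Simulate measurements in the computational basis: each outcome is unpredictable.
outcomes = rng.choice([0, 1], size=10, p=probs)
print(probs, outcomes)
```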

In fact, quantum physics allows us to guarantee randomness under a range of different assumptions about the devices used. At one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case the devices need to be well characterised, but they are relatively easy to implement, and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum random number generators operate in this regime. At the other end of the scale, essentially nothing is assumed about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and it offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality, which is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates achieved so far for device-independent generation of random numbers are relatively low (a few bits per minute).
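As an illustration of what ‘violating a Bell inequality’ means operationally, here is a sketch of the simplest (CHSH) version, using toy data rather than anything from our experiment: two devices each receive a binary input and return a ±1 outcome, and one computes the combination S of correlations below. Devices that behave classically (pre-programmed or deterministic) always satisfy |S| ≤ 2, while quantum devices can reach up to 2√2, and it is that gap which certifies randomness without trusting the devices.

```python
import numpy as np

def correlator(a, b):
    """E[a*b] for ±1-valued outcome arrays a, b."""
    return np.mean(a * b)

def chsh_value(data):
    """data[(x, y)] = (a, b): ±1 outcome arrays recorded when the inputs were (x, y)."""
    E = {xy: correlator(a, b) for xy, (a, b) in data.items()}
    # CHSH combination: classical devices obey |S| <= 2, quantum up to 2*sqrt(2).
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Toy data sampled with the ideal quantum correlations E(x, y) = ±1/sqrt(2).
rng = np.random.default_rng(0)
n = 100_000
data = {}
for x in (0, 1):
    for y in (0, 1):
        E = (1 if (x, y) != (1, 1) else -1) / np.sqrt(2)
        p_same = (1 + E) / 2                      # probability that a == b
        same = rng.random(n) < p_same
        a = rng.choice([-1, 1], size=n)
        b = np.where(same, a, -a)
        data[(x, y)] = (a, b)

print(chsh_value(data))   # ~ 2*sqrt(2) ~ 2.83, above the classical limit of 2
```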

In between the two extremes, there is plenty of room to explore – to look for a good set of assumptions which gives a strong guarantee on the randomness but still allows for reasonable rates to be realised in practice.

One would like the assumptions to be well justified physically. Ideally, this means they should be something one can check by measuring. A nice route towards this goal was pointed out by Thomas van Himbeeck and co-workers (https://arxiv.org/abs/1612.06828). They considered prepare-and-measure setups with two devices: one prepares quantum states, the other measures them. They showed that even when the measurement device is untrusted, one can still certify the quantum behaviour of the experiment just from the observed data, provided that the energy of the prepared states is bounded.

The energy can be measured, and so it is possible to check whether this assumption holds in a given experiment. In our experimental implementation, the prepared states correspond to pulses of laser light with different intensities, and they are measured by a detector which just distinguishes between the presence or absence of photons (single quanta of light). This way, we can generate millions of random bits per second with a very strong guarantee on how unpredictable they are. A user can verify in real time that the setup works correctly, based on the detector output and a bound on the energy in the laser pulses, which can itself be justified directly from measurements.
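To give a feel for what the raw data in such a setup looks like, here is a toy model (again my own sketch, not the paper’s analysis): a laser pulse with mean photon number mu hitting an ideal click/no-click detector fires with probability 1 − exp(−mu), so the two pulse intensities produce two distinguishable click statistics. The actual guarantee in the paper comes from bounding an eavesdropper’s guessing probability given these statistics and the energy bound, which this sketch does not attempt; the intensities and detector parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def click_prob(mu, efficiency=1.0, dark=0.0):
    """Probability that a threshold detector clicks on a coherent pulse
    with mean photon number mu (Poisson photon-number statistics)."""
    return 1 - (1 - dark) * np.exp(-efficiency * mu)

# Two pulse intensities (mean photon numbers), both kept below an energy bound.
mu_low, mu_high = 0.1, 1.0
n = 1_000_000

# The preparation device picks one of the two intensities at random each round.
choice = rng.integers(0, 2, size=n)
mu = np.where(choice == 0, mu_low, mu_high)

# The (untrusted) detector reports click / no-click for each pulse.
clicks = rng.random(n) < click_prob(mu)

for label, m in (("low", mu_low), ("high", mu_high)):
    observed = clicks[mu == m].mean()
    print(f"{label} intensity: predicted {click_prob(m):.3f}, observed {observed:.3f}")
```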

Compared with earlier works (by myself and others), we’ve made the assumptions required to guarantee randomness much easier to justify, without losing very much on the rate. So, we’ve improved the trade-off between trust in the devices (how strong the randomness is) and the random bit rate (how much randomness we get per unit time).

Published paper: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062338