How random can a black box be?

May 25, 2019

I am way, way behind on writing human-readable summaries of our new research results. Somehow, I’ve been continuously super busy since moving back to Denmark… I’ll try to start catching up, though.

As a start, here is one that was published recently: https://doi.org/10.1103/PhysRevA.99.052338. It was done in collaboration with Marie Ioannou and Nicolas Brunner and is about how much randomness can be generated from a quantum black box. The open-access arXiv preprint is here: https://arxiv.org/abs/1811.02313.

Random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I have practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ might know about the devices used to make the numbers. The less we need to assume, the less the risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness unless we make assumptions about how much an observer knows. Not so in the quantum world. It is one of the profound implications of quantum physics that there is fundamental randomness in nature. In quantum mechanics, it is impossible to predict the outcomes of certain measurements even when you know everything that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of assumptions of varying strength about the devices used.

On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement, and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum random number generators operate in this regime.
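As a toy illustration of this trusted regime (my sketch here, not a scheme from the paper): if we trust that the device prepares the superposition state |+⟩ = (|0⟩ + |1⟩)/√2 and measures it in the {|0⟩, |1⟩} basis, the Born rule gives each outcome probability 1/2, so every round yields one fresh random bit.

```python
import numpy as np

rng = np.random.default_rng()  # classical stand-in for the physical measurement

# Trusted description: state |+> measured in the {|0>, |1>} basis.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
p0 = abs(plus[0]) ** 2           # Born rule: probability of outcome 0
outcomes = rng.random(10) < p0   # simulate ten measurement rounds
print(p0, outcomes.astype(int))  # p0 = 0.5 -> one random bit per round
```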

On the other end of the scale, essentially nothing is assumed to be known about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’ and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality, which is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates achieved so far for device-independent generation of random numbers are relatively low (a few bits per minute).
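To get a feeling for how statistics alone can certify randomness, here is a small calculation using a well-known bound from the device-independent literature (Pironio et al., Nature 464, 1021 (2010)) – not a result of our paper: observing a CHSH Bell value S > 2 certifies at least 1 − log2(1 + √(2 − S²/4)) random bits per round.

```python
import math

def di_random_bits(S: float) -> float:
    """Certified random bits per round from CHSH value S (Pironio et al. 2010)."""
    assert 2 < S <= 2 * math.sqrt(2), "requires a Bell inequality violation"
    # max(0.0, ...) guards against floating-point rounding at maximal violation
    return 1 - math.log2(1 + math.sqrt(max(0.0, 2 - S**2 / 4)))

print(di_random_bits(2.1))               # small violation -> ~0.04 bits
print(di_random_bits(2 * math.sqrt(2)))  # maximal violation -> 1 bit
```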

In between these two extremes, there is a wealth of possibilities for trading off how much is assumed about the devices, how fast randomness can be generated, and how strong a guarantee can be made about it. In particular, many proposals have explored settings where the measurement device is uncharacterised but something may be known about the quantum states being measured.

In this work, we derive a universal upper bound on how much randomness can be generated in any such scenario. It turns out to be harder to generate randomness in this manner than one might first think.

In particular, one might intuitively think that the randomness can always be increased by increasing the number of possible measurement outcomes. If I flip a coin, there are two outcomes (heads or tails), so the probability of guessing the outcome correctly is one in two. In this case, one bit of randomness is generated per flip. If instead I roll a die, there are six possible outcomes, and the probability of guessing correctly is only one in six. The outcome is more unpredictable, so more than one bit of randomness is generated (about 2.6 bits, in fact). One could keep increasing the number of outcomes, and it seems the randomness would also keep increasing.
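In information-theoretic terms (this is the standard measure, not something specific to our paper), the randomness per round is quantified by the min-entropy: if p_guess is the best probability of guessing the outcome, the number of random bits is H_min = −log2(p_guess).

```python
import math

def min_entropy(p_guess: float) -> float:
    """Random bits per round, given the best guessing probability."""
    return -math.log2(p_guess)

print(min_entropy(1 / 2))  # fair coin: 1.0 bit
print(min_entropy(1 / 6))  # fair die: ~2.585 bits
```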

For devices that are completely trusted, this is indeed the case. However, if the measurement device is uncharacterised, this intuition turns out to be wrong. The amount of randomness that can be guaranteed is limited not just by the number of outputs, but also by the number of quantum states being measured – i.e. the number of inputs. Thus, no matter how ingenious a scheme one may come up with, for a fixed number of inputs the randomness that can be generated per round is limited. In fact, the number of inputs required grows very fast with the number of desired random bits (exponentially fast, to be precise).
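To see what this exponential scaling means in practice, here is a back-of-the-envelope sketch. Suppose, purely for illustration, that the certifiable randomness with n inputs is capped at c·log2(n) bits for some constant c (the logarithmic form just restates the exponential scaling above; the exact bound is in the paper). Then certifying r bits per round requires at least 2^(r/c) inputs:

```python
import math

def inputs_needed(r_bits: float, c: float = 2.0) -> int:
    """Inputs needed if randomness is capped at c * log2(n) bits.

    The value c = 2 is assumed here purely for illustration.
    """
    return math.ceil(2 ** (r_bits / c))

for r in (2, 10, 20):
    print(f"{r} bits/round -> at least {inputs_needed(r)} inputs")
```

Even modest per-round rates quickly demand an impractical number of input states, which is why few-input, high-repetition-rate schemes look more promising.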

This means that while generating many random bits per round is still possible in theory, it would probably not be practical, because of the large number of inputs required. Instead, to get high rates, one can focus on identifying experimentally friendly schemes with relatively few inputs that allow a high repetition rate (many rounds per second). For few inputs, we show that our bound can be reached – that is, there exist schemes which achieve the maximum possible randomness per round. This is probably true for any number of inputs.

Published paper: https://doi.org/10.1103/PhysRevA.99.052338