How tough are quantum correlations?

December 20, 2014

Yesterday we had another paper out on the arXiv (the last of 2014 for me):

As I have written about in previous posts, quantum physics allows for correlations which are, in a certain sense, stronger than any possible in classical physics. These so-called nonlocal correlations can be exploited for cryptography and for the generation of random numbers, as I’ve explained here and here. They are also interesting in their own right, as a natural phenomenon which is hard to grasp and goes against intuition based on our everyday experience of the world.

While applications of nonlocality are already being developed, there are still many things about it which we have not fully understood. In this paper we study the robustness of nonlocal correlations to loss. We do this for a certain type of highly entangled quantum states known as Dicke states. These states can be understood, for example, as states of excitations stored in atoms. Take a large number of atoms which are all in their lowest energy state. Now put some of the atoms in an excited state with higher energy. Dicke states are states where the number of excitations is fixed, but all possible combinations of which atoms are in the ground state and which are excited are explored (the atomic ensemble is in a coherent superposition of all these possibilities). Dicke states give rise to nonlocal correlations, and we study how robust this nonlocality is to particle loss – i.e. how many atoms can one lose before the nonlocality disappears? – and to loss of excitations – i.e. if an excited atom has some probability of decaying back to the ground state, how much decay can the nonlocality tolerate?
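To make the definition concrete, here is a small sketch (a toy illustration, not code from the paper) of how to write down a Dicke state of n atoms with k excitations as an equal-weight superposition of all basis states with exactly k atoms excited:

```python
# Sketch of a Dicke state of n atoms with k excitations: an equal
# superposition of all C(n, k) basis states with exactly k atoms excited.
from itertools import combinations
from math import comb, sqrt

import numpy as np

def dicke_state(n, k):
    psi = np.zeros(2 ** n)
    for excited in combinations(range(n), k):
        idx = sum(1 << i for i in excited)  # bitmask of which atoms are excited
        psi[idx] = 1.0
    return psi / sqrt(comb(n, k))  # normalise over the C(n, k) terms

psi = dicke_state(4, 2)  # 4 atoms, 2 excitations: 6 equal-weight terms
```

The superposition is over which atoms carry the excitations, not over how many – that number is fixed, which is what makes these states special.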

Published paper:

Cause and effect in the quantum world

November 19, 2014

Today we have a new paper out on the arXiv, linking the curious phenomenon of nonlocality in quantum physics to techniques from causal inference.

In medicine, one would often like to know whether something is the cause of something else. For example, whether a drug remedies a given symptom, or whether a given food leads to a certain illness. When possible, one would like to distinguish between causation and mere correlation. For example, diabetes is associated with high blood pressure. But is high blood pressure causing diabetes? Or does diabetes cause high blood pressure? Or is there perhaps some common factor which causes both of them? The field of causality, which is a relatively young subfield of probability theory and statistics, deals with this kind of question in rigorous mathematical terms. In particular, there is an elegant graphical approach, which makes such questions clear by expressing them in terms of, well… graphs! In our paper, we borrow the machinery from causality and apply it in a very different setting – quantum physics.

In quantum physics, observations can be at odds with our everyday understanding of the world. By measuring particles in so-called entangled states, two separate experimenters can obtain correlated data which is not compatible with any classical explanation that obeys a few very natural assumptions. In particular, (i) that each experimenter can choose freely what measurement to make, independent of the other experimenter and of how the particles were prepared, and (ii) that the results obtained by one experimenter cannot be influenced by any action of the other. These can be understood as assumptions about the causal structure of the experiment: (i) says that neither the other experimenter nor the source of particles can be a cause of the first experimenter’s choice of measurement. (ii) says e.g. that the first experimenter’s measurement choice cannot be a cause of the other experimenter’s outcome. Any classical model which attempts to explain the observations in terms of underlying variables and which obeys (i) and (ii) is bound to fail. This was first pointed out by John Bell in the 60s, and has by now been confirmed in lots of experiments. The fact that such classical models can be ruled out by experimental observation is both a marvel of nature, and also the basis for quantum cryptography and random number generation, which I have recently written about.

In this paper, we develop a framework for dealing with the ways that different causal models restrict possible observations, in a systematic and quantifiable way. Causal models can be represented systematically using graphs, which represent Bayesian networks. Essentially, one draws a picture with a symbol for each of the relevant parameters, such as the experimenters’ measurement choices and outcomes, and the underlying classical variables, and then draws arrows between them representing possible cause and effect. Based on such pictures, one can then say quite a lot about what the models imply on the level of observed data. For example, rather than just saying that no classical explanation with causal structure (i) and (ii) can explain the data, one might ask how much (i) or (ii) has to be relaxed for such a model to explain the data. We show that questions of this type can in many cases be formulated as linear programs, which means that their answers can be computed efficiently using standard techniques. All in all, we think the framework could prove to be a very useful tool.
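To give a flavour of the linear-programming idea (this is a toy version, not the paper’s actual formulation): asking whether observed correlations admit a classical causal model obeying (i) and (ii) amounts to asking whether a feasible distribution over deterministic strategies exists, which is exactly an LP feasibility problem.

```python
# Toy LP: do four observed correlators E(x, y) admit a local causal
# model, i.e. a probability distribution over the 16 deterministic
# strategies that reproduces them?
from itertools import product

import numpy as np
from scipy.optimize import linprog

def is_local(E):
    """E[x][y]: expected product of the +/-1 outcomes for settings x, y."""
    strategies = list(product([-1, 1], repeat=4))  # (a0, a1, b0, b1)
    A_eq, b_eq = [], []
    for x, y in product([0, 1], repeat=2):
        A_eq.append([s[x] * s[2 + y] for s in strategies])
        b_eq.append(E[x][y])
    A_eq.append([1.0] * len(strategies))  # weights form a distribution
    b_eq.append(1.0)
    res = linprog(c=np.zeros(len(strategies)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(strategies))
    return res.success  # feasible <=> a local model exists

r = 1 / np.sqrt(2)
quantum = [[r, r], [r, -r]]    # CHSH value 2*sqrt(2) > 2: no local model
classical = [[1, 1], [1, 1]]   # achievable deterministically: local
```

Relaxing assumptions like (i) or (ii) then corresponds to loosening the LP constraints, and the minimal relaxation needed becomes the LP’s optimal value.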

Published paper:

Measuring magnets: yes, you can do better with quantum

November 5, 2014

We have another new paper out on the arXiv today:

This one is about measuring magnetic fields with very high precision by harnessing quantum effects. And more generally about showing that quantum effects can be useful for precision measurements even when there is some noise present.

Measuring magnetic fields precisely is useful for imaging brain activity, as well as for lots of other applications (see for example this list on Wikipedia). More broadly, precise estimation of parameters is fundamental in science. For example, the most precise clocks we have are atomic clocks, which are based on measuring the frequency of an atomic transition. Other examples are big experiments like LIGO and GEO600, which are looking for signs of gravitational waves. They split a laser beam in two, send the parts along different directions, and then look for a tiny phase difference between them when they are reflected back.

It has long been known that, in the estimation of a phase or a frequency, the precision can in principle be improved a lot by harnessing quantum effects. Imagine that the parameter is estimated using N probe particles. Classically, each of these probe particles senses the parameter independently. Taking the average of measurements on each particle, the uncertainty in your best estimate then goes down with the square root of N (this follows from a very general result about probabilities known as the Central Limit Theorem). On the other hand, if the N particles are prepared in a so-called entangled quantum state, then one can arrange things so that the estimation error goes down linearly with N. There is a quadratic improvement in precision. If there are N = 10^12 probe particles, as one might have for example in an atomic magnetometer, then this is an improvement in precision by a factor of one million!
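The factor of one million can be checked directly (a back-of-envelope sketch, assuming the ideal noiseless scalings stated above):

```python
# Shot-noise (classical, independent probes) vs Heisenberg (ideal
# entangled probes) scaling of the estimation error with N probes.
import math

N = 10**12                             # e.g. atoms in an atomic magnetometer
shot_noise_error = 1 / math.sqrt(N)    # central-limit-theorem scaling
heisenberg_error = 1 / N               # ideal entangled-state scaling
improvement = shot_noise_error / heisenberg_error  # ratio = sqrt(N) = 10^6
```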

The quadratic improvement, however, holds in an ideal case, free of the noise and imperfections which are always present in real systems. More recently, researchers have shown that, quite generally, as soon as you put a little bit of noise into the system, the quadratic improvement goes away. The estimation error you get with quantum probes goes down with the square root of N, just as in the classical case. Quantum methods may still improve the precision by a constant factor, but it is not going to scale with N.

This is a bit disappointing. Fortunately though, there are some loopholes in the ‘quite generally’ of these no-go results. That is, there are cases which are not covered and where quantum may still give a scaling advantage. The question is, are these exceptions relevant in practice? And if so, how much of an advantage can you still get?

In our paper we show that one of the noise models which isn’t covered seems to apply well to an actual atomic magnetometry setup which was realised recently, and that with reasonable entangled states and measurements which can be done in the lab, one still gets a scaling quantum advantage. This is nice because it shows that quantum techniques still have a large potential to improve precision measurements. Depending on experimental parameters, the improvement for the specific magnetometer could be anything from ten-fold to thousand-fold. It is also nice because even if there is a mismatch between our noise model and the noise in the real experiment, such that the no-go results do apply, the constant (non-scaling) improvement that quantum techniques give can still be large, provided the mismatch is not too big.

Published paper:

Randomness from quantum light

October 29, 2014

Got another paper out today, here:

It’s about making random numbers based on quantum optics. If you read my last post (if not, go read it 🙂 ) then you know that guaranteeing something is random isn’t as easy as it sounds. Something may seem random to you but perfectly non-random to someone else. Say I’m a magician and I practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone, no matter how much extra knowledge they have of the physical systems used to generate them.

Remarkably, this is actually possible in quantum physics: it is impossible to predict the outcomes of some quantum measurements even when you know all there is to know about the devices used. So if you know what your quantum system is and what measurement is being made on it, it is possible to certify randomness. This is the basis for commercial quantum random number generators (yes, you can actually buy such a thing).

What is even more remarkable, though, is that randomness can be certified even when you know essentially nothing about what is being measured or what the system is. In quantum physics, correlations generated by some experiments can be stronger than anything classical experiments can generate, and this shows up at the level of the data. I can take two black boxes that take some inputs, e.g. they have some buttons you can press, and give some outputs, for example some lights that light up when you press the buttons. After playing with them for a while, I can gather statistics about which lights light up when certain buttons are pressed. If these statistics violate a so-called Bell inequality, then we know for sure that what is going on inside those boxes must be quantum. We don’t need to look inside the boxes to know this – no classical box could generate the same statistics. What is more, there is guaranteed to be some randomness in the statistics, which we can extract.
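The best-known Bell inequality, CHSH, makes this test very concrete. A minimal sketch (with idealised numbers, not data from any experiment): from the four measured correlators one computes a single combination, which no classical boxes can push above 2.

```python
# The CHSH test on black-box statistics: E[x][y] is the measured
# correlation between the +/-1 outcomes when buttons x and y are pressed.
# Classical boxes obey |S| <= 2; quantum boxes can reach 2*sqrt(2).
import math

def chsh(E):
    return E[0][0] + E[0][1] + E[1][0] - E[1][1]

r = 1 / math.sqrt(2)         # ideal quantum statistics at optimal settings
S = chsh([[r, r], [r, -r]])  # S = 2*sqrt(2): certifiably non-classical
```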

This idea is known as device-independent randomness generation, because the guarantees on the randomness do not depend on knowing anything about what is inside the black boxes. It’s a beautiful idea, and it has been around for a while, but so far there has only been one experiment which implemented it. It generated 42 random bits over about one month. Not exactly a high rate! The reason is that it is very hard to violate a Bell inequality in practice without throwing away some of the experimental data. Real experiments have losses and imperfections – sometimes the detectors in the experiment just don’t click. In the black-box picture, you press a button but no light lights up. Many experiments have violated Bell inequalities by disregarding those experimental runs. This is fine for some purposes, but for randomness generation it is a big no-go. You cannot have any device-independent guarantees on the randomness if you do that.

Fortunately, last year new experiments with light and photodetectors were finally able to get a Bell violation without throwing away any data. This is nice because experiments with light can reach much higher rates than those 42 bits per month, which were generated by measuring on trapped ions. In our paper we analyse just how much randomness one can optimally extract from such optical experiments, taking realistic imperfections into account. We use some technical results which allow us to look not just at one Bell inequality, but at all possible Bell inequalities at once, and to optimise over them.

Published paper:

A self-testing quantum random number generator

October 13, 2014

New paper out on arXiv:

We’ve built a machine that makes random numbers. Sounds easy? True randomness is quite tricky… Here is a bit of background and the idea of the paper:

Random numbers are important for quite a few applications, including cryptography (e.g. keeping your credit card details safe), computer simulations (of anything from your local weather report to astrophysics), and gambling.

In particular, in cryptography it is very important that your random numbers cannot be predicted by anyone else. If someone else can guess the numbers they can hack you. A few years ago, quite a few keys used on the internet were broken exactly because the randomness used to generate them was not good enough.

The easiest, and most common, way to generate random numbers is to take some input which is likely to be quite random, such as the timing of keystrokes, the time of day, temperature, etc., and then run it through a computer algorithm which spits out bits that look random. This works fine for some purposes, but although the output looks random, it is really completely determined by the input, since anything the computer can do amounts to applying some fixed set of rules. If the inputs are not picked very carefully, the output may be less random than you think, and security can be compromised, as happened a couple of years back.
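This determinism is easy to demonstrate (a sketch with a hypothetical clock-derived seed): anyone who can guess the seed reproduces the "random" output exactly.

```python
# A deterministic PRNG seeded from a guessable input produces a fully
# reproducible stream: "random-looking" is not random to someone who
# knows (or can guess) the seed.
import random

seed = 1414800000                  # hypothetical timestamp-derived seed
user = random.Random(seed)
attacker = random.Random(seed)     # an attacker who guesses the seed

user_bits = [user.getrandbits(8) for _ in range(16)]
attacker_bits = [attacker.getrandbits(8) for _ in range(16)]
# the attacker reproduces the user's "random" bytes exactly
```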

In fact, the problem is even more fundamental than that. Finding some truly random inputs to use is not easy at all. Think of flipping a coin, for example. The outcome seems to be random, but if you knew the coin’s initial state (position, velocity, speed of rotation, etc.) then the outcome could be predicted from Newtonian mechanics. This is true for all processes in classical physics – they are deterministic. So, we turn to the only place in nature where, as far as we know at the moment, we can get some true randomness: quantum physics.

In quantum physics, the outcomes of measurements are not predictable even in principle. That is, even if you know the initial state of a system perfectly, it is not possible to predict with certainty the outcomes of all measurements that can be made on it. That is very good from the perspective of creating randomness! If I generate my random numbers by measuring a quantum system, even if an attacker knew everything about how my system works and all the input parameters I use, there is no way he could guess my random bits. This is great, and in fact it is already used commercially – you can go and buy a quantum random number generator.

In practice, to have a guarantee not just that your system is random, but of how random it is, you need to characterise it quite well. Imagine that you are generating random numbers from a classical coin. Ideally, heads and tails are equally likely, but for any real coin there will be a slight bias towards one or the other. Similarly for a quantum random process, not all outcomes will be equally likely, and this must be accounted for when extracting randomness. This can be done, but may be a little cumbersome, and in particular, if the characteristics of the device change over time, the guarantees that you had initially may no longer hold.
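As an illustration of what "accounting for bias" means (this is von Neumann’s classic textbook trick, not the protocol from our paper): read biased but independent bits in pairs, keep the first bit when the pair differs, and discard equal pairs.

```python
# Von Neumann's trick for turning biased (but independent) bits into
# unbiased ones: "10" -> 1, "01" -> 0, "00"/"11" discarded. Since
# P(10) = P(01) for independent flips, the output is unbiased.
import random

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

rng = random.Random(42)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(10000)]  # ~80% ones
unbiased = von_neumann_extract(biased)
# the output frequency of ones is ~0.5, regardless of the input bias
```

The cost is throughput: the more biased the source, the more pairs get discarded, which is one reason a real-time, self-adjusting approach is attractive.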

Here is where our new work comes in: we have a way to guarantee and quantify the randomness of the output based on only a few general assumptions about the physical process. That is, in our approach you do not need to characterise the system very carefully. If there is a bias in the outcomes, the protocol will automatically correct for it, always ensuring that the output bits are completely random. Importantly, it does so in real time, so even if the bias drifts, the protocol adapts. We can see this in our experiment by switching the aircon in the lab on and off. The change in temperature influences the quantum system, changing the bias, and we see a jump in the rate of randomness generation.

Published Paper:

In the press: