Cause and effect in the quantum world

November 19, 2014

Today we have a new paper out on the arXiv, linking the curious phenomenon of nonlocality in quantum physics to techniques from causal inference: https://arxiv.org/abs/1411.4648 .

In medicine, one would often like to know whether something is the cause of something else: for example, whether a drug remedies a given symptom, or whether a given food leads to a certain illness. When possible, one would like to distinguish between causation and mere correlation. For example, diabetes is associated with high blood pressure. But does high blood pressure cause diabetes? Does diabetes cause high blood pressure? Or is there perhaps some common factor which causes both? The field of causality, a relatively young subfield of probability theory and statistics, deals with these kinds of questions in rigorous mathematical terms. In particular, there is an elegant graphical approach which makes such questions precise by expressing them in terms of, well… graphs! In our paper, we borrow this machinery from causal inference and apply it in a very different setting – quantum physics.
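To give a flavour of the graphical approach, here is a minimal sketch in Python of the diabetes example above, under the common-cause hypothesis. The graph, the variable names, and the use of the networkx library's d-separation test are just my illustration, not anything from the paper:

```python
# A minimal sketch of the graphical approach, using the diabetes /
# blood-pressure example from above. Requires networkx (2.8 or later).
import networkx as nx

# Candidate causal structure: a common factor U causes both conditions.
G = nx.DiGraph()
G.add_edges_from([
    ("U", "Diabetes"),       # hidden common cause
    ("U", "BloodPressure"),
])

# d-separation reads conditional independences off the graph:
# Diabetes and BloodPressure are correlated (not d-separated) ...
print(nx.d_separated(G, {"Diabetes"}, {"BloodPressure"}, set()))  # False

# ... but become independent once the common cause U is known.
print(nx.d_separated(G, {"Diabetes"}, {"BloodPressure"}, {"U"}))  # True
```

Conditional independences like these are exactly the kind of observable constraints that a causal graph implies.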

In quantum physics, observations can be at odds with our everyday understanding of the world. By measuring particles in so-called entangled states, two separate experimenters can obtain correlated data which is not compatible with any classical explanation obeying a few very natural assumptions: (i) that each experimenter can choose freely what measurement to make, independently of the other experimenter and of how the particles were prepared, and (ii) that the results obtained by one experimenter cannot be influenced by any action of the other. These can be understood as assumptions about the causal structure of the experiment: (i) says that neither the other experimenter nor the source of particles can be a cause of the first experimenter's choice of measurement; (ii) says, for example, that the first experimenter's measurement choice cannot be a cause of the other experimenter's outcome. Any classical model which attempts to explain the observations in terms of underlying variables and which obeys (i) and (ii) is bound to fail. This was first pointed out by John Bell in the 1960s, and has by now been confirmed in many experiments. The fact that such classical models can be ruled out by experimental observation is both a marvel of nature and the basis for quantum cryptography and random number generation, which I have recently written about.
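For readers who want something quantitative: the canonical example is the CHSH test, where any classical model obeying (i) and (ii) satisfies a bound which measurements on entangled states violate. A small numerical sketch (standard textbook material, not code from our paper):

```python
# The CHSH quantity: any classical model obeying (i) and (ii) has |S| <= 2,
# while measurements on the two-qubit singlet state reach 2*sqrt(2).
import numpy as np

def singlet_correlation(theta_a, theta_b):
    # For the singlet state, the correlation between outcomes of spin
    # measurements along angles theta_a and theta_b is -cos(difference).
    return -np.cos(theta_a - theta_b)

# Standard CHSH-optimal measurement angles.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = (singlet_correlation(a0, b0) - singlet_correlation(a0, b1)
     + singlet_correlation(a1, b0) + singlet_correlation(a1, b1))
print(abs(S))  # 2.828... = 2*sqrt(2), beyond the classical bound of 2
```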

In this paper, we develop a framework for systematically quantifying how different causal models restrict the possible observations. Causal models can be represented by graphs known as Bayesian networks. Essentially, one draws a picture with a symbol for each of the relevant parameters, such as the experimenters' measurement choices and outcomes and the underlying classical variables, and then draws arrows between them representing possible cause and effect. Based on such pictures, one can say quite a lot about what the models imply on the level of observed data. For example, rather than just saying that no classical explanation with causal structure (i) and (ii) can explain the data, one might ask how much (i) or (ii) has to be relaxed for such a model to explain the data. We show that questions of this type can in many cases be formulated as linear programs, which means that their answers can be computed efficiently using standard techniques. Altogether, we think the framework could prove to be a very useful tool.
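To give a concrete feel for the linear-programming step, here is a minimal sketch (my own toy version, not code from the paper). It checks whether given correlations admit any classical causal model obeying (i) and (ii), by asking, as a linear-programming feasibility problem, whether the observed distribution can be written as a mixture of the finitely many deterministic strategies:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Enumerate the 16 deterministic classical strategies: the hidden variable
# fixes Alice's outcome a(x) and Bob's outcome b(y) for each setting x, y.
strategies = list(itertools.product((0, 1), repeat=4))  # (a0, a1, b0, b1)

def strategy_distribution(s):
    a0, a1, b0, b1 = s
    p = np.zeros((2, 2, 2, 2))  # indices: a, b, x, y
    for x in (0, 1):
        for y in (0, 1):
            p[(a0, a1)[x], (b0, b1)[y], x, y] = 1.0
    return p.flatten()

A_eq = np.column_stack([strategy_distribution(s) for s in strategies])

def target_distribution(E):
    # p(a,b|x,y) = (1 + (-1)^(a+b) * E[x,y]) / 4 for correlations E.
    p = np.zeros((2, 2, 2, 2))
    for a, b, x, y in itertools.product((0, 1), repeat=4):
        p[a, b, x, y] = (1 + (-1) ** (a + b) * E[x, y]) / 4
    return p.flatten()

def has_local_model(E):
    # Feasible <=> some mixture of deterministic strategies explains the data.
    res = linprog(c=np.zeros(16), A_eq=A_eq, b_eq=target_distribution(E),
                  bounds=[(0, None)] * 16, method="highs")
    return res.status == 0

v = 1 / np.sqrt(2)
E_quantum = np.array([[v, -v], [v, v]])  # singlet correlations, CHSH = 2*sqrt(2)
print(has_local_model(E_quantum))        # False: no classical explanation
print(has_local_model(0.7 * E_quantum))  # True: enough noise restores one
```

Roughly speaking, the relaxations in the paper work similarly: one adds a variable quantifying how much (i) or (ii) is violated and minimises it subject to the same kind of linear constraints.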

Published paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.140403

Measuring magnets: yes, you can do better with quantum

November 5, 2014

We have another new paper out on the arXiv today: https://arxiv.org/abs/1411.0716

This one is about measuring magnetic fields with very high precision by harnessing quantum effects, and more generally about showing that quantum effects can be useful for precision measurements even when some noise is present.

Measuring magnetic fields precisely is useful for imaging brain activity, as well as lots of other applications (see for example this list on Wikipedia: https://en.wikipedia.org/wiki/Magnetometer#Uses). More broadly, precise estimation of parameters is fundamental in science. For example, the most precise clocks we have are atomic clocks, which are based on measuring the frequency of an atomic transition. Another example is large experiments like LIGO and GEO600, which are looking for signs of gravitational waves. They split a laser beam in two, send the parts along different directions, and then look for a tiny phase difference between them when they are reflected back.

It has long been known that in the estimation of a phase or a frequency, the precision can in principle be improved a lot by harnessing quantum effects. Imagine that the parameter is estimated using N probe particles. Classically, each of these probe particles senses the parameter independently. Taking the average of measurements on each particle, the uncertainty in your best estimate then goes down as 1/√N (this follows from a very general result about probabilities known as the Central Limit Theorem). On the other hand, if the N particles are prepared in a so-called entangled quantum state, then one can arrange things so that the estimation error goes down as 1/N. This is a quadratic improvement in precision. If there are N = 10^12 probe particles, as one might have for example in an atomic magnetometer, this is an improvement in precision by a factor of one million!
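Here is a quick numerical illustration of the classical 1/√N scaling (a toy simulation of my own, just to make the Central Limit Theorem statement concrete; the ideal entangled-state 1/N scaling is printed alongside for comparison):

```python
# Toy model: each classical probe yields an independent noisy reading of the
# unknown parameter phi with unit variance. Averaging N probes gives an
# estimate whose error shrinks as 1/sqrt(N), by the Central Limit Theorem.
import numpy as np

rng = np.random.default_rng(0)
phi = 0.3        # the unknown parameter to be estimated
trials = 2000    # repetitions used to estimate the spread of the estimator

for N in (100, 400, 1600):
    estimates = rng.normal(phi, 1.0, size=(trials, N)).mean(axis=1)
    print(f"N = {N:5d}: classical error ~ {estimates.std():.4f}  "
          f"(1/sqrt(N) = {1/np.sqrt(N):.4f}, ideal quantum 1/N = {1/N:.4f})")
```

Quadrupling N only halves the classical error, whereas the ideal entangled strategy would cut it by a factor of four.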

The quadratic improvement, however, holds in an ideal case free of the noise and imperfections which are always present in real systems. More recently, researchers have shown that, quite generally, as soon as you put a little bit of noise into the system, the quadratic improvement goes away. The estimation error you get with quantum probes goes down as 1/√N, just as in the classical case. Quantum methods may still improve the precision by a constant factor, but the improvement does not scale with N.
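A simple toy model (an illustration of the general phenomenon, not the specific noise model from our paper) shows how this can happen. Suppose each probe independently retains only a fraction η < 1 of its coherence. For a GHZ-type entangled state the N-particle signal is suppressed by η^N, so the single-shot error behaves roughly like 1/(N η^N), which stops improving beyond some finite N:

```python
# Toy model: per-probe dephasing leaves coherence eta < 1, so the GHZ-state
# signal is suppressed by eta**N and its phase error ~ 1/(N * eta**N) stops
# improving, while the classical error keeps falling as 1/sqrt(N).
import numpy as np

eta = 0.99
for N in (10, 100, 1000):
    ghz_error = 1.0 / (N * eta ** N)
    classical_error = 1.0 / np.sqrt(N)
    print(f"N = {N:4d}: GHZ ~ {ghz_error:.4f}, classical ~ {classical_error:.4f}")
```

Beyond the optimal N the entangled strategy actually gets worse, while the classical estimate keeps improving, so the scaling advantage is lost.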

This is a bit disappointing. Fortunately though, there are some loopholes in the ‘quite generally’ of these no-go results. That is, there are cases which are not covered and where quantum may still give a scaling advantage. The question is, are these exceptions relevant in practice? And if so, how much of an advantage can you still get?

In our paper we show that one of the noise models which isn't covered seems to describe well an actual atomic magnetometry setup which was realised recently, and we show that with reasonable entangled states and measurements which can be done in the lab, one still gets a scaling quantum advantage. This is nice because it shows that quantum techniques still have a large potential to improve precision measurements. Depending on experimental parameters, the improvement for the specific magnetometer could be anything from ten-fold to thousand-fold. It is also nice because even if there is a mismatch between our noise model and the noise in the real experiment, so that the no-go results do apply, the constant (non-scaling) improvement offered by quantum techniques can still be large provided the mismatch is not too big.

Published paper: https://journals.aps.org/prx/abstract/10.1103/PhysRevX.5.031010