# Measuring magnets: yes, you can do better with quantum

November 5, 2014

We have another new paper out on the arXiv today: https://arxiv.org/abs/1411.0716

This one is about measuring magnetic fields with very high precision by harnessing quantum effects. And more generally about showing that quantum effects can be useful for precision measurements even when there is some noise present.

Measuring magnetic fields precisely is useful for imaging brain activity as well as lots of other applications (see for example this list on Wikipedia https://en.wikipedia.org/wiki/Magnetometer#Uses). More broadly, precise estimation of parameters is fundamental in science. For example, the most precise clocks we have are atomic clocks which are based on measuring the frequency of an atomic transition. Another example is big experiments like LIGO and GEO600 which are looking for signs of gravitational waves. They split a laser beam in two, send the parts along different directions, and then look for a tiny phase difference between them when they are reflected back.

It has long been known that in the estimation of a phase or a frequency, the precision can in principle be improved a lot by harnessing quantum effects. Imagine that the parameter is estimated using N probe particles. Classically, each of these probe particles senses the parameter independently. Taking the average of measurements on each particle, the uncertainty in your best estimate then goes down with the square root of N (this follows from a very general result about probabilities known as the Central Limit Theorem). On the other hand, if the N particles are prepared in a so-called entangled quantum state, then one can arrange it so that the estimation error goes down linearly with N, i.e. as 1/N. There is a quadratic improvement in precision. If there are N = 10^12 probe particles, as one might have for example in an atomic magnetometer, then this is an improvement in precision by a factor of one million!
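The two scaling laws can be put into a small back-of-the-envelope sketch (purely illustrative; the code just evaluates the 1/√N and 1/N formulas from the text):

```python
import math

# Estimation error scaling with the number of probes N (idealised, noise-free):
# - independent (classical) probes: error ~ 1/sqrt(N)  ("standard quantum limit")
# - entangled probes:               error ~ 1/N        ("Heisenberg limit")
# The ratio of the two is the quantum improvement factor, sqrt(N).

def classical_error(N, base=1.0):
    """Error of the best estimate from N independent probes."""
    return base / math.sqrt(N)

def entangled_error(N, base=1.0):
    """Error of the best estimate from N entangled probes."""
    return base / N

N = 10**12  # a typical atom number for an atomic magnetometer
improvement = classical_error(N) / entangled_error(N)
print(improvement)  # sqrt(10^12) = 10^6: the millionfold factor from the text
```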

The quadratic improvement, however, holds in an ideal case free of the noise and imperfections which are always present in real systems. More recently, researchers have shown that quite generally, as soon as you put a little bit of noise in the system, the quadratic improvement goes away. The estimate error you get with quantum probes goes down with the square root of N, like in the classical case. Quantum methods may still improve the precision by a constant factor, but it is not going to scale with N.

This is a bit disappointing. Fortunately though, there are some loopholes in the ‘quite generally’ of these no-go results. That is, there are cases which are not covered and where quantum may still give a scaling advantage. The question is, are these exceptions relevant in practice? And if so, how much of an advantage can you still get?

In our paper we show that one of the noise models which isn’t covered seems to describe well an actual atomic magnetometry setup which was realised recently, and that with reasonable entangled states and measurements which can be done in the lab, one still gets a scaling quantum advantage. This is nice, because it shows that quantum techniques still have a large potential to improve precision measurements. Depending on experimental parameters, the improvement for this specific magnetometer could be anything from ten-fold to thousand-fold. It is also nice because even if there is some mismatch between our noise model and the noise in the real experiment, so that the no-go results do apply, the constant (non-scaling) improvement from quantum techniques can still be large as long as the mismatch is not too big.

Published paper: https://journals.aps.org/prx/abstract/10.1103/PhysRevX.5.031010

# Randomness from quantum light

October 29, 2014

Got another paper out today, here: https://arxiv.org/abs/1410.7629

It’s about making random numbers based on quantum optics. If you read my last post http://jonatanbohrbrask.dk/2014/10/13/a-self-testing-quantum-random-number-generator/ (if not, go read it 🙂 ) then you know that guaranteeing something is random isn’t as easy as it sounds. Something may seem random to you but perfectly non-random to someone else. Say I’m a magician and I practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone, no matter how much extra knowledge they have of the physical systems used to generate them.

Remarkably, this is actually possible in quantum physics: it is impossible to predict the outcome of some quantum measurements even when you know all there is to know about the devices used. So if you know what your quantum system is and what measurement is being made on it, it is possible to certify randomness. This is the basis for commercial quantum random number generators (yes, you can actually buy such a thing, https://www.idquantique.com/random-number-generation/products/).

What is even more remarkable, though, is that randomness can be certified even when you know essentially nothing about what is being measured or what the system is. In quantum physics, correlations generated by some experiments can be stronger than anything classical experiments can generate, and this shows up at the level of the data. I can take two black boxes that take some inputs, e.g. they have some buttons you can press, and give some outputs, e.g. some lights light up when you press the buttons. After playing with them for a while I can gather statistics about which lights light up when certain buttons are pressed. If these statistics violate a so-called Bell inequality, then we know for sure that what is going on inside those boxes must be quantum. We don’t need to look inside the boxes to know this – no classical box could generate the same statistics. What is more, there is guaranteed to be some randomness in the statistics, which we can extract.
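To make the button-and-lights picture concrete, here is a minimal sketch using the standard CHSH Bell inequality as an assumed example (the textbook case; the paper below considers Bell inequalities more generally):

```python
import math

# CHSH Bell test sketch. Each box has two settings; the correlation E(a, b)
# between the two boxes' outputs lies in [-1, 1]. Any classical (local
# hidden variable) boxes satisfy |S| <= 2, while quantum boxes can reach
# up to 2*sqrt(2) (Tsirelson's bound).

def E(a, b):
    """Quantum correlation for two spins in the maximally entangled state
    |Phi+>, measured along directions at angles a and b."""
    return math.cos(a - b)

a0, a1 = 0.0, math.pi / 2              # Alice's two measurement angles
b0, b1 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement angles

S = E(a0, b0) + E(a1, b0) + E(a1, b1) - E(a0, b1)
print(S)  # 2*sqrt(2) ~ 2.83 > 2: no classical boxes can produce this
```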

This idea is known as device-independent randomness generation, because the guarantees on the randomness do not depend on knowing anything about what is inside the black boxes. It’s a beautiful idea, and it has been around for a while, but so far there has only been one experiment which implemented it. It generated 42 random bits over about one month. Not exactly a high rate! The reason is that it is very hard to violate a Bell inequality in practice without throwing away some of the experimental data. Real experiments have losses and imperfections – sometimes the detectors in the experiment just don’t click. In the black-box picture, you press a button but no light lights up. Many experiments have violated Bell inequalities by disregarding those experimental runs. This is fine for some purposes, but for randomness generation it is a big no-go. You cannot have any device-independent guarantees on the randomness if you do that.

Fortunately, last year new experiments with light and photodetectors were finally able to get a Bell violation without throwing away any data. This is nice because experiments with light can reach much higher rates than those 42 bits per month, which were generated by measurements on trapped ions. In our paper we analyse just how much randomness one could optimally get out of such optical experiments, considering realistic imperfections and using some technical results which allow us to look not just at one Bell inequality but at all possible Bell inequalities at once, and to optimise over them.

Published paper: https://iopscience.iop.org/article/10.1088/1367-2630/17/2/022003

# A self-testing quantum random number generator

October 13, 2014

New paper out on arXiv: https://arxiv.org/abs/1410.2790

We’ve built a machine that makes random numbers. Sounds easy? True randomness is quite tricky… Here is a bit of background and the idea of the paper:

Random numbers are important for quite a few applications, including cryptography (e.g. keeping your credit card details safe), computer simulations (of anything from your local weather report to astrophysics), and gambling.

In particular, in cryptography it is very important that your random numbers cannot be predicted by anyone else. If someone else can guess the numbers they can hack you. A few years ago, quite a few keys used on the internet were broken exactly because the randomness used to generate them was not good enough http://benlog.com/2012/02/16/its-the-randomness-stupid/ .

The easiest, and most common, way to generate random numbers is to take some input which is likely to be quite random, such as the timing of keystrokes, the time of day, temperature etc., and then run it through a computer algorithm which spits out bits that look random. This works fine for some purposes, but although the output looks random it is really completely determined by the input, since anything the computer does amounts to applying some fixed set of rules. If the inputs are not picked very carefully, the output may be less random than you think, and security can be compromised, as happened a couple of years back.
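That determinism is easy to see with a standard library pseudo-random generator (a generic illustration, not the specific flaw behind the broken keys):

```python
import random

# A software pseudo-random generator is fully determined by its seed:
# the same input always yields the same "random-looking" output stream.
gen1 = random.Random(42)
gen2 = random.Random(42)

bits1 = [gen1.randint(0, 1) for _ in range(16)]
bits2 = [gen2.randint(0, 1) for _ in range(16)]
print(bits1 == bits2)  # True: identical streams from identical seeds
```

Anyone who can guess the seed can reproduce every "random" bit, which is exactly how poorly seeded keys were broken.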

In fact, the problem is even more fundamental than that. Finding some truly random inputs to use is not easy at all. Think of flipping a coin, for example. The outcome seems to be random, but if you knew the coin’s initial state (position, velocity, speed of rotation etc.) then the outcome could be predicted from Newtonian mechanics. This is true for all processes in classical physics – they are deterministic. So, we turn to the only place in nature where, as far as we know at the moment, we can get some true randomness: quantum physics.

In quantum physics, the outcomes of measurements are not predictable even in principle. That is, even if you know the initial state of a system perfectly, it is not possible to predict with certainty the outcome of every measurement that can be made on it. That is very good from the perspective of creating randomness! If I generate my random numbers by measuring a quantum system, even if an attacker knew everything about how my system works and all the input parameters I use, there would be no way he could guess my random bits. This is great, and in fact it is already used commercially for randomness generation. You can go and buy a quantum random number generator, e.g. here: https://www.idquantique.com/random-number-generation/products/

In practice, to have a guarantee not just that your system is random, but of how random it is, you need to characterise it quite well. Imagine that you are generating random numbers from a classical coin. Ideally, heads and tails are equally likely, but for any real coin there will be a slight bias towards one or the other. Similarly, for a quantum random process not all outcomes will be equally likely, and this must be accounted for when extracting randomness. This can be done, but it may be a little cumbersome, and in particular, if the characteristics of the device change over time, the guarantees that you had initially may no longer hold.
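As a classic illustration of bias correction (von Neumann’s old trick for coins, not the protocol of our paper), one can turn independent but biased bits into perfectly unbiased ones:

```python
def von_neumann_extract(bits):
    """Turn a stream of independent but biased bits into unbiased ones.
    Read the input in pairs: 01 -> output 0, 10 -> output 1, and the
    pairs 00 and 11 are discarded. If each input bit is independent with
    the same (unknown) bias p, the two kept pairs are equally likely
    (both have probability p*(1-p)), so the output is unbiased for any p."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Demo: a stream biased towards 1 still yields unbiased output bits.
biased = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1]
print(von_neumann_extract(biased))  # [0, 1, 0]
```

The price is throughput: the more biased the source, the more pairs get discarded, and the trick fails if the bias drifts within a pair – which is why a real-time, self-correcting guarantee is harder to achieve.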

Here is where our new work comes in: we have a way to guarantee and quantify randomness in the output based on only a few general assumptions about the physical process. That is, in our approach you do not need to characterise the system very carefully. If there is a bias in the outcomes, the protocol will automatically correct for it, always ensuring that the output bits are completely random. Importantly, it does so in real time, so even if the bias drifts, it adapts. We can see this in our experiment by switching the aircon in the lab on and off. The change in temperature influences the quantum system, changing the bias, and we see a jump in the rate of randomness generation.