Good thermometry with bad measurements

May 31, 2021

Popular summary by PhD student Mathias Rønnow Jørgensen.

About a week ago our new paper “Optimal quantum thermometry with coarse-grained measurements” appeared in PRX Quantum. Preprint on the arXiv here.

This work is about taking the temperature of a quantum system. Ordinary thermometers, like the ones used to check if you have a fever, work well for large systems with a temperature which is not too cold and not too hot. When measuring temperatures at the quantum scale, however, we need to design thermometers which can operate at very low temperatures and which won’t disturb the system too much. An intriguing possibility is to use individual quantum probes as thermometers. For example, we could take a single atom, allow the atom to interact with the thermal system for some time, and then measure the energy of the atom. The outcome of the measurement reveals something about the system temperature.
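
As a reminder of what the measurement outcome is sensitive to: a system in thermal equilibrium at temperature T occupies its energy levels E_n according to the Boltzmann distribution, and it is this temperature dependence that any quantum thermometer ultimately exploits,

\[ p_n(T) = \frac{e^{-E_n/k_B T}}{Z(T)}, \qquad Z(T) = \sum_n e^{-E_n/k_B T}. \]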

In our paper we investigate how to design optimal thermometers, that is, thermometers with maximal precision, given that the available measurements themselves have limited precision. It is, in fact, well known that the measurements which give the most information about temperature are precise measurements of the total energy of the full system. Energy in quantum systems is quantised (hence the name), and the best measurement should distinguish all the distinct energy levels of the system. However, this becomes extremely difficult in even moderately sized systems, as the number of levels grows rapidly and they become closely spaced. Realistic energy measurements in such systems always involve some coarse graining over the individual energy levels. Using tools from signal processing theory, we derive an equation describing the structure of optimal coarse-grained measurements. Surprisingly, we find that good temperature estimates can generally be attained even when the number of distinct measurement outcomes is small, that is, for very coarse-grained measurements.
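
To get a feeling for what coarse graining does, here is a minimal numerical sketch (a toy spectrum and a hand-picked binning, not the optimisation machinery of the paper). It compares the temperature information, quantified by the Fisher information, available from a fully resolved energy measurement and from a two-outcome coarse-grained one.

```python
import numpy as np

def thermal_probs(E, T):
    """Boltzmann probabilities for energy levels E at temperature T (k_B = 1)."""
    w = np.exp(-(E - E.min()) / T)
    return w / w.sum()

def fisher_information(E, T, bins=None):
    """Classical Fisher information about T for an energy measurement.

    If `bins` is given, outcomes are coarse-grained by summing probabilities
    (and their temperature derivatives) within each group of levels.
    """
    p = thermal_probs(E, T)
    mean_E = np.sum(p * E)
    dp = p * (E - mean_E) / T**2          # dp_n/dT for a thermal state
    if bins is not None:
        p = np.array([p[b].sum() for b in bins])
        dp = np.array([dp[b].sum() for b in bins])
    return np.sum(dp**2 / p)

# Toy spectrum: 16 equally spaced levels
E = np.arange(16.0)
T = 1.0

F_fine = fisher_information(E, T)                 # all 16 outcomes resolved
coarse = [np.arange(0, 8), np.arange(8, 16)]      # only 2 outcomes
F_coarse = fisher_information(E, T, bins=coarse)

print(f"fine-grained     F = {F_fine:.3f}")
print(f"coarse (2 bins)  F = {F_coarse:.3f}  ({F_coarse / F_fine:.0%} of optimal)")
```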

We apply our results to many-body quantum systems and nonequilibrium thermometry. For the latter, in particular, we consider a probe of given dimension interacting with the sample, followed by a measurement of the probe. We derive an upper bound on the precision of arbitrary, nonequilibrium strategies for such probe-based thermometry and illustrate it for thermometry on an ultracold gas (specifically, a Bose-Einstein condensate) using a single atom as a probe. We find that even for a probe with just two energy levels, the coarse-graining constraint still allows approximately 64% of the best possible thermometric precision.

Published paper: https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.2.020322

Multitasking: what are the limits?

December 2, 2020

On the “first day of Christmas” (or at least of the Christmas month), we had a paper out in Physical Review Research: https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.2.043302. I didn’t have time to write about it when it was first posted on the arXiv, but at least I get around to it now… It is about how to quantify the efficiency of thermal machines which perform more than one useful task at the same time.

Useful tasks, such as generating electric power, cooling stuff down in a refrigerator, or heating stuff in an oven, are all performed by machines which exchange energy with some energetic reservoirs. Classical thermodynamics places strict bounds on the efficiency of machines which consume heat to either produce work, cool, or heat. However, more complex machines which perform multiple tasks at once are also possible. In this work, we introduce efficiencies applicable to such hybrid machines and establish their thermodynamic limits.

A hybrid machine might, for example, simultaneously cool by removing heat from a cold reservoir while also producing work using heat from a hot reservoir — that is, simultaneously be a fridge and a power station. More generally, we use a very broad approach where the machines are allowed to exchange not only energy with the reservoirs, but also other conserved quantities such as particles, and to perform any number of tasks involving these quantities. This enables complex situations where one conserved quantity can be exchanged for another to favour one or the other task. The fundamental thermodynamic limits on performance are then governed by a generalised variant of the 2nd law of thermodynamics (that’s the one which says that disorder can never decrease and which gives us the arrow of time). Despite the complexity, by starting from the 2nd law, we are able to give simple expressions for the overall efficiency of general hybrid machines, as well as the efficiency of each individual task performed by the machines.
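
Schematically, the constraint behind all of this is the steady-state second law for a machine exchanging energy currents \dot{E}_\alpha and particle currents \dot{N}_\alpha (counted as flowing into the machine) with reservoirs \alpha at temperatures T_\alpha and chemical potentials \mu_\alpha. The sign and bookkeeping conventions in the paper may differ, but the structure is

\[ \dot{\Sigma} = -\sum_\alpha \frac{\dot{E}_\alpha - \mu_\alpha \dot{N}_\alpha}{T_\alpha} \;\ge\; 0, \]

where \dot{\Sigma} is the total entropy production rate; the efficiencies compare useful outputs to consumed resources subject to this inequality.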

We also study the possibility of building hybrid machines in practice. We show that a minimal machine, with two conserved quantities and up to three useful tasks, can be implemented in tiny electronic circuits coupled to quantum dots – artificial, atom-like structures embedded in the electronics, where the energy is quantised. The device uses energy and particles (electrons) as the conserved quantities, and can cool, heat, and produce electrical work. It should be possible to realise such a setup with current technology, so our results can be tested in experiments. They also provide new insight into thermodynamics – in particular in the quantum regime – and could potentially be used to guide the design of new kinds of thermal machines.

Another nice thing about this work is that it resulted from a very international collaboration with colleagues at the Austrian Academy of Sciences, the Universidad Autónoma in Madrid, Spain, ETH Zürich and the University of Geneva in Switzerland, and the University of Lund in Sweden. Various subsets of us met and discussed these ideas at workshops, conferences, and scientific visits. But I am not sure we were ever all in one place at the same time. Were we? The internet is a wondrous thing…

Published paper: https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.2.043302

Taking a temperature – how hard is it, really?

January 15, 2020

On Tuesday, we had a paper out on the arXiv: https://arxiv.org/abs/2001.04096. This one is about ‘taking the temperature’ of quantum systems. In particular, we ask how precisely it is possible to measure the temperature when the temperature is very low and the thermometer is imperfect.

Everyday thermometers – like the ones you use to check if your kids have a fever or whether your beard is likely to grow icicles on your way to work – have a precision of about 0.1°C. At everyday temperatures, that is. If things get much warmer or much colder, these thermometers tend to be less precise. As you might imagine, a medical thermometer is no good for measuring inside a 1000 °C melting furnace (even if it wouldn’t melt itself…) or at cryogenic temperatures in a vacuum chamber, as used in many physics experiments. Of course, at these extreme temperatures, no-one sane would use a regular medical thermometer. Instead, specialised instruments adapted to the temperature range are developed. Any sensor – like a thermometer – is only good within a certain range, and one should clearly use one designed for the range one is interested in. But even so, there may be restrictions on how good the precision can be. Maybe it is just fundamentally harder to measure very cold or very hot temperatures?

In this paper, we look at such fundamental restrictions when the temperature tends to absolute zero (that is, as cold as it gets). How hard is it to measure cold temperatures?

One answer, which was already well known previously, is that for quantum systems it is very hard, in the following sense. In quantum systems, the energy is – well, quantised. That is, different configurations of the system have energies that differ by a nonzero amount. In particular, it requires some amount of energy to take the system from the configuration with the least energy (usually called the ground state) to the configuration with the second lowest energy (usually called the first excited state). There is a gap in energy between these two states, and we say that the system is gapped. When the temperature is so low that the corresponding thermal energy becomes smaller than this gap, the precision of any temperature measurement starts to degrade very quickly. In fact, it degrades exponentially fast as the temperature decreases.
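
For the simplest gapped system, a single two-level system with gap Δ (setting k_B = 1), this can be made explicit with the standard textbook expression for the Fisher information F(T), whose inverse square root bounds the achievable temperature error per measurement. It is exponentially suppressed once T drops below Δ:

\[ F(T) = \frac{\Delta^2}{T^4}\,p(1-p), \qquad p = \frac{1}{e^{\Delta/T}+1}, \qquad F(T) \approx \frac{\Delta^2}{T^4}\,e^{-\Delta/T} \quad \text{for } T \ll \Delta. \]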

This limits how well any thermometer can do when there are no other constraints. However, the gap is not always the relevant energy scale. In many cases, the fundamental energy levels of the system cannot be resolved by measurements accessible in practice. For example, the levels grow closer and closer together as the system becomes bigger, so for large systems, we may not have access to any measurement that can distinguish them. The size of our thermometer also limits the energy resolution it has. In these situations, the relevant limit on precision is not set by the gap size. Nevertheless, the temperature may still be cold relative to other energy scales of interest. And we may well ask how the precision then behaves with temperature.

This is the question we answer in this paper. In previous work, together with Patrick Potts and Nicolas Brunner, we developed a mathematical framework able to deal with temperature measurements that have a finite energy resolution. However, at that time we were not able to determine what the ultimate precision at low temperatures would be. In the new paper, by identifying a new criterion for finite resolution and extending our previous framework, Mathias Rønnow Jørgensen was able to derive a tight bound on how the best precision behaves with temperature. Quite surprisingly, the precision can actually get better with decreasing temperature, for systems with an energy spectrum that is just right.

Published paper: https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033394

Connect to cooperate

August 19, 2019

With my sister Josefine Bohr Brask, we have a new paper out on the arXiv today: https://arxiv.org/abs/1908.05923

It deals with the question of how evolution can lead to cooperation. According to Darwin, individuals tend to behave in ways that maximise their own gain and chance of survival – ‘survival of the fittest’. Cooperative behaviour seems to contradict this, if we understand cooperation to mean that individuals help others without directly getting anything out of it. But we do see cooperation in nature, in many different species, from insects to humans. This is a major puzzle for science. In fact, explaining the evolution of cooperation has been called one of the biggest challenges facing scientists today.

This topic doesn’t have a lot to do with my usual business in quantum physics, but it is a very exciting research field in which Josefine is active, and she got me on board for this project :). In the paper, we study certain models for the evolution of cooperation based on combining game theory and social networks.

Game theory provides a simplified but promising approach to understanding the evolution of cooperation. In evolutionary game theory, the tension between selfishly focusing on one’s own gain or working for the common good is captured by simple games between two players. A famous example is the so-called Prisoner’s Dilemma:

Two prisoners are facing jail time, but their sentence depends on whether they are each willing to rat out the other. They are offered the following bargain:

  • If they both stay silent, they each get 1 year in jail.
  • If only one of them tells on the other, the one who tells goes free and the other gets 3 years.
  • If they both tell on each other, they both get 2 years.

Seen from the perspective of one prisoner, it is always better to rat the other out. If the other doesn’t say anything you go free, and if the other tells on you, you get 2 years instead of 3 by also telling on them. So selfish prisoners would end up both telling and so getting 2 years each. But that is worse than if they had cooperated! If they both stay silent, each only gets 1 year.

What does that have to do with evolution?

We can use the game to model evolution in the following way. Imagine a large population of individuals. Each individual has a fixed strategy – either always cooperate (i.e. stay silent) or always defect (i.e. rat the opponent out). We let the individuals in the population play against each other and register how many games they win or lose. Then we allow them to adapt their strategies, for example by copying the strategy of more successful individuals. And then we do it again. And again. After many rounds, one strategy may start to dominate in the population. For example, the cooperators die out and only defectors are left. By varying the parameters and rules of the game, we can try to figure out under what conditions cooperation can survive and spread.

It turns out that if everyone in the population just plays against everyone else, cooperation doesn’t stand a very good chance. The same is true if individuals are just randomly paired up in every round. When there is no structure in the population, cooperation generally cannot survive. Something more is needed.

A ‘something more’ which can make cooperation survive is social network structure. In nature – and in human contexts – an individual does not usually interact with every other individual in the population, but mostly with a particular bunch of individuals. Each of these is in turn connected to certain other individuals, and so on. Just like you are connected to your friends on Facebook (or in the real world), and they each have their own circles of friends. Evolution on a network can be modelled by having each individual play only against its neighbours in the network and adapt its strategy based on the strategies and performance of its neighbours.
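
For the curious, here is what a heavily simplified toy version of such a simulation can look like in Python. The network model, payoff values (negative jail years), and imitation rule below are illustrative choices, not the exact dynamics studied in the paper.

```python
import random
import networkx as nx

# Payoffs for the row player: C = cooperate (stay silent), D = defect (tell).
# Fewer years in jail is better, so payoffs are negative jail terms.
PAYOFF = {('C', 'C'): -1, ('C', 'D'): -3, ('D', 'C'): 0, ('D', 'D'): -2}

def total_payoff(G, strategy, node):
    """Sum of payoffs from playing once against each network neighbour."""
    return sum(PAYOFF[(strategy[node], strategy[nb])] for nb in G[node])

def evolve(G, strategy, rounds=200):
    """Toy imitation dynamics: each round, every individual copies a
    randomly chosen neighbour's strategy if that neighbour scored better."""
    for _ in range(rounds):
        scores = {n: total_payoff(G, strategy, n) for n in G}
        new_strategy = dict(strategy)
        for n in G:
            nb = random.choice(list(G[n]))
            if scores[nb] > scores[n]:
                new_strategy[n] = strategy[nb]
        strategy = new_strategy
    return strategy

random.seed(1)
G = nx.barabasi_albert_graph(200, 3)                   # a scale-free 'social' network
strategy = {n: random.choice(['C', 'D']) for n in G}   # random initial placement
final = evolve(G, strategy)
print("fraction of cooperators:", sum(s == 'C' for s in final.values()) / len(final))
```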

A number of studies have found that social network structure can in fact stabilise cooperation, even for games with a strong incentive for selfishness, such as the Prisoner’s Dilemma. So network structure is a strong candidate for explaining how it is possible for cooperation to evolve.

Of course, the actual dynamics of interactions, adaptation, and survival in nature are much more complicated than simple two-player games with just two strategies. However, simple models can also be powerful exactly because they potentially allow us to cut through the noise and identify the key underlying mechanisms. But one must be careful to check how general the conclusions drawn from them really are. Evolutionary game theory models for cooperation are often studied using numerical simulations, as they are not easily solved analytically. In that case, one needs to be sure that the technical details of the simulations do not affect the general conclusions (about whether cooperation survives) too much.

In our paper, we study the effect of initially placing cooperators and defectors in different types of positions in the network. Most simulations start by distributing equal numbers of cooperators and defectors in the network at random. We wanted to know whether changing this initial distribution affects the outcome, and how. For example, if we initially place cooperators in positions with many neighbours and defectors in positions with few neighbours, that might give the cooperators an advantage (more neighbours might copy their strategy, thus becoming cooperators too).

We find that in certain cases, the conclusions about the evolution of cooperation are robust. But for some commonly studied kinds of networks, correlating the initial positions of cooperators with the number of neighbours strongly affects whether cooperation survives or dies out. So one does need to be careful!

Less trust, more randomness

June 3, 2019

Continuing my quest to catch up on explaining our research results, here is another paper we put out back in April, in collaboration with colleagues in Geneva and Brussels: https://arxiv.org/abs/1904.04819

Like the one I wrote about last week, it is about how to use quantum physics to generate strong random numbers. In that work, we put a general bound on how much randomness one could possibly generate in any setup where the measurements are not trusted. Here, we go the other way and present a specific scheme which generates randomness with untrusted devices – and we implement it experimentally.

Quoting from my last post, random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I have practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ could know about the devices used to make the numbers. The less we need to assume, the less risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness, unless we make assumptions about how much an observer knows. It is one of the big surprises in quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcome of certain measurements even when you know all that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of different assumptions about the devices used. On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum randomness generators operate in this regime. On the other end of the scale, essentially nothing is assumed about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality. This is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates that have been achieved so far for device-independent generation of random numbers are relatively low (some bits per minute).

In between the two extremes, there is plenty of room to explore – to look for a good set of assumptions which gives a strong guarantee on the randomness but still allows for reasonable rates to be realised in practice.

One would like the assumptions to be well justified physically. This means that ideally, it should be something that one can check by measuring. A nice route towards this goal was pointed out by Thomas van Himbeeck and co-workers (https://arxiv.org/abs/1612.06828). They considered prepare-and-measure setups with two devices. One prepares quantum states, the other measures them. They showed that when the measurement device is untrusted, one can still certify the quantum behaviour of the experiment just from the observed data, provided that the energy of the prepared states is bounded.

The energy can be measured, and so it is possible to check whether this assumption holds in a given experiment. In our experimental implementation, the prepared states correspond to pulses of laser light with different intensities, and they are measured by a detector which just distinguishes between the presence or absence of photons (single quanta of light). This way, we can generate millions of random bits per second, with a very strong guarantee on how unpredictable they are. A user can verify in real time that the setup works correctly, based on the detector output and a bound on the energy in the laser pulses, which can also be justified directly from measurements.

Compared with earlier works (by myself and others), we’ve made the assumptions required to guarantee randomness much easier to justify, without losing very much on the rate. So, we’ve improved the trade-off between trust in the devices (how strong the randomness is) and the random bit rate (how much randomness we get per time).

Published paper: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062338

How random can a black box be?

May 25, 2019

I am way way behind on writing human-readable summaries of our new research results. Somehow, I’ve been continuously super busy after moving back to Denmark… I’ll try to start catching up though.

As a start, here is one which was published recently: https://doi.org/10.1103/PhysRevA.99.052338. It was done in collaboration with Marie Ioannou and Nicolas Brunner and is about how much randomness can be generated from a quantum black box. The open-access arXiv preprint is here: https://arxiv.org/abs/1811.02313 .

Random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I have practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ could know about the devices used to make the numbers. The less we need to assume, the less risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness, unless we make assumptions about how much an observer knows. Not so in the quantum world. It is one of the profound implications of quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcome of certain measurements even when you know all that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of assumptions of different strength about the devices used.

On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum randomness generators operate in this regime.

On the other end of the scale, essentially nothing is assumed to be known about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality. This is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates that have been achieved so far for device-independent generation of random numbers are relatively low (some bits per minute).

In between these two extremes, there is a wealth of different possibilities for trade-offs between how much is assumed about the devices, how fast randomness can be generated, and how strong guarantees can be made about it. In particular, many proposals have explored settings where the measurement device is uncharacterised but something may be known about the quantum states being measured on.

In this work, we derive a universal upper bound on how much randomness can be generated in any such scenario. It turns out to be harder to generate randomness in this manner than one might first think.

In particular, one might intuitively think that the randomness can always be increased by increasing the number of possible measurement outcomes. If I throw a coin, there are two outcomes (heads or tails). So the probability to guess the outcome correctly is one out of two. In this case, one bit of randomness is generated for each throw. If instead I throw a die, there are six possible outcomes and the probability to guess is now only one out of six. The outcome is more unpredictable, so there is more than one bit of randomness generated. One could keep increasing the number of outcomes and it seems that the randomness would also keep increasing.
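
In information-theoretic terms, this counting is captured by the min-entropy of the outcome, determined by the best possible guessing probability:

\[ H_{\min} = -\log_2 p_{\text{guess}}, \qquad \text{coin: } -\log_2 \tfrac{1}{2} = 1 \text{ bit}, \qquad \text{die: } -\log_2 \tfrac{1}{6} \approx 2.58 \text{ bits}. \]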

For devices that are completely trusted, this is indeed the case. However, if the measurement device is uncharacterised, it turns out to be wrong. The amount of randomness which can be guaranteed is limited not just by the number of outputs, but also by the number of quantum states which are measured on – i.e. the number of inputs. Thus, no matter how ingenious a scheme one may come up with, for a fixed number of inputs, the randomness that can be generated per round is limited. In fact, the number of inputs required grows very fast with the number of desired random bits (more precisely, exponentially fast).

This means that while generating many random bits per round is still possible theoretically, it would probably not be practical because of the large number of inputs required. Instead, to get high rates, one can focus on identifying experimentally friendly schemes with relatively few inputs which allow a high repetition rate (many rounds per second). For few inputs, we show that our bound can be reached – that is, there exist schemes which achieve the maximum possible randomness per round. This is probably true for any number of inputs.

Published paper: https://doi.org/10.1103/PhysRevA.99.052338

When does the noise add up?

June 27, 2018

Yesterday, our paper on the addition of quantum master equation generators was finally published in PRA. It has been underway for what feels like a loooong time (look at the received and published dates!) https://doi.org/10.1103/PhysRevA.97.062124.

When we first put this paper online, I apparently didn’t get around to writing a summary, so I’ll do one here:

In the paper, we investigate a somewhat technical question, but the context is not hard to understand.

Imagine you have a cold beer on a warm summer’s day. Clearly, if you leave your beer out in the sun, it’s going to warm up. This is because the beer is not isolated from the environment. Sunlight is hitting it and the warm air is touching it, giving off some heat to the beer. We say that the beer is an open system – it is interacting with its environment.

Now imagine that you want to describe how the beer evolves over time. How warm will it be after 10 minutes? How long does it take before it gets lukewarm and icky? In principle, to figure this out, you should track the trajectory of every air molecule hitting the bottle, to calculate exactly how much energy it gave to the beer, and how many photons of sunlight were absorbed or reflected etc. etc. But that is not very practical! The environment is huge, and keeping track of all its parts is next to impossible. And we are only really interested in what happens to the beer anyway.

Fortunately, if we are just interested in the beer, it is usually enough to account for the average effect of the environment. Instead of describing how every molecule or photon is absorbed or reflected, we can just look at average rates. How many photons arrive per second on average, for example. And that will be enough to tell us how the beer is warming up. This gives a huge simplification of the calculations.
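
For the beer, this averaged description is just Newton's law of cooling (or warming here). A toy numerical sketch with made-up numbers:

```python
T_beer = 4.0     # initial beer temperature in °C (made up)
T_air = 28.0     # effective environment temperature in °C (made up)
k = 0.03         # warming rate per minute, lumping air, sunlight, etc. into one number

dt = 1.0         # time step in minutes
for minute in range(31):
    if minute % 10 == 0:
        print(f"after {minute:2d} min: {T_beer:.1f} °C")
    # Newton's law of warming: dT/dt = k * (T_env - T)
    T_beer += dt * k * (T_air - T_beer)
```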

In quantum physics, we often deal with open systems. We may try to isolate our atoms, ions, or superconducting circuits as much as possible, but there will always be some contact to the environment. Sometimes, we may even want that, for example in quantum thermal machines. So we usually resort to averaging over the environment to get an effective description of how the system of interest evolves. In particular, we often use something called a quantum master equation.

The quantum master equation is a nice mathematical tool which allows us to find the time evolution of a quantum system in contact with a given environment. For every environment, we find the master equation, and then use that to figure out what happens to our quantum system.

The question we investigate in the paper is this: If the system is interacting with several environments at the same time, and we know the master equation for each of them, can we then just add them up to get the effect of the total environment? For example, the beer is heating up both because of the warm air and because of the sun shining on it. If we know the rate of heating by the air and the rate of heating by the sunlight, can we then just add them to figure out how fast the beer is really heating up?
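
In more technical terms (writing the master equation in the standard Lindblad form, with ħ = 1), the question is whether the dissipator obtained for each environment on its own can simply be summed in the equation for the total evolution:

\[ \frac{d\rho}{dt} = -i[H,\rho] + \mathcal{D}_1(\rho) + \mathcal{D}_2(\rho), \qquad \mathcal{D}_j(\rho) = \sum_k \gamma_{j,k}\left( L_{j,k}\,\rho\,L_{j,k}^\dagger - \tfrac{1}{2}\left\{ L_{j,k}^\dagger L_{j,k},\, \rho \right\} \right), \]

where \mathcal{D}_1 and \mathcal{D}_2 are the generators derived for environment 1 and environment 2 separately.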

Adding is easy, so calculations are much easier if the answer is yes. For the warming beer, this is indeed the case. However, for quantum systems, things are a bit more complicated. Sometimes adding is ok, at other times it results in evolutions that are not correct, or even in equations that do not correspond to any possible physical evolution. In our paper, we establish conditions for when adding is allowed, and when it gives incorrect or non-physical results.

Published paper: https://doi.org/10.1103/PhysRevA.97.062124

The cost of cooling (quantum) beer

November 16, 2017

Some two weeks ago, we had a new paper out on the arXiv, which I haven’t had the time to write about until now: https://arxiv.org/abs/1710.11624. It has been under way for quite a while, but now we finally managed to put it all on (virtual) paper and get it out there.

This work contributes to building a consistent picture of thermodynamics at the quantum scale.

Thermodynamics explains how machines like steam engines and fridges work. It describes how heat can be moved around or transformed into other useful forms of energy, such as motion in a locomotive. And it tells us the fundamental limits on how well any such machine can perform, no matter how clever and intricate. In turn, the study of ideal machines has taught us fundamental things about nature, such as the second law of thermodynamics, which says that the entropy of an isolated system can never decrease (often thought of as stating that the ‘mess’ of the universe can only get bigger over time). Or the third law, which says that cooling to absolute zero requires infinite resources.

Traditionally, thermodynamics deals with big systems (think, steam locomotive), whose components are well described by classical physics. However, if we look at smaller and smaller scales, then eventually we need quantum physics to describe these components, and quantum phenomena start to become important. What does thermodynamics look like at this quantum scale? Do the well known laws still hold? Can we make sense of such microscopic thermal processes? These are interesting theoretical questions. And by now, experimental techniques are getting so advanced that we can actually begin to build something like steam engines and fridges on the nanoscale. So they are starting to be relevant in practice as well.

A lot is already known about quantum thermodynamics, including generalisations of the Second and Third Laws and many results about the behaviour of thermal machines. However, it is fair to say that it is still work in progress – we do not yet have a full, coherent picture of quantum thermodynamics. Different approaches have been developed and it is not always clear how they fit together.

One point where classical and quantum thermodynamics differ is on how much it ‘costs’ to have control over a system, in terms of work energy (think of work as energy in an ordered, useful form as opposed to disordered heat energy). In the classical world the work cost of control can usually be neglected. Not so in the quantum world. There the cost of control can be a significant part of the cost of operating a machine.

In our paper, we add a piece towards completing the puzzle of quantum thermodynamics by studying the role of control for cooling. By looking at small fridges with more or less available control, we are able to compare different paradigms which have been developed in the field, looking at how much one can cool under each of them, and how much it costs.

Published paper:

This paper eventually got split into two parts: one focusing on a universal limit to cooling quantum systems, and one focusing on the work cost of cooling.

The first paper was published in PRL here: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.170605

The second was published in PRE here: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.100.042130

A Chinese-Swiss quantum steam engine

October 20, 2017

On Monday last week, we had a new paper out on the arXiv: https://arxiv.org/abs/1710.02621

In it, we consider a quantum thermal machine for generating entanglement. Entanglement is a form of quantum correlation which is essential in quantum information protocols. It can be used for ultra-precise measurements, for example of magnetic fields, and for quantum computing, among many other applications. The machine uses only differences in temperature and interactions that do not require any active control, and so entanglement can be obtained just by turning on the machine and waiting, which is neat.

What I find particularly nice is that the paper is a good example of scientific collaboration across the globe. The new scheme improves upon a design for thermal entanglement generation which we developed with colleagues here in Geneva and in Barcelona. I wrote about that work here (have a look for a summary of what makes thermal entanglement generation exciting). Recently, colleagues in Qufu, China, who had been working on similar setups, realised that the entanglement generation could be improved by including an additional thermal bath. Zhong-Xiao Man then contacted me, and with Armin Tavakoli we confirmed their results and thought about how the new scheme could be realised in practice. After just a few emails back and forth, the paper came together. The bulk of the work was done by Zhong-Xiao and his colleagues, but Armin and I also made a significant contribution, and in the end, I think the paper is much better than what any of us might have done alone.

So thanks to Zhong-Xiao for bringing us on board. And I am happy to do research in the age of the internet, which has made this kind of interaction much, much easier, faster, and likelier to happen :).

Published paper: https://iopscience.iop.org/article/10.1088/1402-4896/ab0c51

Talking about quantum noise: local or global?

August 27, 2017

A few weeks back, we had a paper out on the arXiv, which I haven’t had time to write about yet. https://arxiv.org/abs/1707.09211

The topic of the paper is quantum master equations – a somewhat technical subject, but very important for much of the other physics we study, especially small thermal machines, like the ones I have written about here and here.

When we try to describe a thermal machine, we are faced with a problem. The machine necessarily interacts with some thermal reservoirs. These are large, messy systems with many, many particles. In fact, this is true more generally. Any small quantum system interacts with the surrounding environment in some way. We may do our best to isolate it (and experimentalists typically do a good job!), but some weak interaction will always be present. The environment is big and complicated, and it is extremely cumbersome, if not impossible, to describe in detail what is going on with all the individual particles there. It would make our lives miserable if we had to try…

This is where quantum master equations come in. Instead of describing the environment in detail, one can account for the average effect it has on the system. The noiseless behaviour that an isolated system would follow is modified to include noise introduced by the environment. There are various techniques for doing so. The quantum master equation approach is one of the most important and widespread.

Master equations give us a powerful computational tool, and we rely on them a lot when we try to understand what is going on, for example in quantum thermal machines. They have been around for more than half a century, but there are still aspects which are not completely understood. Since a master equation accounts for the effects of a large, complicated environment which is not explicitly described, deriving one always involves some approximations. And it can sometimes be unclear when these approximations are reliable.

In our paper, we address one such ambiguity which is particularly relevant for studying small quantum thermal machines, or more generally, energy transport in a small quantum system (this might be relevant e.g. in photosynthesis, where light energy is transported through molecules).

Imagine that the quantum system consists of two particles. Imagine that each particle is in contact with a separate environment, and that the particles also interact with each other. Now one could derive a quantum master equation for the system in two different ways. One could either first account for the noise introduced by the environments on each particle separately, and then account for the interaction between them. Or one could first account for the interaction between the particles, and then find the noise induced by the environments on this composite system. This leads to two different master equations, often referred to as ‘local’ and ‘global’, because in the former case, noise acts locally on each particle, while in the latter it acts on both particles in a collective manner.
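
To make the two options concrete, here is a minimal QuTiP sketch of the local approach for two coupled qubits with separate thermal baths (toy parameter values, ħ = k_B = 1). The global approach would instead construct the jump operators in the eigenbasis of the full coupled Hamiltonian; the paper benchmarks both against an exactly solvable model.

```python
import numpy as np
from qutip import tensor, qeye, sigmaz, sigmam, basis, mesolve

# Two coupled qubits, each damped by its own thermal bath (LOCAL approach only).
w1, w2 = 1.0, 1.0          # qubit frequencies (toy values)
g = 0.05                   # inter-qubit coupling
gam1, gam2 = 0.01, 0.01    # system-bath coupling rates
n1, n2 = 0.5, 0.05         # thermal occupations of the 'hot' and 'cold' baths

sz1, sz2 = tensor(sigmaz(), qeye(2)), tensor(qeye(2), sigmaz())
sm1, sm2 = tensor(sigmam(), qeye(2)), tensor(qeye(2), sigmam())

H = 0.5 * w1 * sz1 + 0.5 * w2 * sz2 + g * (sm1.dag() * sm2 + sm1 * sm2.dag())

# Local dissipators: each bath acts on 'its' qubit, ignoring the coupling g.
c_ops = [
    np.sqrt(gam1 * (n1 + 1)) * sm1, np.sqrt(gam1 * n1) * sm1.dag(),
    np.sqrt(gam2 * (n2 + 1)) * sm2, np.sqrt(gam2 * n2) * sm2.dag(),
]

psi0 = tensor(basis(2, 0), basis(2, 1))   # qubit 1 excited, qubit 2 in its ground state
tlist = np.linspace(0, 500, 201)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[sm1.dag() * sm1, sm2.dag() * sm2])

print("late-time excited-state populations:", result.expect[0][-1], result.expect[1][-1])
```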

There has been quite a bit of discussion in the community on whether the local or global approach is appropriate for describing certain thermal machines, and even results showing that employing a master equation in the wrong regime can lead to violations of fundamental physical principles such as the second law of thermodynamics. In our paper, we compare the two approaches against an exactly solvable model (that is, one where the environment can be treated in detail) and study rigorously when one or the other approach holds. We find what could be intuitively expected: when the interaction between the system particles is weak, the local approach is valid and the global one fails. On the other hand, when the inter-system interaction is strong, the two particles should be treated as a single system, and the global approach is the valid one. For intermediate couplings, both approaches approximate the true evolution well.

This is reassuring, and provides a solid foundation for our (and others’) studies of small quantum thermal machines and other open quantum systems.

Published paper: https://iopscience.iop.org/article/10.1088/1367-2630/aa964f

Lots of entanglement from a thermal machine

August 7, 2017

Today we have a new paper out on the arXiv in which we describe a quantum thermal machine which generates maximal entanglement in any dimension. https://arxiv.org/abs/1708.01428

Thermal machines do a lot of useful tasks for us. Power plants turn heat into electricity. Refrigerators keep our beers cold. Steam locomotives pull trains (OK, maybe they mostly don’t any more – but a steam engine is the archetypical picture of a thermal machine). In general they are machines which move heat around, or transform it. Often by connecting different points of the machine to different temperatures and exploiting the heat flow between them.

Classical thermal machines are big. Think about power plants – and even a fridge is about the size of a person. At these scales, we don’t need to worry about quantum physics to understand what is going on. But what happens if we make such a machine smaller and smaller, to the point where quantum effects become important? Say we take a locomotive and scale it down, down, down, until the boiler and gears and so on consist of just a few atoms? Of course, it won’t really be a locomotive any more – but maybe we can learn something interesting?

Indeed, we can. And the machine can still be useful.

In recent years, physicists (such as Paul Skrzypczyk and Marcus Huber) have learned a lot about thermodynamics on the quantum scale by studying such tiny thermal machines. Looking at how fundamental concepts from classical thermodynamics, such as the 2nd law or Carnot efficiency, behave in the quantum regime, we gain new insights into the differences between the classical and quantum worlds.

In the spirit of the invention of steam engines at the time of the industrial revolution, we can also think about whether there are new tasks that such quantum thermal machines could do.

Entanglement is an essential quantum phenomenon. Objects which are entangled behave as if they are a single entity even when separated and manipulated independently. This enables new, powerful applications such as quantum computing and quantum metrology, and is at the heart of the foundations of quantum physics. So creating and studying entanglement is very interesting from both fundamental and applied points of view. Might a thermal machine be used to generate entanglement, then?

Entanglement is generally very fragile, and thermal noise tends to wash it out quickly. In fact, a lot of effort in quantum physics experiments goes into keeping the systems cold and isolated, so as to be able to observe the genuinely quantum effects. So it is not at all obvious that using a thermal machine for entanglement generation would work. However, it turns out that connecting with noisy environments can indeed help to create and keep entanglement stable, in certain systems.

This was already realised and studied by other researchers. Then, a couple of years ago, we described a minimal thermal machine generating entanglement (as I wrote about here). That setup was nice because it was really the simplest quantum thermal machine imaginable, using just two quantum bits and two different temperatures, and it turned out this was already sufficient to see entanglement.

However, the amount of entanglement which that machine could generate was rather limited. In our new paper, we present a new quantum thermal machine – not much more complicated – which generates maximal entanglement. And it does this, not only for two quantum bits, but also for two quantum trits, and in fact for two quantum systems of any dimension (which we prove analytically thanks to the hard work of Armin Tavakoli). The new machine again uses just two different temperatures and two quantum systems of some given dimension. One system is connected to a cold bath, one to a hot bath, and they interact with each other. When heat flows from hot to cold through the two systems, they become entangled.
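
For reference, a maximally entangled state of two d-dimensional systems is (up to local rotations) the state

\[ |\Phi_d\rangle = \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} |k\rangle|k\rangle , \]

which for d = 2 is a Bell state of two qubits.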

So indeed, a quantum thermal machine can be useful. And maybe in the future we might see quantum computers, or sensitive quantum sensors, based on thermally generated entanglement.

Published paper: https://quantum-journal.org/papers/q-2018-06-13-73/

A tiny chilly thermometer

March 24, 2017

Two weeks ago, we had a paper out on arXiv: https://arxiv.org/abs/1703.03719. In it, we show how a small quantum thermal machine can be used as a thermometer. And how one could build one with superconducting circuits. Here is a bit of context.

Usual thermometers – like the medical one, showing you’re definitely running a fever, or the good old mercury-in-a-glass-tube by the window showing that yes, it’s bloody cold outside – are based on the so-called ‘zeroth law of thermodynamics’. The 0th law states that if one object is in thermal equilibrium with two other objects, then these two must also be in equilibrium with each other. From there it follows that there is something which all systems in thermal equilibrium have in common. That something is the temperature. Systems in thermal equilibrium with each other have the same temperature.

So then, to build a universal thermometer, one just needs a well characterised object, where the relation between temperature and some observable property is known – a glass tube with mercury in it, for example, where one knows what temperature corresponds to different heights of the mercury. To measure the temperature of anything – say your body – one just sticks the thermometer in that thing, waits for the two to be in thermal equilibrium, and then reads off the thermometer. Because of the zeroth law, the temperature on the thermometer is the same as the temperature of the thing.

This is very neat, because one doesn’t need to know anything about how the thermometer is coupled to the sample being measured. There could be all kinds of complicated couplings going on, but as long as one waits for thermal equilibrium to set in, there is no need to worry about that. Only the thermometer itself has to be characterised.

However, there could be cases where waiting for equilibrium might screw things up. When we put a thermometer and a sample in contact, they exchange heat until equilibrium is reached. For example, if the thermometer is warmer than the sample, the sample will heat up a bit while the thermometer gets colder, until they are at the same temperature. For everyday uses, like taking your temperature, the thermometer is almost always much much smaller than the thing being measured, and so the heating (or cooling) of the sample is tiny, and can be ignored. In quantum physics experiments though, this is not necessarily the case. There, things often need to be very cold to make the quantum features noticeable, and the sample might be tiny itself. In such cases, experimentalists generally work hard to cool their quantum samples. So then, if we can’t make the thermometer colder than the sample, how can we measure its temperature without heating it up?

In our paper, we show that instead of equilibration, one can construct a thermometer which simultaneously cools the sample and allows its temperature to be estimated. The construction is based on a quantum thermal machine – basically a teeny tiny fridge, like this one I have previously written about. The machine has the same nice feature as an ordinary thermometer – that one doesn’t need to know the details of how the sample and thermometer couple to each other – but without causing any heating (this is achieved by operating close to the Carnot efficiency, which you might have heard about in classical thermodynamics).

It is interesting that this is possible for tiny quantum systems – and we also show that it could be done in superconducting circuits with existing experimental techniques. So who knows, perhaps the idea may turn out to be useful in the lab at some point.

Published paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.090603

Getting more random with quantum

December 21, 2016

We’ve one last paper out today, before the times of comfort, joy, and gluttony set in… https://arxiv.org/abs/1612.06566 . In it, we describe a new way to extract randomness from quantum physics, and demonstrate it experimentally too.

As I’ve written about before (here and here), random numbers are important for cryptography (e.g. keeping your credit card details safe), computer simulations (of anything from your local weather report to astrophysics), and gambling. But good random numbers are not that easy to create. Good means that no one else should be able to predict them in advance.

Something may seem random to you but perfectly non-random to someone else. Say I’m a magician and I have practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ could know about the devices used to make the numbers. The less we need to assume, the less risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness, unless we make assumptions about how much an observer knows. It is one of the big surprises in quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcome of certain measurements even when you know all that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of assumptions of different strength about the devices used. On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum randomness generators operate in this regime. On the other end of the scale, essentially nothing is assumed to be known about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality. This is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates that have been achieved so far for device-independent generation of random numbers are relatively low (some bits per minute).

In between the two extremes, there is room to explore – to look for a good set of assumptions which gives a strong guarantee on the randomness but still allows for reasonable rates to be realised in practice. With my colleagues in Geneva, we are doing this exploration, and implementing our ideas in the lab to check how practical they are.

In the new paper we look at a prepare-and-measure setup with two devices. One prepares quantum states, the other measures them. We make almost no assumptions about the measurement device, while something is known about the preparation device. It doesn’t need to be fully characterised, but it is known that the quantum states it prepares are not too different from each other (which, in quantum physics, means that they cannot be perfectly distinguished by any measurement). With these assumptions, the guys in the lab were able to generate random numbers at very high rates – millions of bits per second, comparable to commercial devices :).

So, we’ve found a nice trade-off between trust in the devices and random bit rate. There is still plenty of room to explore though. I’ve made a little plot in the ‘space’ of trust vs. bitrate. On the lower left, with low trust but also low rates, the yellow stars show the results from device-independent experiments. The yellow star in the top right shows a commercial quantum random number generator. It achieves a high rate, but requires more trust in the device. Our new paper is the top green star – it requires an intermediate level of trust and achieves a high rate. The other green star is an experiment we did a little while back, using another set of assumptions and achieving a somewhat lower rate. Different assumptions may be appropriate in different situations, and so there is still lots of unknown territory in this plot.

The big question is, how close can we get to the sweet spot of low trust and high rate on the top left?

Published paper: https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.7.054018

A quantum electronics fridge

July 28, 2016

A week and a few days back, we put a new paper out on the arXiv: https://arxiv.org/abs/1607.05218. I was travelling, so I didn’t get around to writing anything before now (I’m getting behind on writing these things… with Rafael Chaves, we have another one out today, but it’ll have to wait).

This one proposes to build a teeny tiny fridge using fancy quantum electronics involving superconductors and Josephson junctions. Sounds crazy, but we think this is actually something experimentalists can do (at least pretty soon) :).

A fridge is a thermal machine – in this case a tiny, quantum one, so the context of this work is quantum thermodynamics. I’ve written about that before, so let me quote a previous post to give some background:

Thermal machines – such as the refrigerator which keeps your beer cold and makes ice for your caipirinha, or a steam turbine generating electricity from heat in a power plant – have been studied for a long time. The desire to improve early steam engines led to the development of thermodynamics which is now a very broad physical theory dealing with any process where heat is exchanged or converted into other forms of energy. Thermodynamics now allows us to understand well what goes on in thermal machines.

Quantum mechanics is another very successful theory, which gives us a good description of things on very small scales – the interaction of a few atoms with each other, or of an atom with light and so on. As you may know, on these scales the physics is different from everyday experience, and weird things start to happen. Quantum systems can be in superpositions – the famous Schrödinger’s cat which is neither dead nor alive – and can show correlations that are stronger than in any classical system.

Usually, when we think about thermal processes, such as cooling a beer, large systems with many particles are involved (the beer and the refrigerator consist of zillions of atoms). It is natural to ask though, what happens when we make things so small that quantum effects begin to matter? Can we understand thermodynamics at the quantum level? Can we still define quantities such as heat and work? What happens to important concepts in thermodynamics such as the Carnot efficiency or the second law?

There is a lot of work going on at present trying to answer these questions. One approach is to go back to the beginnings of thermodynamics – steam engines and other thermal machines – and make the machines as small as possible. Such quantum thermal machines are a good testing ground where ideas from thermodynamics and quantum mechanics can be combined. Like previous papers I’ve written about, our new work follows this approach.

The new machine basically consists of three LC circuits that talk to each other and to the surroundings. LC circuits are everywhere in modern electronics. They are what enables a radio to tune in to a desired station and digital watches to keep track of time. An LC circuit is analogous to a spring or a pendulum: something that vibrates at a particular frequency. Physicists call such systems ‘harmonic oscillators’ and they are our favourite building blocks for… well, for everything! In our machine, the whole system is cooled down and the oscillators enter the quantum regime where only a few quanta of vibration are present at any time. Each oscillator is connected to a thermal bath that it can exchange energy with, and the baths are at different temperatures. If nothing else is done, each oscillator will then be at the temperature of its bath – it vibrates more or less according to how hot or cold the bath is. The goal of the fridge is to cool one of the oscillators below the temperature of the coldest bath. We can think of that oscillator as the ‘beer’.
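
(‘Vibrates more or less’ has a precise meaning here: a harmonic oscillator of frequency ω coupled to a bath at temperature T ends up, on average, with a number of vibration quanta given by the Bose-Einstein occupation,

\[ \bar{n}(\omega, T) = \frac{1}{e^{\hbar\omega/k_B T} - 1} . \] )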

To make the fridge cool, the oscillators are connected to each other through something called a Josephson junction. It is basically a weak link between two parts of a superconducting electronic circuit. The physics of what goes on there is a whole topic by itself, but in this case it just enables energy to hop between the oscillators in a very specific way: it can flow from the hottest and coldest oscillators into the one of intermediate temperature, and back. This is exactly what one needs for cooling. Thermodynamics tells us that heat never flows from cold to hot by itself (if you put a beer in a warm room, it never suddenly gets colder, right?). But by taking energy from the hottest oscillator too, we can make heat go from the cold oscillator into the intermediate one, thereby cooling the cold oscillator below its bath temperature. Voilà – the beer gets colder!
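To make the ‘heat never flows from cold to hot by itself’ statement a little more precise, here is the standard thermodynamic bookkeeping (textbook material, not specific to our paper). Write $Q_c$ and $Q_h$ for the heat flowing out of the cold and hot baths, and $Q_m$ for the heat dumped into the intermediate one, with temperatures $T_c < T_m < T_h$. In steady state, energy conservation gives

\[ Q_m = Q_c + Q_h , \]

and the second law (non-negative entropy production) requires

\[ \frac{Q_m}{T_m} - \frac{Q_c}{T_c} - \frac{Q_h}{T_h} \;\geq\; 0 . \]

Combining the two, cooling ($Q_c > 0$) is allowed provided

\[ Q_h \left( \frac{1}{T_m} - \frac{1}{T_h} \right) \;\geq\; Q_c \left( \frac{1}{T_c} - \frac{1}{T_m} \right), \]

i.e. enough heat has to be drawn from the hottest oscillator to ‘pay’ for the heat pulled out of the coldest one.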

So, it works. The machine has a couple of nice features too – once everything is connected, it runs all by itself without any external control. And the cooling can be switched on and off in a simple way. But perhaps the nicest thing is that experimentalists have gotten very good with superconducting electronics, and it looks like the machine can work with feasible parameters, so it should be doable in the lab. Let’s see… :).

Thanks to Patrick Hofer and Martí Perarnau Llobet who were the main forces behind this work!

Published paper: https://journals.aps.org/prb/abstract/10.1103/PhysRevB.94.235420

Spooky action with a single photon

March 21, 2016

We had a new paper out on the arXiv a couple of days ago (but I was ill, so I didn’t get around to writing anything before now): https://arxiv.org/abs/1603.03589

Using single photons, we experimentally demonstrated so-called Einstein-Podolsky-Rosen steering (the famous ‘spooky action-at-a-distance’ pointed out by Einstein and colleagues). I say ‘we’, but of course I didn’t turn the knobs myself. Rather, it’s been a fun collaboration between me and a couple of other theorists, and some of the excellent quantum optics experimentalists we share a roof with. It’s been great to see them turn our abstract ideas into reality in the lab :).

As I have written about in previous posts, quantum physics allows for correlations which are in a certain sense stronger than any which are possible in classical physics. These correlations come in different strengths, depending on the scenario in which they are obtained. Consider two parties, Alice and Bob. In a scenario where we trust both Alice and Bob (we know what they are doing in each of their labs), we can certify that they share quantum entanglement. This can be exploited e.g. for precision measurements or for some types of quantum cryptography. In a scenario where we trust neither Alice nor Bob (we make no assumptions about what exactly they are doing in their labs), the experimental requirements become more demanding, but if we work hard then we can certify that they share nonlocal correlations. These are even more powerful than entanglement and can be exploited for ultra-secure cryptography and generation of random numbers, as I’ve explained here and here. Steering is an intermediate scenario in which we trust only one of the parties. For example, we trust Bob (we know what he is doing) but not Alice (we make no assumptions about what happens in her lab). The correlations obtained in this case are stronger than entanglement, but weaker than full non-locality. And the experimental requirements are also harder than for entanglement, but not quite as hard as for nonlocality.
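To make the hierarchy concrete, here is a standard textbook example (the two-qubit Werner state; it is not the state used in our experiment). A maximally entangled state mixed with white noise, with mixing parameter p, is entangled for p > 1/3, steerable with projective measurements for p > 1/2, and violates the CHSH Bell inequality only for p > 1/√2 ≈ 0.71, so the three notions really are distinct. A small Python check of the first and last thresholds:

# Standard textbook example (not the state from our experiment): the two-qubit
# Werner state rho = p |psi-><psi-| + (1 - p) I/4. It is entangled exactly when
# its partial transpose has a negative eigenvalue (p > 1/3), and its maximal
# CHSH value is 2*sqrt(2)*p, so Bell violation needs p > 1/sqrt(2).
import numpy as np

def werner(p):
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

def min_eig_partial_transpose(rho):
    # Partial transpose on the second qubit of a two-qubit state.
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(rho_pt).min()

for p in [0.3, 0.4, 0.6, 0.8]:
    rho = werner(p)
    print(f"p = {p:.1f}: min eigenvalue of partial transpose = "
          f"{min_eig_partial_transpose(rho):+.3f}, "
          f"max CHSH = {2 * np.sqrt(2) * p:.2f} (local bound 2)")

The states and measurements in our photon experiment are different, but the logic is the same: steering sits strictly between entanglement and Bell nonlocality.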

Our experiment can be seen as a strong demonstration of single-photon entanglement, in a scenario where one party is untrusted. And also as an important step forward towards implementation of full non-locality with single photons. The results are also interesting from a more theoretical point of view, because we had to develop new steering inequalities which are suited for our particular experimental setup.

Published paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.117.070404

Single electrons do quantum

November 17, 2015

Today we have a new paper out, on entanglement with single electrons. http://arxiv.org/abs/1511.04450

As you perhaps know, or recall from previous posts of mine, entanglement is a phenomenon which is at the heart of quantum physics and distinguishes it from our everyday world. Objects that are entangled behave, in a sense, as if they are a single entity even when separated and manipulated independently. Entanglement can improve measurement precision, e.g. the precision of atomic clocks, and it enables quantum computing.

Usually we talk about entanglement between two particles, considering some property that each particle has. For example, the energy state of an atom here might be correlated with the energy of an atom in another location, such that the two atoms are entangled. But there don’t actually have to be two particles to create entanglement. One is enough.

If we shine light on a half-transparent mirror, half of it will go through, and half of it will be reflected. So we get two beams of light, going off in different directions. If we send a single photon (a particle of light) towards the mirror, it can also get transmitted or reflected. So if we put some cameras in the paths of the transmitted and reflected light, which measure whether the photon arrives, we will find a correlation: When a photon is detected in one camera, nothing is detected in the other, and vice versa. If we repeat the experiment, sometimes the photon arrives in one camera, sometimes in the other. We don’t know in advance which one it is going to be, but always exactly one camera clicks. This looks like a simple correlation, but quantum mechanics tells us that it is actually something more intricate. When the photon hits the mirror, it doesn’t simply either go through or get reflected. Instead, we get a so-called superposition of these two possibilities. Nature doesn’t decide which possibility is realised until we make a measurement (for example with the cameras), even though this may happen much later than the photon hitting the mirror.

So, considering the two paths after the mirror, we have a superposition of two possibilities: There is a photon in the path on the right of the mirror and none on the left (say), or vice versa. This looks very similar to entanglement. Entanglement between two atoms happens, for example, when we have a superposition where the first atom has a high energy and the second a low one or vice versa. However, now there is just one particle, and it is not a property of the particle that changes (such as the energy), but instead whether the particle is there at all or not, in a given path. Is this entanglement?
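Written out (with phases fixed by a convenient convention), the state after the half-transparent mirror is

\[ \frac{1}{\sqrt{2}} \big( |1\rangle_T |0\rangle_R + |0\rangle_T |1\rangle_R \big), \]

where $|n\rangle_{T/R}$ counts the photons in the transmitted and reflected paths. Compare this with two entangled atoms,

\[ \frac{1}{\sqrt{2}} \big( |e\rangle_1 |g\rangle_2 + |g\rangle_1 |e\rangle_2 \big), \]

where $|e\rangle$ and $|g\rangle$ are the excited and ground states. The structure is identical; the difference is that the entangled ‘objects’ are the two paths, and the entangled ‘property’ is the number of photons in each path.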

This question was debated for quite a while in the past. By now, it is well established that in the case of photons (light), the answer is ‘yes’, and in fact this entanglement is useful for applications, for example ultra-secure cryptography. We call this kind of entanglement ‘mode entanglement’. In the example above, each path is a ‘mode’ which can contain different numbers of photons, and the two separate paths are the objects which are entangled. These are not two particles – instead, the number of particles provides the degree of freedom in which the paths are entangled.

So the question is settled for light. What about other particles? What about electrons?

It turns out that for electrons, the question is more subtle and the debate is still ongoing. The thing is that to reveal the entanglement, it is not enough just to measure whether the particle is there or not in each path. One needs to also do ‘in between’ measurements, which require the creation of superpositions of zero or one particle locally in the path which is measured. This is ok for photons – it’s not easy, but there is nothing in principle forbidding such superpositions, and we can do measurements which are not quite optimal but good enough. But for particles with electrical charge – such as electrons – the situation is different. As far as we know, superpositions of states with different total charge cannot exist (this is known as the superselection rule for charge). So in particular, superpositions of zero or one electrons are ruled out, and it is unclear if it is possible to do measurements which will reveal mode entanglement for charged particles.
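In formulas (a standard way of phrasing the constraint, not specific to our paper): the split-electron state itself carries a fixed total charge and is perfectly acceptable,

\[ \frac{1}{\sqrt{2}} \big( |1\rangle_L |0\rangle_R + |0\rangle_L |1\rangle_R \big) \quad \text{(one electron in total)}, \]

whereas the local superpositions needed for the ‘in between’ measurements, of the form

\[ \frac{1}{\sqrt{2}} \big( |0\rangle_L + |1\rangle_L \big) \quad \text{(zero and one electron superposed in a single path)}, \]

mix states of different charge and are ruled out by the superselection rule.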

In this paper, we argue that the answer for electrons is also ‘yes’. We propose an experimental setup which uses single electrons split on a kind of electronic mirror, analogous to how a photon is split on a mirror for light. In our setup, the entanglement created by the splitting is revealed without breaking any fundamental principles. We need to use two single electrons at the same time, split on two separate mirrors. However, while two electrons are involved, we show that the result we obtain would be impossible unless each split single electron creates an entangled state. So, we conclude that single-electron entanglement indeed exists and is observable.

It will be interesting to see the reactions to this paper: whether our colleagues will be convinced or not, and if they are, whether someone is up for doing the experiment :).

Published paper: https://iopscience.iop.org/article/10.1088/1367-2630/18/4/043036

Quantum beer coolers

August 11, 2015

Today, with Nicolas Brunner, we have a new paper out on the arXiv: https://arxiv.org/abs/1508.02025. This one studies the behaviour of a small quantum refrigerator.

Thermal machines – such as the refrigerator which keeps your beer cold and makes ice for your caipirinha, or a steam turbine generating electricity from heat in a power plant – have been studied for a long time. The desire to improve early steam engines led to the development of thermodynamics which is now a very broad physical theory dealing with any process where heat is exchanged or converted into other forms of energy. Thermodynamics now allows us to understand well what goes on in thermal machines.

Quantum mechanics is another very successful theory, which gives us a good description of things on very small scales – the interaction of a few atoms with each other, or of an atom with light and so on. As you may know, on these scales the physics is different from everyday experience, and weird things start to happen. Quantum systems can be in superpositions – the famous Schrödinger’s cat which is neither dead nor alive – and can show correlations that are stronger than in any classical system (as I have written about before, for example here).

Usually, when we think about thermal processes, such as cooling a beer, large systems with many particles are involved (the beer and the refrigerator consist of zillions of atoms). It is natural to ask though, what happens when we make things so small that quantum effects begin to matter? Can we understand thermodynamics at the quantum level? Can we still define quantities such as heat and work? What happens to important concepts in thermodynamics such as the Carnot efficiency or the second law?

There is a lot of work going on at present trying to answer these questions. One approach is to go back to the beginnings of thermodynamics – steam engines and other thermal machines – and make the machines as small as possible. Such quantum thermal machines are a good testing ground where ideas from thermodynamics and quantum mechanics can be combined. Our work follows this approach. We look at a small absorption refrigerator consisting of just three two-level systems (think of three atoms) coupled to thermal baths at different temperatures.
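For the record, the standard model behind this fridge (following the earlier work this builds on; labels and conventions may differ slightly from the paper) is three qubits with energy gaps $E_1$, $E_2$, $E_3$, coupled to baths at temperatures $T_c < T_r < T_h$ respectively, with the gaps tuned so that $E_2 = E_1 + E_3$ and a weak three-body interaction that exchanges a pair of excitations for a single one:

\[ H = \sum_{i=1}^{3} E_i \, |1\rangle\langle 1|_i \; + \; g \big( |010\rangle\langle 101| + |101\rangle\langle 010| \big) . \]

One cycle of the fridge takes the system from $|101\rangle$ to $|010\rangle$: qubits 1 and 3 de-excite together while qubit 2 absorbs their combined energy. Qubit 2 then dumps $E_2$ into the room-temperature bath, qubit 3 re-excites from the hot bath, and qubit 1 re-excites by pulling $E_1$ out of the cold bath – that last step is the cooling.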

This quantum fridge has already been used to find several interesting results, for example that quantum entanglement can improve cooling, and that quantum machines can reach Carnot efficiency. These results were obtained by looking at the fridge in the ‘steady state’ – i.e. after a long time, when the ‘beer’ in the fridge is already cold. In this paper, we take a look at the ‘transient regime’ of the fridge – i.e. what happens in the time between putting a warm beer in the fridge and taking out a cold one. Our contribution is a bit technical. We map out some details of this process and find the time scales for the ‘beer’ to lose its quantum character or approach the steady state. Among other things, we find that the ‘beer’ can sometimes get colder at an intermediate time than in the steady state – i.e. if you want the beer cold, you shouldn’t leave it in the fridge too long. This is a purely quantum effect that happens to single-atom beers, but definitely not to that tasty IPA you were saving for later!
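As a cartoon of what ‘colder at an intermediate time than in the steady state’ looks like, here is a toy relaxation curve with made-up numbers (it is not the actual fridge dynamics from the paper, just a damped oscillation approaching its final value):

# Toy illustration with made-up numbers (not the actual fridge dynamics from the
# paper): a quantity that relaxes to its steady-state value while oscillating can
# dip below that value at intermediate times.
import numpy as np

T_ss, T_0 = 1.0, 2.0       # steady-state and initial 'temperature' (arbitrary units)
gamma, omega = 0.3, 2.0    # damping rate and oscillation frequency (hypothetical)

t = np.linspace(0, 20, 2001)
T = T_ss + (T_0 - T_ss) * np.exp(-gamma * t) * np.cos(omega * t)

i_min = np.argmin(T)
print(f"steady-state value: {T_ss:.2f}")
print(f"minimum value {T[i_min]:.2f}, reached at t = {t[i_min]:.2f}")

Stop the evolution near that first dip and the ‘beer’ is colder than it will ever be again; wait longer and it settles back up to the steady-state value.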

Published paper: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.92.062101

A know-nothing test of entropy

June 1, 2015

Friday we had a new paper out on the arXiv, here: https://arxiv.org/abs/1505.07802. This one is about device-independent tests of entropy.

Generally, to learn about the physical properties of something, such as temperature or mass, we use a measurement device that we already know well. We want our device to be well calibrated and characterised such that we can have confidence in the information it gives about whatever we are studying. But interestingly, there are some properties which can be inferred essentially without knowing anything about the devices used in the experiment.

One example of such a property is dimension, meaning the number of degrees of freedom that a physical system has. We can perform an experiment where a particle is prepared by one device and then sent to another which measures it. For various combinations of settings for the preparation and measurement devices, we collect data about how likely different measurement outcomes are. Without knowing anything more about how the devices function or about what the particle is, just by studying this data we can infer the number of degrees of freedom that the particle must possess.
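A standard toy example of this kind of inference (a textbook scenario, not necessarily the witness analysed in our paper): Alice receives two random bits and may send a single classical bit, or a single qubit, to Bob, who is then asked to guess one of the two bits, chosen at random. With one classical bit the best average success probability is 3/4; with one qubit it is cos²(π/8) ≈ 0.85. So data above 3/4 already certifies that what travelled between the devices cannot have been a single classical bit. A short Python check of the qubit strategy:

# Textbook 2->1 random access code (illustrative; not the specific witness from
# the paper). Alice encodes bits (a, b) into a qubit with Bloch vector
# ((-1)^a, 0, (-1)^b)/sqrt(2); Bob measures sigma_x to guess a, sigma_z to guess b.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def encode(a, b):
    rx, rz = (-1) ** a / np.sqrt(2), (-1) ** b / np.sqrt(2)
    return (I2 + rx * sx + rz * sz) / 2   # qubit density matrix

def prob_correct(rho, observable, bit):
    projector = (I2 + (-1) ** bit * observable) / 2   # outcome matching the bit
    return np.real(np.trace(projector @ rho))

success = np.mean([
    0.5 * (prob_correct(encode(a, b), sx, a) + prob_correct(encode(a, b), sz, b))
    for a in (0, 1) for b in (0, 1)
])
print(f"qubit success probability: {success:.4f} (best classical bit: 0.75)")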

Another example is quantum entanglement. In a similar experiment with two devices, from the data describing combinations of inputs and outputs one can infer that the devices share an entangled quantum state. Again without knowing anything about what is actually going on inside them.

In this paper we identify another quantity which can be tested in such a device-independent way, namely entropy. In a prepare-and-measure experiment, the entropy measures the number of degrees of freedom which are exploited by the particle on average. That is, while the dimension tells us how many degrees of freedom the particle must have, the entropy tells us to what degree it must exploit them to reproduce the data. These two things can be very different – as we show in the paper, it is possible to have experiments that require arbitrarily high dimension while the entropy remains very small. We also show that quantum particles do better than classical ones when it comes to entropy: for the same dimension, a quantum particle can realise the same task with lower entropy than a classical one.
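To illustrate the distinction (a simple illustration, not the construction from the paper): imagine a $d$-level system which is prepared in one particular level with probability $1-\epsilon$ and, with the remaining probability $\epsilon$, uniformly in one of the other $d-1$ levels. All $d$ levels get used occasionally, but the Shannon entropy is only

\[ H = h(\epsilon) + \epsilon \log_2 (d - 1), \qquad h(\epsilon) = -\epsilon \log_2 \epsilon - (1-\epsilon) \log_2 (1-\epsilon), \]

which for small $\epsilon$ stays far below the maximal value $\log_2 d$. For example, $d = 1024$ and $\epsilon = 0.01$ give $H \approx 0.18$ bits, against a maximum of 10 bits.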

From a fundamental point of view, it is interesting to ask what quantities can be assessed in a device-independent manner. So it’s nice that we now know entropy is one of them, as well as some of its basic characteristics. From a more practical point of view, device-independent tests of both dimension and entanglement have promising applications for quantum cryptography and random number generation. So it might very well be that device-independent entropy tests will also be useful for quantum information processing.

Published paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.110501

Making quantum states from heat

April 2, 2015

Today we have a new paper out on the arXiv, on entanglement generation in a quantum thermal machine: https://arxiv.org/abs/1504.00187

Entanglement is a phenomenon which is at the heart of quantum physics and distinguishes it from the everyday world of classical physics. Particles that are entangled behave, in a sense, as if they are a single entity even when separated and manipulated independently. Entanglement gives rise to correlations that are stronger than any which can be found in classical physics (as I have mentioned before, e.g. here), and it is a prerequisite for many applications such as quantum enhanced sensing (which I have also written about before) and of course quantum computing.

In our everyday experience, we do not notice these curious correlations. Quantum entanglement and coherence are fragile – when small quantum systems interact with a noisy environment, they are quickly messed up and become impossible to observe. Because of this, we usually think of entanglement as something that has to be carefully protected by isolating our quantum systems as much as possible from the surroundings and, in many cases, keeping them very cold. This is one of the reasons that building a quantum computer is really tricky. Much of the hard work that goes into experiments is about achieving this high degree of isolation from noise, to keep the systems coherent long enough to do something fun with them.

Could it be, however, that there are hot, noisy systems where entanglement nevertheless survives? Maybe even where the noisy environment helps generate the entanglement? There are indications that quantum effects play a role in biological processes such as photosynthesis, where the systems are complex and definitely not well isolated from the environment. So perhaps it should be possible.

It turns out that the answer is yes: there are indeed noisy systems where entanglement is stable. In our paper we describe such a system. We are not the first to ask these questions, or to show that entanglement can be generated by noisy processes. But we do have a very neat example, where the system is really stripped down to the bare essentials. It consists of just two quantum two-level systems (qubits), coupled to each other and to two thermal baths at different temperatures. This setup can be understood as a small thermal machine operating out of thermal equilibrium. Heat flows in from the warmer bath, through the qubits, and out into the cold bath. There is no other external control or source of coherence, but nevertheless we show that the steady state of the qubits is entangled when the couplings and temperatures are right. So our thermal machine maintains entanglement just by interacting with a noisy, thermal environment. Perhaps similar processes take place in biological systems?
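For anyone who wants to play with this kind of model numerically, here is a minimal sketch using the QuTiP library (the Hamiltonian, rates and temperatures are illustrative choices, not the exact model or parameters of the paper): two coupled qubits, each with its own thermal bath, the steady state of the resulting master equation, and its concurrence – a standard entanglement measure which is nonzero only for entangled two-qubit states.

# Minimal QuTiP sketch (illustrative model and parameters, not those of the paper):
# two coupled qubits with local hot and cold thermal baths; solve for the steady
# state of the Lindblad master equation and compute its concurrence.
import numpy as np
from qutip import tensor, qeye, sigmaz, sigmam, steadystate, concurrence

E = 1.0                    # qubit energy gap (units with hbar = kB = 1)
g = 0.2                    # qubit-qubit coupling strength (hypothetical)
gamma = 0.05               # bath coupling rate (hypothetical)
T_hot, T_cold = 2.0, 0.1   # bath temperatures (hypothetical)

sm1 = tensor(sigmam(), qeye(2))   # lowering operator of qubit 1
sm2 = tensor(qeye(2), sigmam())   # lowering operator of qubit 2

H = 0.5 * E * (tensor(sigmaz(), qeye(2)) + tensor(qeye(2), sigmaz())) \
    + g * (sm1.dag() * sm2 + sm2.dag() * sm1)

def thermal_ops(sm, T):
    nbar = 1.0 / np.expm1(E / T)   # mean bath occupation at the qubit frequency
    return [np.sqrt(gamma * (nbar + 1)) * sm,   # thermal + spontaneous decay
            np.sqrt(gamma * nbar) * sm.dag()]   # thermal excitation

c_ops = thermal_ops(sm1, T_hot) + thermal_ops(sm2, T_cold)

rho_ss = steadystate(H, c_ops)
print(f"steady-state concurrence: {concurrence(rho_ss):.3f}")

Whether the concurrence comes out nonzero depends on the details; working out which interactions, couplings and temperatures make the steady state genuinely entangled is exactly what the paper does.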

Because the machine is so simple, it looks promising to realise experimentally as well. In the paper we suggest implementations using either superconducting flux qubits or a double quantum dot. Both of these are systems which have already been studied a lot, and where a high degree of control has been achieved. So we hope some of our experimentalist colleagues will be interested in giving it a try. Anyone?

Published paper: https://iopscience.iop.org/article/10.1088/1367-2630/17/11/113029