*Popular summary by PhD student Mathias Rønnow Jørgensen.*

About a week ago our new paper “Optimal quantum thermometry with coarse-grained measurements” appeared in PRX Quantum. A preprint is available on the arXiv.

This work is about taking the temperature of a quantum system. Ordinary thermometers, like the ones used to check if you have a fever, work well for large systems with a temperature which is not too cold and not too hot. When measuring temperatures at the quantum scale, however, we need to design thermometers which can operate at very low temperatures and which won’t disturb the system too much. An intriguing possibility is to use individual quantum probes as thermometers. For example, we could take a single atom, allow the atom to interact with the thermal system for some time, and then measure the energy of the atom. The outcome of the measurement reveals something about the system temperature.

In our paper we investigate how to design optimal thermometers, that is, thermometers with maximal precision, given that the available measurements themselves have limited precision. It is, in fact, well known that the measurements which give the most information about temperature are precise measurements of the total energy of the full system. Energy in quantum systems is quantised (hence the name) and the best measurement should distinguish all the distinct energy levels of the system. However, this becomes extremely difficult in even moderately sized systems, as the number of levels grows rapidly and they become closely spaced. Realistic energy measurements in such systems always involve some coarse graining over the individual energy levels. Using tools from signal processing theory, we derive an equation describing the structure of optimal coarse-grained measurements. Surprisingly, we find that good temperature estimates can generally be attained even when the number of distinct measurement outcomes is small – that is, for very coarse-grained measurements.

We apply our results to many-body quantum systems and nonequilibrium thermometry. For the latter, in particular, we consider a probe of given dimension interacting with the sample, followed by a measurement of the probe. We derive an upper bound on arbitrary, nonequilibrium strategies for such probe-based thermometry and illustrate it for thermometry on an ultracold gas (specifically, a Bose-Einstein condensate) using a single atom as a probe. We find that even for a probe with just two energy levels, the coarse-graining constraint still allows approximately 64% of the best possible thermometric precision.

**Published paper:** https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.2.020322

On the “first day of Christmas” (or at least the Christmas month), we had a paper out in Physical Review Research: https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.2.043302. I didn’t have time to write about it when it was first posted on the arXiv, but at least I get around to it now… It is about how to quantify the efficiency of thermal machines which perform more than one useful task at the same time.

Useful tasks, such as generating electric power, cooling stuff down in a refrigerator, or heating stuff in an oven, are all performed by machines which exchange energy with some energetic reservoirs. Classical thermodynamics places strict bounds on the efficiency of machines which consume heat to either produce work, cool, or heat. However, more complex machines which perform multiple tasks at once are also possible. In this work, we introduce efficiencies applicable to such hybrid machines and establish their thermodynamic limits.

A hybrid machine might, for example, simultaneously cool by removing heat from a cold reservoir while also producing work using heat from a hot reservoir – that is, simultaneously be a fridge and a power station. More generally, we use a very broad approach where the machines are allowed to exchange not only energy with the reservoirs, but also other conserved quantities such as particles, and to perform any number of tasks involving these quantities. This enables complex situations where one conserved quantity can be exchanged for another to favour one or the other task. The fundamental thermodynamic limits on performance are then governed by a generalised variant of the 2nd law of thermodynamics (that’s the one which says that disorder can never decrease and which gives us the arrow of time). Despite the complexity, by starting from the 2nd law, we are able to give simple expressions for the overall efficiency of general hybrid machines, as well as the efficiency of each individual task performed by the machines.

We also study the possibility to build hybrid machines in practice. We show that a minimal machine, with two conserved quantities and up to three useful tasks, can be implemented in tiny electronic circuits coupled to quantum dots – artificial, atom-like structures embedded in the electronics, where the energy is quantised. The device uses energy and particles (electrons) as the conserved quantities, and can cool, heat, and produce electrical work. It should be possible to realise such a setup with current technology, so our results can be tested in experiment. They also provide new insight into thermodynamics – in particular in the quantum regime – and could potentially be used to guide the design of new kinds of thermal machines.

Another nice thing about this work is that it resulted from a very international collaboration with colleagues at the Austrian Academy of Sciences, the Universidad Autónoma in Madrid, Spain, ETH Zürich and the University of Geneva in Switzerland, and the University of Lund in Sweden. Various subsets of us met and discussed these ideas in various workshops and conferences and scientific visits. But I am not sure we were ever all in one place at the same time. Were we? The internet is a wondrous thing…

**Published paper:** https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.2.043302

On Tuesday, we had a paper out on the arXiv: https://arxiv.org/abs/2001.04096. This one is about ‘taking the temperature’ of quantum systems. In particular, we ask how precisely it is possible to measure the temperature, when the temperature is very low and the thermometer is imperfect.

Everyday thermometers – like the ones you use to check if your kids have a fever or whether your beard is likely to grow icicles on your way to work – have a precision of about 0.1°C. At everyday temperatures, that is. If things get much warmer or much colder, these thermometers tend to be less precise. As you might imagine, a medical thermometer is no good for measuring inside a 1000 °C melting furnace (even if it wouldn’t melt itself…) or at cryogenic temperatures in a vacuum chamber, as used in many physics experiments. Of course, at these extreme temperatures, no-one sane would use a regular medical thermometer. Instead, specialised instruments adapted to the temperature range are developed. Any sensor – like a thermometer – is only good within a certain range, and one should clearly use one designed for the range one is interested in. But even so, there may be restrictions on how good the precision can be. Maybe it is just fundamentally harder to measure at very cold or very hot temperatures?

In this paper, we look at such fundamental restrictions when the temperature tends to absolute zero (that is, as cold as it gets). How hard is it to measure cold temperatures?

One answer, which was already well known previously, is that for physical systems that are quantum, it is very hard. In the following sense. In quantum systems, the energy is – well, quantised. That is, different configurations of the system have energies that differ by a nonzero amount. In particular, it requires some amount of energy to take the system from the configuration with the least energy (usually called the ground state) to the configuration with the second lowest energy (usually called the first excited state). There is a gap in energy between these two states, and we say that the system is gapped. When the temperature is so low that the corresponding thermal energy becomes smaller than this gap, then the precision of any temperature measurement starts to become worse very quickly. In fact, the precision degrades exponentially as the temperature decreases.
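To see this exponential suppression concretely, here is a toy calculation (my own illustration, not taken from the paper) for the simplest gapped system: two levels separated by a gap Δ, in units where Boltzmann’s constant is 1. A thermometer that reads out the excited-state population is only as good as the population’s response to a temperature change, |dp/dT|, and well below the gap that response is killed by a factor exp(−Δ/T):

```python
import math

def excited_population(gap, T):
    """Thermal (Gibbs) population of the upper level of a two-level system."""
    return 1.0 / (1.0 + math.exp(gap / T))

def temperature_sensitivity(gap, T):
    """|dp/dT|: how strongly the measurable population responds to temperature.
    For T << gap this is suppressed as exp(-gap/T), which is what makes
    low-temperature thermometry of gapped systems so hard."""
    p = excited_population(gap, T)
    return (gap / T**2) * p * (1.0 - p)

gap = 1.0
for T in [1.0, 0.5, 0.2, 0.1]:
    print(T, temperature_sensitivity(gap, T))
```

The printed sensitivities peak around T comparable to the gap and then collapse as T drops below it – already at T = Δ/10 the response is two orders of magnitude down from its peak.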

This limits how well any thermometer can do, when there are no other constraints. However, the gap is not always the relevant energy scale. In many cases, the fundamental energy levels of the system cannot be resolved by measurements accessible in practice. For example, they grow closer and closer as the system becomes bigger, so for large systems, we may not have access to any measurement that can distinguish them. The size of our thermometer also limits the kind of energy resolution it has. In these situations, the relevant limit on precision is not provided by the gap size. Nevertheless, the temperature may still be cold relative to other energy scales of interest. And we may well ask how the precision then behaves with temperature.

This is the question we answer in this paper. In previous work, together with Patrick Potts and Nicolas Brunner, we developed a mathematical framework able to deal with temperature measurements that have a finite energy resolution. However, at that time we were not able to determine what the ultimate precision at low temperatures would be. In the new paper, identifying a new criterion for finite resolution, and extending our previous framework, Mathias Rønnow Jørgensen was able to derive a tight bound on how the best precision behaves with temperature. Quite surprisingly, the precision can actually get better with smaller temperature, for systems with an energy spectrum that is just right.

**Published paper:** https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033394

With my sister Josefine Bohr Brask, we have a new paper out on the arXiv today: https://arxiv.org/abs/1908.05923

It deals with the question of how evolution can lead to cooperation. According to Darwin, individuals tend to behave in ways that maximise their own gain and chance of survival – ‘survival of the fittest’. Cooperative behaviour seems to contradict this, if we understand cooperation to mean that individuals help others without directly getting anything out of it. But we do see cooperation in nature, in many different species, from insects to humans. This is a major puzzle for science. In fact, explaining the evolution of cooperation has been called one of the biggest challenges facing scientists today.

This topic doesn’t have a lot to do with my usual business in quantum physics, but it is a very exciting research field in which Josefine is active, and she got me onboard for this project :). In the paper, we study certain models for evolution of cooperation based on combining game theory and social networks.

Game theory provides a simplified but promising approach to understanding the evolution of cooperation. In evolutionary game theory, the tension between selfishly focusing on one’s own gain or working for the common good is captured by simple games between two players. A famous example is the so-called Prisoner’s Dilemma:

Two prisoners are facing jail time, but their sentence depends on whether they are each willing to rat out the other. They are offered the following bargain:

- If they both stay silent, they each get 1 year in jail.
- If only one of them tells on the other, the one who tells goes free and the other gets 3 years.
- If they both tell on each other, they both get 2 years.

Seen from the perspective of one prisoner, it is always better to rat the other out. If the other doesn’t say anything, you go free, and if the other tells on you, you get 2 years instead of 3 by also telling on them. So selfish prisoners would end up both telling and so getting 2 years each. But that is worse than if they had cooperated! If they both stay silent, each only gets 1 year.
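The reasoning above can be checked mechanically. A minimal sketch in Python (the numbers are simply the jail terms from the story; lower is better):

```python
# Years in jail (lower is better), indexed by (my_move, their_move),
# where 'C' = stay silent (cooperate) and 'D' = tell (defect).
YEARS = {('C', 'C'): 1, ('C', 'D'): 3, ('D', 'C'): 0, ('D', 'D'): 2}

def best_response(their_move):
    """My move that minimises my own jail time, given the opponent's move."""
    return min('CD', key=lambda my: YEARS[(my, their_move)])

# Telling is the better choice no matter what the other player does...
assert best_response('C') == 'D' and best_response('D') == 'D'
# ...yet mutual defection (2 years each) is worse than mutual silence (1 year).
assert YEARS[('D', 'D')] > YEARS[('C', 'C')]
print("dilemma confirmed")
```

This is exactly what makes it a dilemma: defection is the individually rational choice, yet both players would be better off cooperating.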

What does that have to do with evolution?

We can use the game to model evolution in the following way. Imagine a large population of individuals. Each individual has a fixed strategy – either always cooperate (i.e. stay silent) or always defect (i.e. rat the opponent out). We let the individuals in the population play against each other and register how many games they win or lose. Then we allow them to adapt their strategies, for example by copying the strategy of more successful individuals. And then we do it again. And again. After many rounds, one strategy may start to dominate in the population. For example, the cooperators die out and only defectors are left. By varying the parameters and rules of the game, we can try to figure out under what conditions cooperation can survive and spread.

It turns out that if everyone in the population just plays against everyone else, cooperation doesn’t stand a very good chance. The same is true if individuals are just randomly paired up in every round. When there is no structure in the population, cooperation generally cannot survive. Something more is needed.

A ‘something more’ which can make cooperation survive is social network structure. In nature – and in human contexts – an individual does not usually interact with every other individual in the population, but mostly with a particular bunch of individuals. Each of these is in turn connected to certain other individuals, and so on. Just like you are connected to your friends on Facebook (or in the real world), and they each have their own circles of friends. Evolution on a network can be modelled by having each individual play only against its neighbours in the network and adapt its strategy based on the strategies and performance of its neighbours.
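As a toy illustration (not the specific model or networks studied in our paper), here is a minimal network version of these dynamics. It uses conventional Prisoner’s Dilemma payoffs (temptation 5, reward 3, punishment 1, sucker’s payoff 0; higher is better, unlike the jail years above) and a simple imitate-a-better-neighbour update rule, on a ring network:

```python
import random

# Prisoner's Dilemma payoffs from the row player's perspective:
# (my strategy, neighbour's strategy) -> payoff (higher is better).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def evolve(neighbours, strategies, rounds, rng):
    """Imitation dynamics on a network: each individual plays all its
    neighbours, then copies a randomly chosen neighbour's strategy if
    that neighbour scored higher this round."""
    for _ in range(rounds):
        score = {i: sum(PAYOFF[(s, strategies[j])] for j in neighbours[i])
                 for i, s in strategies.items()}
        new = dict(strategies)
        for i in strategies:
            j = rng.choice(neighbours[i])
            if score[j] > score[i]:
                new[i] = strategies[j]
        strategies = new
    return strategies

# Toy example: a ring of 20 individuals, half cooperators, half defectors.
rng = random.Random(1)
n = 20
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
strategies = {i: 'C' if i < n // 2 else 'D' for i in range(n)}
final = evolve(neighbours, strategies, rounds=50, rng=rng)
print(sum(s == 'C' for s in final.values()) / n)  # fraction of cooperators left
```

Swapping in different network structures (and different initial placements of cooperators, as in our paper) is then just a matter of changing the `neighbours` dictionary and the starting `strategies`.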

A number of studies have found that social network structure can in fact stabilise cooperation, even for games with a strong incentive for selfishness, such as the Prisoner’s Dilemma. So network structure is a strong candidate for explaining how it is possible for cooperation to evolve.

Of course, the actual dynamics of interactions, adaptation, and survival in nature are very much more complicated than simple two-player games with just two strategies. However, simple models can also be powerful exactly because they potentially allow us to cut through the noise and identify the key underlying mechanisms. But one must be careful to check how general the conclusions drawn from them really are. Evolutionary game theory models for cooperation are often studied using numerical simulations as they are not easily solved analytically. In that case, one needs to be sure that the technical details of the simulations do not affect the general conclusions (about whether cooperation survives) too much.

In our paper, we study the effect of initially placing cooperators and defectors in different types of positions in the network. Most simulations start by distributing equal numbers of cooperators and defectors in the network at random. We wanted to know whether changing this initial distribution affects the outcome, and how. For example, if we initially place cooperators in positions with many neighbours and defectors in positions with few neighbours, that might give the cooperators an advantage (more neighbours might copy their strategy, thus becoming cooperators too).

We find that in certain cases, the conclusions about the evolution of cooperation are robust. But for some commonly studied kinds of networks, correlating the initial positions of cooperators with the number of neighbours strongly affects whether cooperation survives or dies out. So one does need to be careful!

Continuing my quest to catch up on explaining our research results, here is another paper we put out back in April, in collaboration with colleagues in Geneva and Brussels: https://arxiv.org/abs/1904.04819

Like the one I wrote about last week, it is about how to use quantum physics to generate strong random numbers. In that work, we put a general bound on how much randomness one could possibly generate in any setup where the measurements are not trusted. Here, we go the other way and present a specific scheme which generates randomness with untrusted devices – and we implement it experimentally.

Quoting from my last post, random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ could know about the devices used to make the numbers. The less we need to assume, the less risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness, unless we make assumptions about how much an observer knows. It is one of the big surprises in quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcome of certain measurements even when you know all that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of different assumptions about the devices used. On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum randomness generators operate in this regime. On the other end of the scale, essentially nothing is assumed about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality. This is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates that have been achieved so far for device-independent generation of random numbers are relatively low (some bits per minute).

In between the two extremes, there is plenty of room to explore – to look for a good set of assumptions which gives a strong guarantee on the randomness but still allows for reasonable rates to be realised in practice.

One would like the assumptions to be well justified physically. This means that ideally, it should be something that one can check by measuring. A nice route towards this goal was pointed out by Thomas van Himbeeck and co-workers (https://arxiv.org/abs/1612.06828). They considered prepare-and-measure setups with two devices. One prepares quantum states, the other measures them. They showed that when the measurement device is untrusted, one can still certify the quantum behaviour of the experiment just from the observed data, provided that the energy of the prepared states is bounded.

The energy can be measured, and so it is possible to check whether this assumption holds in a given experiment. In our experimental implementation, the prepared states correspond to pulses of laser light with different intensities, and they are measured by a detector which just distinguishes between the presence or absence of photons (single quanta of light). This way, we can generate millions of random bits per second, with a very strong guarantee on how unpredictable they are. A user can verify in real time that the setup works correctly, based on the detector output and a bound on the energy in the laser pulses, which can also be justified directly from measurements.

Compared with earlier works (by myself and others), we’ve made the assumptions required to guarantee randomness much easier to justify, without losing very much on the rate. So, we’ve improved the trade-off between trust in the devices (how strong the randomness is) and the random bit rate (how much randomness we get per time).

**Published paper:** https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062338

I am way way behind on writing human-readable summaries of our new research results. Somehow, I’ve been continuously super busy after moving back to Denmark… I’ll try to start catching up though.

As a start, here is one which was published recently: https://doi.org/10.1103/PhysRevA.99.052338. It was done in collaboration with Marie Ioannou and Nicolas Brunner and is about how much randomness can be generated from a quantum black box. The open-access arXiv preprint is here: https://arxiv.org/abs/1811.02313 .

Random numbers are crucial for secure digital communication. They are needed for cryptography, which e.g. keeps your credit card details safe online. And they are also used in computer simulations of complicated processes (for predicting the weather, for example), and in games and gambling. But good random numbers are not that easy to create.

For security applications, “good” means unpredictable – no spy should be able to predict them in advance (and since we don’t know who might try to spy on us, that means no-one at all).

Something may seem random to you but perfectly predictable to someone else. Say I’m a magician and I practised coin flipping a lot. When I flip a coin, by giving it just the right spin I can make it land on heads or tails as I wish. To you the flip looks random, but to me the outcome is completely predictable. What we want is a guarantee that the numbers we generate are random to anyone – we want to be sure that no magician could be playing tricks on us.

Ideally, we would like to assume as little as possible about what this ‘anyone’ could know about the devices used to make the numbers. The less we need to assume, the less risk that any of our assumptions turn out to be wrong, and so the stronger our guarantee on the randomness.

In a classical world, knowing everything there is to know about a system at some point in time in principle allows predicting everything that will happen at all later times. The classical world is deterministic, and there is no randomness, unless we make assumptions about how much an observer knows. Not so in the quantum world. It is one of the profound implications of quantum physics that there is fundamental randomness in nature. In quantum mechanics it is impossible to predict the outcome of certain measurements even when you know all that can possibly be known about the devices used.

In fact, quantum physics allows us to guarantee randomness under a range of assumptions of different strengths about the devices used.

On one end of the scale, the measurements made by the devices are assumed to be known, and they are chosen such that their outcomes are unpredictable. In this case, the devices need to be well characterised, but they are relatively easy to implement and random numbers can be generated at very high rates (millions of bits per second). Commercial quantum randomness generators operate in this regime.

On the other end of the scale, essentially nothing is assumed to be known about what the devices are doing. Randomness can be guaranteed just by looking at the statistics of the data the devices generate. This regime is known as ‘device-independent’, and offers an extremely secure form of randomness. However, it requires that the data violates a so-called Bell inequality. This is technologically very challenging to do without filtering the data in some way that might compromise the randomness. For this reason, the rates that have been achieved so far for device-independent generation of random numbers are relatively low (some bits per minute).

In between these two extremes, there is a wealth of different possibilities for trade-offs between how much is assumed about the devices, how fast randomness can be generated, and how strong guarantees can be made about it. In particular, many proposals have explored settings where the measurement device is uncharacterised but something may be known about the quantum states being measured on.

In this work, we derive a universal upper bound on how much randomness can be generated in any such scenario. It turns out to be harder to generate randomness in this manner than one might first think.

In particular, one might intuitively think that the randomness can always be increased by increasing the number of possible measurement outcomes. If I throw a coin, there are two outcomes (heads or tails). So the probability to guess the outcome correctly is one out of two. In this case, one bit of randomness is generated for each throw. If instead I throw a die, there are six possible outcomes and the probability to guess is now only one out of six. The outcome is more unpredictable, so there is more than one bit of randomness generated. One could keep increasing the number of outcomes and it seems that the randomness would also keep increasing.
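The counting in the coin and die examples is just the so-called min-entropy: if the best possible guess of the outcome succeeds with probability p, each round certifies −log₂(p) random bits. A quick sketch:

```python
import math

def min_entropy_bits(p_guess):
    """Random bits certified per round when the best guessing
    probability is p_guess: H_min = -log2(p_guess)."""
    return -math.log2(p_guess)

print(min_entropy_bits(1 / 2))  # fair coin: 1.0 bit per flip
print(min_entropy_bits(1 / 6))  # fair die: about 2.585 bits per throw
```

Pushing the guessing probability down (more outcomes, all equally unpredictable) pushes the bits per round up – which is exactly the intuition that our bound shows breaks down when the measurement device is uncharacterised.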

For devices that are completely trusted, this is indeed the case. However, if the measurement device is uncharacterised, it turns out to be wrong. The amount of randomness which can be guaranteed is limited not just by the number of outputs, but also by the number of quantum states which are measured on – i.e. the number of inputs. Thus, no matter how ingenious a scheme one may come up with, for a fixed number of inputs, the randomness that can be generated per round is limited. In fact, the number of inputs required grows very fast with the number of desired random bits (more precisely, exponentially fast).

This means that while generating many random bits per round is still possible theoretically, it would probably not be practical because of the large number of inputs required. Instead, to get high rates, one can focus on identifying experimentally friendly schemes with relatively few inputs which allow a high repetition rate (many rounds per second). For few inputs, we show that our bound can be reached – that is, there exist schemes which achieve the maximum possible randomness per round. This is probably true for any number of inputs.

**Published paper:** https://doi.org/10.1103/PhysRevA.99.052338

Yesterday, our paper on addition of quantum master equation generators was finally published in PRA. It has been underway for what feels like a loooong time (look at the received and published dates!) https://doi.org/10.1103/PhysRevA.97.062124.

When we first put this paper online, I apparently didn’t get around to writing a summary, so I’ll do one here:

In the paper, we investigate a somewhat technical question, but the context is not hard to understand.

Imagine you have a cold beer on a warm summer’s day. Clearly, if you leave your beer out in the sun, it’s going to warm up. This is because the beer is not isolated from the environment. Sunlight is hitting it and the warm air is touching it, giving off some heat to the beer. We say that the beer is an open system – it is interacting with its environment.

Now imagine that you want to describe how the beer evolves over time. How warm will it be after 10 minutes? How long does it take before it gets lukewarm and icky? In principle, to figure this out, you should track the trajectory of every air molecule hitting the bottle, to calculate exactly how much energy it gave to the beer, how many photons of sunlight were absorbed or reflected, and so on. But that is not very practical! The environment is huge, and keeping track of all its parts is next to impossible. And we are only really interested in what happens to the beer anyway.

Fortunately, if we are just interested in the beer, it is usually enough to account for the average effect of the environment. Instead of describing how every molecule or photon is absorbed or reflected, we can just look at average rates. How many photons arrive per second on average, for example. And that will be enough to tell us how the beer is warming up. This gives a huge simplification of the calculations.

In quantum physics, we often deal with open systems. We may try to isolate our atoms, ions, or superconducting circuits as much as possible, but there will always be some contact to the environment. Sometimes, we may even want that, for example in quantum thermal machines. So we usually resort to averaging over the environment to get an effective description of how the system of interest evolves. In particular, we often use something called a quantum master equation.

The quantum master equation is a nice mathematical tool which allows us to find the time evolution of a quantum system in contact with a given environment. For every environment, we find the master equation, and then use that to figure out what happens to our quantum system.

The question which we investigate in the paper is this: If the system is interacting with several environments at the same time, and we know the master equation for each of them, can we then just add them up to get the effect of the total environment? For example, the beer is heating up both because of the warm air, and because of the sun shining on it. If we know the rate of heating by the air and the rate of heating by the sunlight, can we then just add them to figure out how fast the beer is really heating?

Adding is easy, so calculations are much easier if the answer is yes. For the warming beer, this is indeed the case. However, for quantum systems, things are a bit more complicated. Sometimes adding is fine; at other times it results in evolutions that are not correct, or even in equations that do not correspond to any possible physical evolution. In our paper, we establish conditions for when adding is allowed, and when it gives incorrect or non-physical results.
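To make the ‘just add them up’ step concrete, here is a small numerical sketch (my own illustration; the choice of baths and the rates are arbitrary, and this is not the analysis from the paper). It builds the Lindblad dissipators for a qubit coupled to a decay bath and to a dephasing bath as superoperator matrices, using the column-stacking convention for the density matrix, and then simply adds them:

```python
import numpy as np

def dissipator(c):
    """Lindblad dissipator D[c] as a superoperator matrix acting on the
    column-stacked (vectorised) density matrix:
    D[c] rho = c rho c^dag - 1/2 {c^dag c, rho}."""
    d = c.shape[0]
    I = np.eye(d)
    cdc = c.conj().T @ c
    return (np.kron(c.conj(), c)
            - 0.5 * np.kron(I, cdc)
            - 0.5 * np.kron(cdc.T, I))

# Two environments for a single qubit: energy decay and pure dephasing.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

L_decay = 1.0 * dissipator(sm)     # decay bath, rate 1.0 (arbitrary)
L_dephase = 0.5 * dissipator(sz)   # dephasing bath, rate 0.5 (arbitrary)
L_total = L_decay + L_dephase      # the 'just add them up' step

# Sanity check: each Lindblad generator preserves the trace of rho,
# and so does the sum (tr(L_total rho) = 0 for any state rho).
rho = np.array([[0.3, 0.2], [0.2, 0.7]], dtype=complex)
drho = (L_total @ rho.flatten(order='F')).reshape(2, 2, order='F')
print(abs(np.trace(drho)))  # ~0
```

In this weak-coupling textbook example the sum is again a valid generator; the point of the paper is precisely that this step is not always justified once the generators come from averaging over environments that are not independent in the right sense.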

**Published paper:** https://doi.org/10.1103/PhysRevA.97.062124

Some two weeks ago, we had a new paper out on the arXiv, which I haven’t had the time to write about until now: https://arxiv.org/abs/1710.11624. It has been under way for quite a while, but now we finally managed to put it all on (virtual) paper and get it out there.

This work contributes to building a consistent picture of thermodynamics at the quantum scale.

Thermodynamics explains how machines like steam engines and fridges work. It describes how heat can be moved around or transformed into other useful forms of energy, such as motion in a locomotive. And it tells us the fundamental limits on how well any such machine can perform, no matter how clever and intricate. In turn, the study of ideal machines has taught us fundamental things about nature, such as the second law of thermodynamics, which says that the entropy of an isolated system can never decrease (often thought of as stating that the ‘mess’ of the universe can only get bigger over time). Or the third law, which says that cooling to absolute zero requires infinite resources.

Traditionally, thermodynamics deals with big systems (think, steam locomotive), whose components are well described by classical physics. However, if we look at smaller and smaller scales, eventually we need quantum physics to describe these components, and quantum phenomena start to become important. What does thermodynamics look like at this quantum scale? Do the well-known laws still hold? Can we make sense of such microscopic thermal processes? These are interesting theoretical questions. And by now, experimental techniques are getting so advanced that we can actually begin to build something like steam engines and fridges on the nanoscale. So they are starting to be relevant in practice as well.

A lot is already known about quantum thermodynamics, including generalisations of the Second and Third Laws and many results about the behaviour of thermal machines. However, it is fair to say that it is still work in progress – we do not yet have a full, coherent picture of quantum thermodynamics. Different approaches have been developed and it is not always clear how they fit together.

One point where classical and quantum thermodynamics differ is on how much it ‘costs’ to have control over a system, in terms of work energy (think of work as energy in an ordered, useful form as opposed to disordered heat energy). In the classical world the work cost of control can usually be neglected. Not so in the quantum world. There the cost of control can be a significant part of the cost of operating a machine.

In our paper, we add a piece towards completing the puzzle of quantum thermodynamics by studying the role of control for cooling. By looking at small fridges with more or less available control, we are able to compare different paradigms that have been developed in the field, examining how much one can cool under each of them, and how much it costs.

**Published paper:**

This paper eventually got split into two parts: one focusing on a universal limit to cooling quantum systems, and one on the work cost of cooling.

The first paper was published in PRL here: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.170605

The second was published in PRE here: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.100.042130

On Monday last week, we had a new paper out on the arXiv: https://arxiv.org/abs/1710.02621

In it, we consider a quantum thermal machine for generating entanglement. Entanglement is a form of quantum correlation that is essential in quantum information protocols. It can be used for ultra-precise measurements, for example of magnetic fields, and for quantum computing, among many other applications. The machine uses only differences in temperatures and interactions that do not require any active control, and so entanglement can be obtained just by turning on the machine and waiting, which is neat.

What I find particularly nice is that the paper is a good example of scientific collaboration across the globe. The new scheme improves upon a design for thermal entanglement generation which we developed with colleagues here in Geneva and in Barcelona. I wrote about that work here (have a look for a summary of what makes thermal entanglement generation exciting). Recently, colleagues in Qufu, China, who had been working on similar setups, realised that the entanglement generation could be improved by including an additional thermal bath. Zhong-Xiao Man then contacted me, and with Armin Tavakoli we confirmed their results and thought about how the new scheme could be realised in practice. After just a few emails back and forth, the paper came together. The bulk of the work was done by Zhong-Xiao and his colleagues, but Armin and I also made a significant contribution, and in the end, I think the paper is much better than what any of us might have done alone.

So thanks to Zhong-Xiao for bringing us on board. And I am happy to do research in the age of the internet, which has made these kinds of interactions much, much easier, faster, and likelier to happen :).

**Published paper:** https://iopscience.iop.org/article/10.1088/1402-4896/ab0c51

A few weeks back, we had a paper out on the arXiv, which I haven’t had time to write about yet: https://arxiv.org/abs/1707.09211

The topic of the paper is quantum master equations – a somewhat technical subject, but very important for much of the other physics we study, especially small thermal machines, like the ones I have written about here and here.

When we try to describe a thermal machine, we are faced with a problem. The machine necessarily interacts with some thermal reservoirs. These are large, messy systems with many, many particles. In fact, this is true more generally. Any small quantum system interacts with the surrounding environment in some way. We may do our best to isolate it (and experimentalists typically do a good job!), but some weak interaction will always be present. The environment is big and complicated, and it is extremely cumbersome, if not impossible, to describe in detail what is going on with all the individual particles there. It would make our lives miserable if we had to try…

This is where quantum master equations come in. Instead of describing the environment in detail, one can account for the average effect it has on the system. The noiseless behaviour that an isolated system would follow is modified to include noise introduced by the environment. There are various techniques for doing so. The quantum master equation approach is one of the most important and widespread.
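For the curious, the most common such equation is the Lindblad (GKLS) master equation, which schematically reads:

```latex
% Unitary part (noiseless evolution) + dissipative part (environment noise)
\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \Big( L_k \rho L_k^\dagger
      - \tfrac{1}{2}\{ L_k^\dagger L_k,\, \rho \} \Big)
```

The commutator term is the evolution an isolated system would follow on its own; the jump operators $L_k$ and rates $\gamma_k$ encode the average effect of the environment, without ever describing its individual particles.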

They give us a powerful computational tool, and we rely on them a lot when we try to understand what is going on, for example in quantum thermal machines. They have been around for more than half a century, but there are still aspects which are not completely understood. Since a master equation accounts for the effects of a large, complicated environment which is not explicitly described, deriving one always involves some approximations. And it can sometimes be unclear when these approximations are reliable.

In our paper, we address one such ambiguity which is particularly relevant for studying small quantum thermal machines, or more generally, energy transport in a small quantum system (this might be relevant e.g. in photosynthesis, where light energy is transported through molecules).

Imagine that the quantum system consists of two particles. Imagine that each particle is in contact with a separate environment, and that the particles also interact with each other. Now one could derive a quantum master equation for the system in two different ways. One could either first account for the noise introduced by the environments on each particle separately, and then account for the interaction between them. Or one could first account for the interaction between the particles, and then find the noise induced by the environments on this composite system. This leads to two different master equations, often referred to as ‘local’ and ‘global’, because in the former case, noise acts locally on each particle, while in the latter it acts on both particles in a collective manner.
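Schematically, and with generic dissipators standing in for whatever microscopic model one uses (these are illustrative placeholders, not the specific equations from the paper), the two options look like:

```latex
% Local: dissipators derived for each particle on its own,
% ignoring the interaction H_{12} during the derivation
\dot{\rho} = -i[H_1 + H_2 + H_{12},\,\rho]
  + \mathcal{D}_1^{\text{loc}}(\rho) + \mathcal{D}_2^{\text{loc}}(\rho)

% Global: a single dissipator derived in the eigenbasis of the
% full Hamiltonian H = H_1 + H_2 + H_{12}, acting collectively
\dot{\rho} = -i[H,\rho] + \mathcal{D}^{\text{glob}}(\rho)
```

In the local case each $\mathcal{D}_i^{\text{loc}}$ acts only on its own particle; in the global case the dissipator connects eigenstates of the interacting pair, so the noise acts on both particles at once.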

There has been quite a bit of discussion in the community on whether the local or global approach is appropriate for describing certain thermal machines, and even results showing that employing a master equation in the wrong regime can lead to violations of fundamental physical principles such as the second law of thermodynamics. In our paper, we compare the two approaches against an exactly solvable model (that is, one where the environment can be treated in detail) and study rigorously when one or the other approach holds. We find what could be intuitively expected: when the interaction between the system particles is weak, the local approach is valid and the global one fails. On the other hand, when the inter-system interaction is strong, the two particles should be treated as a single system, and the global approach is the valid one. For intermediate couplings, both approaches approximate the true evolution well.

This is reassuring, and provides a solid foundation for our (and others’) studies of small quantum thermal machines and other open quantum systems.

**Published paper:** https://iopscience.iop.org/article/10.1088/1367-2630/aa964f