Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles,” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. Many experiments, however, have tried to detect the elusive WIMPs, and one after the other has reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by quantum chromodynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling of the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.
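For the record, the interaction term in question has the standard textbook form (here g is the strong coupling, G the gluon field strength tensor, and G̃ its dual; this is general QCD lore, not something specific to the paper discussed below):

```latex
\mathcal{L}_{\theta} \;=\; \theta\,\frac{g^2}{32\pi^2}\, G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu}
```

The Peccei–Quinn solution promotes θ to a dynamical field, θ → a(x)/f_a, whose potential minimum sits at zero.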

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion; it’s just a complication often omitted in discussions of mass generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
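To get a feeling for the numbers, here is a minimal back-of-the-envelope sketch in Python. It uses the zero-temperature relation m_a = √χ / f_a and plugs in χ^(1/4) ≈ 75.6 MeV, the lattice value quoted in the literature; the choice f_a = 10^12 GeV is merely an illustrative benchmark, not a measured quantity.

```python
# Zero-temperature estimate of the axion mass from the topological
# susceptibility. chi^(1/4) ~ 75.6 MeV is a lattice input, not derived here.
chi_quarter = 75.6e-3            # chi^(1/4) in GeV
f_a = 1e12                       # axion decay constant in GeV (assumed benchmark)

m_a = chi_quarter**2 / f_a       # m_a = sqrt(chi) / f_a, in GeV
m_a_microeV = m_a * 1e9 * 1e6    # convert GeV -> eV -> micro-eV
print(round(m_a_microeV, 1))     # 5.7 micro-eV for f_a = 10^12 GeV
```

Rescaling is trivial: halving f_a doubles the mass, which is why pinning down the susceptibility pins down the whole one-parameter family.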

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsanyi et al
    Nature 539, 69–71 (2016)

(If you don’t have journal access: a preprint that is not exactly the same paper, but pretty close, is freely available on the arXiv.)

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal for searching for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been searched for and not found, and that could have been the end of the story.
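To see why the long distances matter, here is a rough order-of-magnitude estimate in Python. It assumes a linear energy dependence, Δt ≈ (D/c)(E/E_QG), ignores cosmological expansion, and the inputs (a 10 GeV photon, a source at 1 Gpc, E_QG at the Planck energy) are illustrative choices, not taken from any particular analysis.

```python
# Order-of-magnitude estimate of the accumulated time delay for a
# linear-in-energy modification of the speed of light.
c = 3.0e8          # speed of light, m/s
Gpc = 3.086e25     # one gigaparsec in meters
E_photon = 10.0    # photon energy in GeV (illustrative)
E_QG = 1.22e19     # assumed quantum-gravity scale: the Planck energy, GeV

delta_t = (Gpc / c) * (E_photon / E_QG)
print(delta_t)     # ~ 0.08 s: tiny, but resolvable in gamma ray burst light curves
```

A fractional speed change of one part in 10^18 becomes a delay of a tenth of a second simply because the photon has been traveling for about three billion years.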

Of course it wasn’t, because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly complicated and became more inventive when fitting them to unwilling data. The last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – a sure way to fit any kind of data with straight lines of any slope – plus various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB 160625B, had a total duration of 770 seconds and consisted of three separate sub-bursts, with the second, and largest, sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst, which lasts a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind, and they can therefore be compared only to themselves. This makes it very difficult to tell whether or not highly energetic parts of the emission are systematically delayed, because one doesn’t know when they were emitted. Until now, the analysis relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as little as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB 160625B was so bright that the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.
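The basic idea of extracting a lag via cross-correlation can be sketched in a few lines of Python. The light curves below are mock Gaussian pulses, not Fermi data; the point is only that the peak of the cross-correlation recovers the relative shift between two channels.

```python
import numpy as np

# Mock light curves: a Gaussian pulse in a "low" channel and the same
# pulse arriving 1.5 s earlier in a "high" channel. All numbers are
# made up for illustration; real analyses use binned photon counts.
dt = 0.1                      # time resolution in seconds
t = np.arange(0, 40, dt)      # 40 s of data
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 2.0) ** 2)

low = pulse(20.0)             # reference (lowest-energy) channel
high = pulse(18.5)            # higher-energy channel, peaking 1.5 s earlier

# Cross-correlate; the location of the maximum gives the most likely lag.
cc = np.correlate(high - high.mean(), low - low.mean(), mode="full")
lags = (np.arange(cc.size) - (low.size - 1)) * dt
best_lag = lags[np.argmax(cc)]
print(best_lag)               # ≈ -1.5 s: the high-energy channel leads
```

With 38 channels one repeats this against the lowest-energy channel, which is how the 37 delay-times in the figure below come about.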

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425

For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425

The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
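A toy version of this two-component model makes the turnaround easy to see. The functional form (an intrinsic power-law lag minus a propagation power law) follows the paper’s ansatz, but all parameter values below are invented for illustration.

```python
# Toy two-component lag model: intrinsic astrophysical lag (positive)
# minus a propagation term (the candidate quantum-gravity effect).
a, alpha = 1.0, 0.5      # intrinsic lag a * E^alpha (illustrative values)
b, n = 0.02, 1           # propagation lag b * E^n; n = 1 is the linear case

lag = lambda E: a * E**alpha - b * E**n

# The turnaround sits where the derivative vanishes:
#   E_turn = (a*alpha / (b*n)) ** (1 / (n - alpha))
E_turn = (a * alpha / (b * n)) ** (1.0 / (n - alpha))
print(E_turn)            # ≈ 625 (in the same arbitrary energy units)

# Numerical check: the lag indeed peaks near E_turn.
assert lag(E_turn) > lag(0.5 * E_turn) and lag(E_turn) > lag(2.0 * E_turn)
```

Below the turnaround the intrinsic term dominates and the lag grows with energy; above it the propagation term wins and drags the lag back down, which is the qualitative shape seen in the data.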

The best-fit value of the quantum gravity energy is 10^q GeV, with q = 15.66 for the linear and q = 7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1-sigma confidence level, which is 0.5 × 10^16 GeV for the linear and 1.4 × 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high-energy bins have large error bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy that there’s evidence for space-time foam to be found here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate theory for quantum gravity do not lead to this effect. One can work with phenomenological models in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, such models introduce a preferred frame and break Lorentz invariance, and there are loads of data speaking against that: the existing constraints already force the parameters to values at which the effect on the propagation of light is unmeasurably small. In other words, it’s already ruled out.

It has been claimed that the already existing constraints from Lorentz invariance violation can be circumvented if Lorentz invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see the end of section 3). However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if such an argument can be made (I believe it’s possible; I’m not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum-gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straightforward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.

Sunday, January 08, 2017

Stephen Hawking turns 75. Congratulations! Here’s what to celebrate.

If people know anything about physics, it’s the guy in a wheelchair who speaks with a computer. Google “most famous scientist alive” and the answer is “Stephen Hawking.” But if you ask a physicist, what exactly is he famous for?

Hawking became “officially famous” with his 1988 book “A Brief History of Time.” Among physicists, however, he’s more renowned for the singularity theorems. In his 1960s work together with Roger Penrose, Hawking proved that singularities form under quite general conditions in General Relativity, and they developed a mathematical framework to determine when these conditions are met.

Before Hawking and Penrose’s work, physicists had hoped that the singularities which appeared in certain solutions to General Relativity were mathematical curiosities of little relevance for physical reality. But the two showed that this was not so, that, to the very contrary, it’s hard to avoid singularities in General Relativity.

Since this work, the singularities in General Relativity are understood to signal the breakdown of the theory in regions of high energy-densities. In 1973, together with George Ellis, Hawking published the book “The Large Scale Structure of Space-Time” in which this mathematical treatment is laid out in detail. Still today it’s one of the most relevant references in the field.

Only a year later, in 1974, Hawking published a seminal paper in which he demonstrated that black holes give off thermal radiation, now referred to as “Hawking radiation.” This evaporation of black holes results in the black hole information loss paradox, which is still unsolved today. Hawking’s work demonstrated clearly that the combination of General Relativity with the quantum field theories of the standard model spells trouble. Like the singularity theorems, it’s a result that doesn’t merely indicate, but proves, that we need a theory of quantum gravity in order to consistently describe nature.

While the 1974 paper was predated by Bekenstein’s finding that black holes resemble thermodynamical systems, Hawking’s derivation was the starting point for countless later revelations. Thanks to it, physicists understand today that black holes are a melting pot for many different fields of physics – besides general relativity and quantum field theory, there is thermodynamics and statistical mechanics, and quantum information and quantum gravity. Let’s not forget astrophysics, and also mix in a good dose of philosophy. In 2017, “black hole physics” could be a subdiscipline in its own right – and maybe it should be. We owe much of this to Stephen Hawking.

In the 1980s, Hawking worked with Jim Hartle on the no-boundary proposal according to which our universe started in a time-less state. It’s an appealing idea whose time hasn’t yet come, but I believe this might change within the next decade or so.

After this, Hawking tried several times to solve the riddle of black hole information loss that he himself had posed, most recently in early 2016. While his more recent work has been met with interest in the community, it hasn’t been hugely impactful – it attracts significantly more attention from journalists than from physicists.

As a physicist myself, I frequently get questions about Stephen Hawking: “What’s he doing these days?” – I don’t know. “Have you ever met him?” – He slept right through it. “Do you also work on the stuff that he works on?” – I try to avoid it. “Will he win a Nobel Prize?” – Ah. Good question.

Hawking’s shot at the Nobel Prize is Hawking radiation. The astrophysical black holes which we can presently observe have a temperature way too small to be measured in the foreseeable future. But since the temperature increases with decreasing mass, lighter black holes are hotter and might allow us to measure Hawking radiation.
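To see just how small that temperature is, here is a quick Python evaluation of the standard formula T = ħc³/(8πGMk_B) for a one-solar-mass black hole, using rounded SI constants:

```python
import math

# Hawking temperature of a Schwarzschild black hole,
#   T = hbar * c^3 / (8 * pi * G * M * k_B)
hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(T)            # ≈ 6.2e-8 K, vastly colder than the 2.7 K microwave background
```

Since T scales as 1/M, a black hole would have to be lighter than roughly the Moon before its temperature even exceeds that of the cosmic microwave background.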

Black holes of sufficiently small masses could have formed from density fluctuations in the early universe and are therefore referred to as “primordial black holes.” However, none of them have been seen, and we have tight observational constraints on their existence from a variety of data. It isn’t yet entirely excluded that they are around, but I consider it extremely unlikely that we’ll observe one of these within my lifetime.

As far as the Nobel Prize is concerned, this leaves Hawking radiation in gravitational analogues. In this case, one uses a fluid to mimic a curved space-time background. The mathematical formulation of this system is (in certain approximations) identical to that of an actual black hole, and consequently the gravitational analogues should also emit Hawking radiation. Indeed, Jeff Steinhauer claims that he has measured this radiation.

At the time of writing, it’s still somewhat controversial whether Steinhauer has measured what he thinks he has. But I have little doubt that sooner or later this will be settled – the math is clear: The radiation should be there. It might take some more experimental tinkering, but I’m confident sooner or later it’ll be measured.

Sometimes I hear people complain: “But it’s only an analogy.” I don’t understand this objection. Mathematically it’s the same. That in the one case the background is an actually curved space-time and in the other case it’s an effectively curved space-time created by a flowing fluid doesn’t matter for the calculation. In either situation, measuring the radiation would demonstrate the effect is real.

However, I don’t think that measuring Hawking radiation in an analogue gravity system would be sufficient to convince the Nobel committee that Hawking deserves the prize. For that, the finding would have to have important implications beyond confirming a 40-year-old computation.

One way this could happen, for example, would be if the properties of such condensed matter systems could be exploited as quantum computers. This isn’t as crazy as it sounds. Thanks to work built on Hawking’s 1974 paper we know that black holes are both extremely good at storing information and extremely efficient at distributing it. If that could be exploited in quantum computing based on gravitational analogues, then I think Hawking would be in line for a Nobel. But that’s a big “if.” So don’t bet on it.

Besides his scientific work, Hawking has been, and still is, a master of science communication. In 1988, “A Brief History of Time” was a daring book about abstract ideas in a fringe area of theoretical physics. Hawking, to everybody’s surprise, proved that the public has an interest in esoteric problems like what happens if you fall into a black hole, what happened at the Big Bang, or whether god had any choice when he created the laws of nature.

Since 1988, the popular science landscape has changed dramatically. There are more books about theoretical physics than ever before and they are more widely read than ever before. I believe that Stephen Hawking played a big role in encouraging other scientists to write about their own research for the public. It certainly was an inspiration for me.

So, Happy Birthday, Stephen, and thank you.

Tuesday, January 03, 2017

The Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown.

Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their telescopes at the clusters’ relics and admire their odd shape. They call it the “Bullet Cluster.”

In the below image of the Bullet Cluster you see three types of data overlaid. First, there are the stars and galaxies in the optical regime. (Can you spot the two foreground objects?) Then there are the regions colored red which show the distribution of hot gas, inferred from X-ray measurements. And the blue-colored regions show the space-time curvature, inferred from gravitational lensing which deforms the shape of galaxies behind the cluster.

The Bullet Cluster.
[Img Src: APOD. Credits: NASA]

The Bullet Cluster comes to play an important role in the humanoids’ understanding of the universe. Already a generation earlier, they had noticed that their explanation for the gravitational pull of matter did not match observations. The outer stars of many galaxies, they saw, moved faster than expected, meaning that the gravitational pull was stronger than what their theories could account for. Galaxies which combined in clusters, too, were moving too fast, indicating more pull than expected. The humanoids concluded that their theory, according to which gravity was due to space-time curvature, had to be modified.

Some of them, however, argued it wasn’t gravity they had gotten wrong. They thought there was instead an additional type of unseen, “dark matter,” that was interacting so weakly it wouldn’t have any consequences besides the additional gravitational pull. They even tried to catch the elusive particles, but without success. Experiment after experiment reported null results. Decades passed. And yet, they claimed, the dark matter particles might just be even more weakly interacting. They built larger experiments to catch them.

Dark matter was a convenient invention. It could be distributed in just the right amounts wherever necessary and that way the data of every galaxy and galaxy cluster could be custom-fit. But while dark matter worked well to fit the data, it failed to explain how regular the modification of the gravitational pull seemed to be. On the other hand, a modification of gravity was difficult to work with, especially for handling the dynamics of the early universe, which was much easier to explain with particle dark matter.

To move on, the curious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster.

The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency by which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision in the Bullet Cluster must have taken place at a relative speed of approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

However, a few years later some inventive humanoids had optimized the dark-matter-based computer simulations and arrived at a more optimistic estimate of a probability of 4.6 × 10^-4 for seeing something like the Bullet Cluster. Shortly thereafter, they revised the probability again, to 6.4 × 10^-6.

Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter. It was, in contrast, easy to accommodate in theories of modified gravity, in which collisions with high relative velocity occur much more frequently.

It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop-sci media likes nothing better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason why, in a dynamical system, these fields must be focused at the same place as the normal matter. Indeed, one would expect that modified gravity, too, should have a path dependence that leads to the kind of delocalization observed in this and other cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster, nobody told you how rarely such an event occurs in models with particle dark matter.

No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.

Monday, January 02, 2017

How to use an "argument from authority"

I spent the holidays playing with video animation software. As a side-effect, I produced this little video.

If you'd rather read than listen, here's the complete voiceover:

It has become a popular defense of science deniers to yell “argument from authority” when someone quotes an expert’s opinion. Unfortunately, the accusation is often used incorrectly.

What is an “argument from authority”?

An “argument from authority” is a conclusion drawn not by evaluating the evidence itself, but by evaluating an opinion about that evidence. It is also sometimes called an “appeal to authority”.

Consider Bob. Bob wants to know what follows from A. To find out, he has a bag full of knowledge. The perfect argument would be if Bob starts with A and then uses his knowledge to get to B to C to D and so on until he arrives at Z. But reality is never perfect.

Let’s say Bob wants to know what’s the logarithm of 350,000. In reality he can’t find anything useful in his bag of knowledge to answer that question. So instead he calls his friend, the Pope. The Pope says “The log is 4.8.” So, Bob concludes, the log of 350,000 is 4.8 because the Pope said so.

That’s an argument from authority – and you have good reasons to question its validity.

But unlike other logical fallacies, an argument from authority isn’t necessarily wrong. It’s just that, without further information about the authority that has been consulted, you don’t know how good the argument is.

Suppose Bob hadn’t asked the Pope for the log of 350,000, but instead had asked his calculator. The calculator says it’s approximately 5.544.
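In case you want to double-check the calculator yourself, a one-liner in Python does it (assuming, as the example implies, the base-10 logarithm):

```python
import math

# Bob's question: the base-10 logarithm of 350,000.
x = math.log10(350_000)
print(round(x, 3))   # 5.544
```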

We don’t usually call this an argument from authority. But in terms of knowledge evaluation it’s the same logical structure as exporting an opinion to a trusted friend. It’s just that in this case the authority is your calculator and it’s widely known to be an expert in calculation. Indeed, it’s known to be pretty much infallible.

You believe that your friend the calculator is correct not because you’ve tried to verify every result it comes up with. You believe it’s correct because you trust all the engineers and scientists who have produced it and who also use calculators themselves.

Indeed, most of us would probably trust a calculator more than our own calculations, or that of the Pope. And there is a good reason for that – we have a lot of prior knowledge about whose opinion on this matter is reliable. And that is also relevant knowledge.

Therefore, an argument from authority can be better than an argument lacking authority if you take into account evidence for the authority’s expertise in the subject area.

Logical fallacies were widely used by the Greeks in their philosophical discourse. They were discussing problems like “Can a circle be squared?” But many of today’s problems are of an entirely different kind, and the Greek rules aren’t always helpful.

The problems we face today can be extremely complex, like the question “What’s the origin of climate change?” “Is it a good idea to kill off mosquitoes to eradicate malaria?” or “Is dark matter made of particles?” Most of us simply don’t have all the necessary evidence and knowledge to arrive at a conclusion. We also often don’t have the time to collect the necessary evidence and knowledge.

And when a primary evaluation isn’t possible, the smart thing to do is a secondary evaluation. For this, you don’t try to answer the question itself, but you try to answer the question “Where do I best get an answer to this question?” That is, you ask an authority.

We do this all the time: You see a doctor to have him check out that strange rash. You ask your mother how to stuff the turkey. And when the repairman says your car needs a new crankshaft sensor, you don’t yell “argument from authority.” And you shouldn’t, because you’ve smartly exported your primary evaluation of evidence to a secondary system that, you are quite confident, will evaluate the evidence *better* than you yourself could.

But… the secondary evidence you need is how knowledgeable the authority is on the topic in question. The more trustworthy the authority, the more reliable the information.

This also means that if you reject an argument from authority, you claim that the authority isn’t trustworthy. You can do that. But here’s where things most often go wrong.

The person who doesn’t want to accept the opinion of scientific experts implicitly claims that their own knowledge is more trustworthy. Without explicitly saying so, they claim that science doesn’t work, or that certain experts cannot be trusted – and that they themselves can do better. That is a claim which can be made. But science has an extremely good track record of producing correct conclusions. Claiming that it’s faulty therefore carries a heavy burden of proof.

So, to correctly reject an argument from authority, you have to explain why the authority’s knowledge is not trustworthy on the question under consideration.

But what should you do if someone dismisses scientific findings by claiming an argument from authority?

I think we should have a name for such a mistaken use of the term “argument from authority.” We could call it the fallacy of the “omitted knowledge prior.” This means it’s a mistake not to take into account evidence for the reliability of knowledge, including one’s own knowledge. You, your calculator, and the Pope aren’t equally reliable when it comes to evaluating logarithms. And that counts for something.

Sunday, January 01, 2017

The 2017 Edge Annual Question: Which Scientific Term or Concept Ought To Be More Widely Known?

My first thought when I heard the 2017 Edge Annual Question was “Wasn’t that last year's question?” It wasn’t. But it’s almost identical to the 2011 question, “What scientific concept would improve everybody’s cognitive toolkit.” That’s ok, I guess, the internet has an estimated memory of 2 days, so after 5 years it’s reasonable to assume nobody will remember their improved toolkit.

After that first thought, the reply that came to my mind was “Effective Field Theory,” immediately followed by “But Sean Carroll will cover that.” He didn’t, he went instead for “Bayes's Theorem.” But Lisa Randall went for “Effective Theory.”

I then considered, in that order, “Free Will,” “Emergence,” and “Determinism,” only to discard them again because each of these would have required me to first explain effective field theory. You find “Emergence” explained by Garrett Lisi, while determinism and free will (or its absence, respectively) are taken on by Jerry A. Coyne, whom I don’t know, but whose essay I entirely agree with. My argument would have been almost identical; you can read my blogpost about free will here.

Next I reasoned that this question calls for a broader answer, so I thought of “uncertainty” and then science itself, but decided that had been said often enough. Lawrence Krauss went for uncertainty. You find Scientific Realism represented by Rebecca Newberger Goldstein, and the scientist by Stuart Firestein.

I then briefly considered social and cognitive biases, but was pretty convinced these would be well-represented by people who know more about sociology than me. Then I despaired for a bit over my unoriginality.

Back to my own terrain, I decided the one thing that everybody should know about physics is the principle of least action. The name hides its broader implications though, so I instead went for “Optimization.” A good move, because Janna Levin went for “The Principle of Least Action.”

I haven’t read all essays, but it’ll be a nice way to start the new year by browsing them. Happy New Year everybody!

Sunday, December 25, 2016

Physics is good for your health

Book sandwich
Yes, physics is good for your health. And that’s not only because it’s good to know that peeing on high power lines is a bad idea. It’s also because, if they wheel you to the hospital, physics is your best friend. Without physics, there’d be no X-rays and no magnetic resonance imaging. There’d be no ultrasound and no spectroscopy, no optical fiber imaging and no laser surgery. There wouldn’t even be centrifuges.

But physics is good for your health in another way – as the resort of sanity.

Human society may have entered a post-factual era, but the laws of nature don’t give a shit. Planet Earth is a crazy place, full of crazy people, getting crazier by the minute. But the universe still expands, atoms still decay, electric currents still take the path of least resistance. Electrons don’t care if you believe in them and supernovae don’t want your money. And that’s the beauty of knowledge discovery: It’s always waiting for you. Stupid policy decisions can limit our collective benefit from science, but the individual benefit is up to each of us.

In recent years I’ve found it impossible to escape the “mindfulness” movement. Its followers preach that focusing on the present moment will ease your mental tension. I don’t know about you, but most days focusing on the present moment is the last thing I want. I’ve taken a lot of breaths and most of them were pretty unremarkable – I’d much rather think about something more interesting.

And physics is there for you: Find peace of mind in Hubble images of young nebulae or galaxy clusters billions of light years away. Gauge the importance of human affairs by contemplating the enormous energies released in black hole mergers. Remember how lucky we are that our planet is warmed but not roasted by the Sun, then watch some videos of recent solar eruptions. Reflect on the long history of our own galaxy, seeded by tiny density fluctuations whose imprint we still see today in the cosmic microwave background.

Or stretch your imagination and try to figure out what happens when you fall into a black hole, catch light like Einstein, or meditate over the big questions: Does time exist? Is the future determined? What, if anything, happened before the big bang? And if there are infinitely many copies of you in the multiverse, does that mean you are immortal?

This isn’t to say the here and now doesn’t matter. But if you need to recharge, physics can be a welcome break from human insanity.

And if everything else fails, there’s always the 2nd law of thermodynamics to remind us: All this will pass.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers – people who weren’t discussing physics, but the philosophical underpinnings of physicists’ arguments? Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me – or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach gargle, trying to will my stomach not to gargle and instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound-waves in certain kinds of fluids which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational test of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.
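To give a concrete flavor of that best-known example: sound in a fluid that flows inward faster than its own sound speed cannot travel back upstream, and the surface where the flow speed equals the sound speed is the acoustic analogue of a black-hole horizon. The sketch below is my own toy illustration – the 1/√r flow profile and the units are entirely invented.

```python
# Toy acoustic ("dumb") hole: sound of speed C_SOUND in a radial inflow v(r)
# cannot escape from the region where v(r) > C_SOUND -- the sonic analogue
# of a black-hole horizon. The flow profile below is made up for this sketch.

C_SOUND = 1.0

def flow_speed(r):
    """Hypothetical inflow speed, increasing toward the drain at r = 0."""
    return r ** -0.5  # arbitrary units

def acoustic_horizon(r_lo=0.01, r_hi=10.0, tol=1e-9):
    """Bisect for the radius where the flow speed equals the sound speed."""
    while r_hi - r_lo > tol:
        mid = 0.5 * (r_lo + r_hi)
        if flow_speed(mid) > C_SOUND:
            r_lo = mid  # inside the horizon: flow is supersonic
        else:
            r_hi = mid
    return 0.5 * (r_lo + r_hi)

print(f"sound is trapped inside r = {acoustic_horizon():.4f}")
```

For this made-up profile the horizon sits at r = 1; the point of the analogy is that the equations governing sound near this surface map onto those governing light near a gravitational horizon.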

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t a speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

Next came a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have now, for the third time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes towards physics beyond the standard model. The survey results, however, will only be published in January, so at present I can’t tell you more than that. But once the paper is available, you’ll read about it on this blog.

The next talk was by Radin Dardashti, who warned us ahead of time that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to QFTs).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, ie we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete, which would mean it’s not a consistent theory up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer about recent developments in modelling particle dark matter. In astrophysics – like in particle-physics – the trend is to go away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – be my judge. But the workshop helped me see the reason I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Monday, December 19, 2016

Book Review, “Why Quark Rhymes With Pork” by David Mermin

Why Quark Rhymes with Pork: And Other Scientific Diversions
By N. David Mermin
Cambridge University Press (January 2016)

The content of many non-fiction books can be summarized as “the blurb spread thinly,” but that’s an accusation which cannot be leveled at David Mermin’s new essay collection Why Quark Rhymes With Pork. The best summary I could therefore come up with is “things David Mermin is interested in,” or at least was interested in at some time during the last 30 years.

This isn’t as undescriptive as it seems. Mermin is Horace White Professor of Physics Emeritus at Cornell University, and a well-known US-American condensed matter physicist, active in science communication, famous for his dissatisfaction with the Copenhagen interpretation and an obsession with properly punctuating equations. And that’s also what his essays are about: quantum mechanics, academia, condensed matter physicists, writing in general, and obsessive punctuation in particular. Why Quark Rhymes With Pork collects all of Mermin’s Reference Frame columns published in Physics Today from 1988 to 2009, updated with postscripts, plus 13 previously unpublished essays.

The earliest of Mermin’s Reference Frame columns stem from the age of handwritten transparencies and predate the arXiv, the Superconducting Superdisaster, and the “science wars” of the 1990s. I read these first essays with the same delighted horror evoked by my grandma’s tales of slide-rules and logarithmic tables, until I realized that we’re still discussing today the same questions as Mermin did 20 years ago: Why do we submit papers to journals for peer review instead of reviewing them independently of journal publication? Have we learned anything profound in the last half century? What do you do when you give a talk and have mustard on your ear? Why is the sociology of science so utterly disconnected from the practice of science? Does anybody actually read PRL? And, of course, the mother of all questions: How to properly pronounce “quark”?

The later essays in the book mostly focus on the quantum world, just what is and isn’t wrong with it, and include the most insightful (and yet brief) expositions of quantum computing that I have come across. The reader also hears again from Professor Mozart, a semi-fictional character that Mermin introduced in his Reference Frame columns. Several of the previously unpublished pieces are summaries of lectures, birthday speeches, and obituaries.

Even though some of Mermin’s essays are accessible for the uninitiated, most of them are likely incomprehensible without some background knowledge in physics, either because he presumes technical knowledge or because the subject of his writing must remain entirely obscure. The very first essay might make a good example. It channels Mermin’s outrage over “Lagrangeans,” and even though written with both humor and purpose, it’s a spelling that I doubt non-physicists will perceive as properly offensive. Likewise, a 12-verse poem on the standard model or elaborations on how to embed equations into text will find their audience mostly among physicists.

My only prior contact with Mermin’s writing was a Reference Frame column in 2009, in which Mermin laid out his favorite interpretation of quantum mechanics, QBism, a topic also pursued in several of this book’s chapters. Proposed by Carl Caves, Chris Fuchs, and Rüdiger Schack, QBism views quantum mechanics as the observer’s rule-book for updating information about the world. In his 2009 column, Mermin argues that it is a “bad habit” to believe in the reality of the quantum state. “I hope you will agree,” he writes, “that you are not a continuous field of operators on an infinite-dimensional Hilbert space.”

I left a comment to this column, lamenting that Mermin’s argument is “polemic” and “uninsightful,” an offhand complaint that Physics Today published a few months later. Mermin replied that his column was “an amateurish attempt” to contribute to the philosophy of science and quantum foundations. But while reading Why Quark Rhymes With Pork, I found his amateurism to be a benefit: In contrast to professional attempts to contribute to the philosophy of science (or linguistics, or sociology, or scholarly publishing) Mermin’s writing is mostly comprehensible. I’m thus happy to leave further complaints to philosophers (or linguists, or sociologists).

Why Quark Rhymes With Pork is a book I’d never have bought. But having read it, I think you should read it too. Because I’d rather not still discuss the same questions 20 years from now.

And the only correct way to pronounce quark is of course the German way as “qvark.”

[This book review appeared in the November 2016 issue of Physics Today.]

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger Cosmic Ray Observatory, at center-of-mass energies of approximately 100 TeV, are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). That corresponds to a probability of roughly one-in-50 (respectively one-in-500) for the signal to be due to a random fluctuation.
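For readers who want to translate sigmas into probabilities themselves: the one-sided Gaussian tail probability follows from the complementary error function. (Conventions differ – some quote two-sided probabilities, which are twice as large.)

```python
from math import erfc, sqrt

def one_sided_p(sigma):
    """Probability of a Gaussian fluctuation at least `sigma` standard
    deviations above the mean (one-sided tail)."""
    return 0.5 * erfc(sigma / sqrt(2))

for sigma in (2.1, 2.9, 5.0):
    p = one_sided_p(sigma)
    print(f"{sigma} sigma -> p = {p:.2e}, about 1 in {round(1 / p)}")
```

The 5-sigma entry is the conventional particle-physics discovery threshold, included for comparison.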

Cosmic rays are protons or light atomic nuclei which come to us from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they reach their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) surface detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by the air molecules that the shower ionizes.

The hadronic component of the shower is dominated by pions, which are the lightest mesons, each composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it down to the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between them is off. This conclusion, however, did not rest on very solid ground, because it depended on an estimate of the shower’s total energy – and that’s very hard to obtain if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: One parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. They then look for the best-fit values and find that these are systematically off from the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems to lie not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are the two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy going into neutral pions is smaller than what the present simulations predict.
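A back-of-the-envelope cascade model (in the spirit of Heitler-Matthews, with made-up numbers) shows why this matters: if each interaction produces n pions of which a fraction f0 are neutral, only the charged ones keep feeding the cascade, and each charged pion that drops below a critical energy decays to a muon. Lowering f0 noticeably boosts the final muon count.

```python
# Rough Heitler-Matthews-style cascade: each interaction splits the energy
# among `n` pions; the neutral fraction f0 decays to photons (lost for muons),
# the charged rest re-interacts until the critical energy is reached, then
# decays to muons. Multiplicity, energies, and f0 values are illustrative only.

def muon_count(E0, f0, n=50, E_crit=20.0):
    """Muons from a primary of energy E0 (GeV) with neutral-pion fraction f0."""
    energy, count = E0, 1
    while energy > E_crit:
        count *= round(n * (1 - f0))  # only charged pions continue the cascade
        energy /= n                   # energy is shared equally among the pions
    return count                      # below E_crit: one muon per charged pion

E0 = 1e7  # a 10^16 eV primary, in GeV
standard = muon_count(E0, f0=1/3)  # isospin: one third of pions are neutral
shifted  = muon_count(E0, f0=1/4)  # toy scenario with less energy in pi0
print(f"muons: {standard} vs {shifted} ({shifted/standard:.2f}x more)")
```

Even the modest shift from f0 = 1/3 to 1/4 multiplies the muon number substantially, because the loss to photons compounds over every generation of the cascade.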

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo,” because chiral symmetry is only approximate. The chiral phase transition is believed to lie close to the confinement transition, that is, the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature, chiral symmetry is “restored.”

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.

[This post previously appeared on Forbes.]

Saturday, December 10, 2016

Away Note

I'll be in Munich next week, attending a workshop at the Center for Advanced Studies on the topic "Reasoning in Physics." I'm giving a talk about "Naturalness: How religion turned to math" which has attracted criticism already before I've given it. I take that to mean I'm hitting a nerve ;)

Thursday, December 08, 2016

No, physicists have no fear of math. But they should have more respect.

Heart curve. [Img Src]
“Even physicists are ‘afraid’ of mathematics,” a recent headline screamed at me. This, I thought, is ridiculous. You can accuse physicists of many stupidities, but being afraid of math isn’t one of them.

But the headline was supposedly based on scientific research. Someone, somewhere, had written a paper claiming that physicists are more likely to cite papers which are light on math. So, I put aside my confirmation bias and read the paper. It was more interesting than expected.

The paper in question, it turned out, didn’t show that physicists are afraid of math. Instead, it was a reply to a comment on an earlier paper – a paper which had claimed that biologists are afraid of math.

The original paper, “Heavy use of equations impedes communication among biologists,” was published in 2012 by Tim Fawcett and Andrew Higginson, both at the Centre for Research in Animal Behaviour at the University of Exeter. They analyzed a sample of 649 papers published in the top journals in ecology and evolution and looked for a correlation between the density of equations (the number of equations relative to the amount of text) and the number of citations. They found a statistically significant negative correlation: Papers with a higher density of equations were cited less.

Unexpectedly, a group of physicists came to the defense of biologists. In a paper published last year under the title “Are physicists afraid of mathematics?” Jonathan Kollmer, Thorsten Pöschel, and Jason Gallas set out to demonstrate that the statistics underlying the conclusion that biologists are afraid of math were fundamentally flawed. With these methods, the authors claimed, you could show anything – even that physicists are afraid of math. Which is surely absurd. Right? They argued that Fawcett and Higginson had arrived at a wrong conclusion because they had sorted their data into peculiar and seemingly arbitrarily chosen bins.

It’s a good point to make. The chance that you find a correlation with at least one of many possible binnings is much higher than the chance that you find it with one particular binning chosen in advance. Therefore, you can easily screw up measures of statistical significance if you allow a search for a correlation over different binnings.

As an example, Kollmer et al used a sample of papers from Physical Review Letters (PRL) and showed that, with the bins used by Fawcett and Higginson, physicists too could be said to be afraid of math. However, the correlation goes away with a finer binning and hence is meaningless.
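The effect is easy to simulate. In the sketch below – my own toy construction, with arbitrary thresholds and bin choices – entirely uncorrelated (x, y) data are binned by x and the bin means are correlated. With one fixed binning a strong correlation is rare; granting yourself the freedom to pick the “best” of several binnings makes spurious findings far more common.

```python
import random
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def binned_r(data, n_bins):
    """Sort by x, split into n_bins equal groups, correlate the bin means."""
    data = sorted(data)
    size = len(data) // n_bins
    bins = [data[i * size:(i + 1) * size] for i in range(n_bins)]
    return pearson_r([mean(x for x, _ in b) for b in bins],
                     [mean(y for _, y in b) for b in bins])

random.seed(1)
CUT = 0.9  # call a binned correlation this strong a "finding"
fixed = flexible = 0
for _ in range(500):
    data = [(random.random(), random.random()) for _ in range(60)]  # no correlation
    fixed += abs(binned_r(data, 6)) > CUT
    flexible += any(abs(binned_r(data, n)) > CUT for n in (3, 4, 5, 6, 10))
print(f"false findings, fixed binning: {fixed/500:.1%}; "
      f"best of five binnings: {flexible/500:.1%}")
```

The coarse binnings do most of the damage: correlating only a handful of bin means makes large |r| values cheap, which is essentially the physicists’ criticism.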

PRL, for those not familiar with it, is one of the most highly ranked journals in physics generally. It publishes papers from all subfields that are of broad interest to the community. PRL also has a strictly enforced page limit: You have to squeeze everything on four pages – an imo completely idiotic policy that more often than not means the authors have to publish a longer, comprehensible, paper elsewhere.

The paper that has now made headlines is a reply by the authors of the original study to the physicists who criticized it. Fawcett and Higginson explain that the physicists’ data analysis is too naïve. They point out that the citation rates have a pronounced rich-get-richer trend which amplifies any initial differences. This leads to an “overdispersed” data set in which the standard errors are misleading. In that case, a more complicated statistical analysis is necessary – which is the type of analysis they had done in the original paper. The seemingly arbitrary bins were just chosen to visualize the results, they write; their finding is independent of that.
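The rich-get-richer effect and the overdispersion it causes are easy to see in a toy simulation: distribute citations either uniformly at random, or with probability proportional to (1 + citations already received). In the uniform case variance ≈ mean, as for a Poisson distribution; preferential attachment inflates the variance far beyond that, which is exactly when naive standard errors mislead. All numbers below are invented.

```python
import random
from statistics import mean, variance

def cite_counts(n_papers=200, n_citations=4000, rich_get_richer=True):
    """Distribute citations over papers, optionally with preferential attachment."""
    counts = [0] * n_papers
    papers = range(n_papers)
    for _ in range(n_citations):
        if rich_get_richer:
            # probability proportional to (1 + citations received so far)
            weights = [1 + c for c in counts]
            chosen = random.choices(papers, weights=weights)[0]
        else:
            chosen = random.randrange(n_papers)
        counts[chosen] += 1
    return counts

random.seed(7)
uniform = cite_counts(rich_get_richer=False)
preferential = cite_counts(rich_get_richer=True)
for label, counts in [("uniform", uniform), ("rich-get-richer", preferential)]:
    print(f"{label:>16}: mean = {mean(counts):.1f}, variance = {variance(counts):.1f}")
```

This is only an illustration of what “overdispersed” means, not the specific statistical model Fawcett and Higginson fitted.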

Fawcett and Higginson then repeated the same analysis on the physics papers and revealed a clear trend: Physicists too are more likely to cite papers with a smaller density of equations!

I have to admit this doesn’t surprise me much. A paper with fewer verbal explanations per equation assumes the reader is more familiar with the particular formalism being used, and this means the target audience shrinks. The consequence is fewer citations.

But this doesn’t mean physicists are afraid of math, it merely means they have to decide which calculations are worth their time. If it’s a topic they might never have an application for, making their way through a paper heavy on math might not be so helpful for advancing their research. On the other hand, reading a more general introduction or short survey with fewer equations might be useful even on topics farther from one’s own research. These citation habits therefore show mostly that the more specialized a paper is, the fewer people will read it.

I had a brief exchange with Andrew Higginson, one of the authors of the paper that’s been headlined as “Physicists are afraid of math.” He emphasizes that their point was that “busy scientists might not have time to digest lots of equations without accompanying text.” But I don’t think that’s the right conclusion to draw. Busy scientists who are familiar with the equations might not have the time to digest much text, and busy scientists might not have the time to digest long papers, period. (The corresponding author of the physicists’ study did not respond to my request for comment.)

In their recent reply, Fawcett and Higginson suggest that “an immediate, pragmatic solution to this apparent problem would be to reduce the density of equations and add explanatory text for non-specialised readers.”

I’m not sure, however, there is any problem here in need of being solved. Adding text for non-specialized readers might be cumbersome for the specialized readers. I understand the risk that the current practice exaggerates the already pronounced specialization, which can hinder communication. But this, I think, would be better taken care of by reviews and overview papers to be referenced in the, typically short, papers on recent research.

So, I don’t think physicists are afraid of math. Indeed, it sometimes worries me how much and how uncritically they love math.

Math can do a lot of things for you, but in the end it’s merely a device to derive consequences from assumptions. Physics isn’t math, however, and physics papers don’t work by theorems and proofs. Theoretical physicists pride themselves on their intuition and frequently take the freedom to shortcut mathematical proofs by drawing on experience. This, however, amounts to making additional assumptions, for example that a certain relation holds or an expansion is well-defined.

That works well as long as these assumptions are used to arrive at testable predictions. In that case it matters only if the theory works, and the mathematical rigor can well be left to mathematical physicists for clean-up, which is how things went historically.

But today in the foundations of physics, theory-development proceeds largely without experimental feedback. In such cases, keeping track of assumptions is crucial – otherwise it becomes impossible to tell what really follows from what. Or, I should say, it would be crucial, were it not that theoretical physicists are so bad at it.

The result is that some research areas can amass loosely connected arguments that follow from a set of assumptions which aren’t written down anywhere. This might result in an entirely self-consistent construction that nevertheless has nothing to do with reality. And because the assumptions are left implicit, the result is conceptual mud in which we can’t tell philosophy from mathematics.

One such unwritten assumption that is widely used, for example, is the absence of finetuning, or that a physical theory be “natural.” This assumption isn’t supported by evidence and it can’t be mathematically derived. Hence, it should be treated as a hypothesis – but that isn’t happening because the assumption itself isn’t recognized for what it is.

Another unwritten assumption is that more fundamental theories should somehow be simpler. This is reflected for example in the belief that the gauge couplings of the standard model should meet in one point. That’s an assumption; it isn’t supported by evidence. And yet it’s not treated as a hypothesis but as a guide to theory-development.

And all presently existing research on the quantization of gravity rests on the assumption that quantum theory itself remains unmodified at short distance scales. This is another assumption that isn’t written down anywhere. Should that turn out to be not true, decades of research will have been useless.

In the absence of experimental guidance, what we need in the foundations of physics is conceptual clarity. We need rigorous math, not appeals to experience, intuition, and aesthetic appeal. Don’t be afraid – but we do need more math.

Friday, December 02, 2016

Can dark energy and dark matter emerge together with gravity?

A macaroni pie? Elephants blowing balloons?
No, it’s Verlinde’s entangled universe.
In a recent paper, the Dutch physicist Erik Verlinde explains how dark energy and dark matter arise in emergent gravity as deviations from general relativity.

It’s taken me some while to get through the paper. Vaguely titled “Emergent Gravity and the Dark Universe,” it’s a 51-page catalog of ideas patched together from general relativity, quantum information, quantum gravity, condensed matter physics, and astrophysics. It is clearly still research in progress and not anywhere close to completion.

The new paper substantially expands on Verlinde’s earlier idea that the gravitational force is some type of entropic force. If that was so, it would mean gravity is not due to the curvature of space-time – as Einstein taught us – but instead caused by the interaction of the fundamental elements which make up space-time. Gravity, hence, would be emergent.

I find it an appealing idea because it allows one to derive consequences without having to specify exactly what the fundamental constituents of space-time are. Like you can work out the behavior of gases under pressure without having a model for atoms, you can work out the emergence of gravity without having a model for whatever builds up space-time. The details would become relevant only at very high energies.

As I noted in a comment on the first paper, Verlinde’s original idea was merely a reinterpretation of gravity in thermodynamic quantities. What one really wants from emergent gravity, however, is not merely to get back general relativity. One wants to know which deviations from general relativity come with it, deviations that are specific predictions of the model and which can be tested.

Importantly, in emergent gravity such deviations from general relativity could make themselves noticeable at long distances. The reason is that the criterion for what it means for two points to be close by each other emerges with space-time itself. Hence, in emergent gravity there isn’t a priori any reason why new physics must be at very short distances.

In the new paper, Verlinde argues that his variant of emergent gravity gives rise to deviations from general relativity on long distances, and these deviations correspond to dark energy and dark matter. He doesn’t explain dark energy itself. Instead, he starts with a universe that by assumption contains dark energy like we observe, i.e. one that has a positive cosmological constant. Such a universe is approximately described by what theoretical physicists call de Sitter space.

Verlinde then argues that when one interprets this cosmological constant as the effect of long-distance entanglement between the conjectured fundamental elements, then one gets a modification of the gravitational law which mimics dark matter.

The reason it works is that to get normal gravity one assigns an entropy to a volume of space which scales with the area of the surface that encloses the volume. This is known as the “holographic scaling” of entropy, and is at the core of Verlinde’s first paper (and earlier work by Jacobson, Padmanabhan, and others). To get deviations from normal gravity, one has to do something else. For this, Verlinde argues that de Sitter space is permeated by long-distance entanglement which gives rise to an entropy that scales, not with the surface area of a volume, but with the volume itself. It consequently leads to a different force-law. And this force-law, so he argues, has an effect very similar to dark matter.
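Schematically, and in my own notation rather than Verlinde’s, the two scalings can be contrasted like this:

```latex
% Area (holographic) scaling: the entropy of a region of radius R
% grows with the area of its boundary,
S_A(R) \;\sim\; \frac{A(R)}{4 \hbar G} \;\propto\; R^2 .
% The conjectured de Sitter entanglement entropy instead grows with
% the volume,
S_V(R) \;\propto\; \frac{R^3}{L\, \hbar G} ,
% where L \sim 1/\sqrt{\Lambda} is the de Sitter radius. It is the
% volume term that produces the additional, dark-matter-like force.
```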

Not only does this modified force-law from the volume-scaling of the entropy mimic dark matter, it more specifically reproduces some of the achievements of modified gravity.

In his paper, Verlinde derives the observed relation between the luminosity of spiral galaxies and the rotation velocity of their outermost stars, known as the Tully-Fisher relation. The Tully-Fisher relation can also be found in certain modifications of gravity, such as Moffat Gravity (MOG), and more generally in every modification that approximates Milgrom’s Modified Newtonian Dynamics (MOND). Verlinde, however, does more than that. He also derives the parameter which quantifies the acceleration at which the modification of general relativity becomes important, and gets a value that fits well with observations.
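To see how a Tully-Fisher relation comes out of such modifications, here is the standard MOND reasoning (this sketch is generic MOND, not the specifics of Verlinde’s derivation):

```latex
% In the low-acceleration regime a \ll a_0, MOND replaces the Newtonian
% acceleration a_N = GM/r^2 by
a = \sqrt{a_N \, a_0} = \frac{\sqrt{G M a_0}}{r} .
% For a star on a circular orbit, a = v^2/r, so the r-dependence drops
% out and the rotation velocity becomes constant:
v^4 = G M a_0 .
% Since luminosity traces mass, L \propto M \propto v^4, which is the
% Tully-Fisher relation.
```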

It was known before that this parameter is related to the cosmological constant. There have been various attempts to exploit this relation, most recently by Lee Smolin. In Verlinde’s approach the relation between the acceleration scale and the cosmological constant comes out naturally, because dark matter has the same origin as dark energy. Verlinde further offers expressions for the apparent density of dark matter in galaxies and clusters, something that, with some more work, can probably be checked observationally.
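The numerical coincidence behind this relation is easy to check yourself. The following back-of-the-envelope estimate (my numbers, not taken from Verlinde’s paper) compares the MOND acceleration scale to c times the Hubble constant, which in a cosmological-constant-dominated universe is of order c²√(Λ/3):

```python
import math

# Back-of-the-envelope check: the MOND acceleration scale a0 is
# numerically close to c*H0, up to a factor of order 2*pi.
c = 2.998e8    # speed of light in m/s
H0 = 2.20e-18  # Hubble constant, ~67.8 km/s/Mpc, converted to 1/s
a0 = 1.2e-10   # empirical MOND acceleration scale in m/s^2

cH0 = c * H0
ratio = cH0 / a0

print(f"c*H0  = {cH0:.2e} m/s^2")
print(f"a0    = {a0:.2e} m/s^2")
print(f"ratio = {ratio:.2f} (compare 2*pi = {2 * math.pi:.2f})")
```

The two scales agree to within an order-one factor, which is exactly the kind of coincidence these approaches try to explain.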

I find this an intriguing link which suggests that Verlinde is onto something. However, I also find the model sketchy and unsatisfactory in many regards. General relativity is a rigorously tested theory with many achievements. To do any better than general relativity is hard, and thus for any new theory of gravity the most important thing is to have a controlled limit in which general relativity is reproduced to good precision. How this might work in Verlinde’s approach isn’t clear to me because he doesn’t even attempt to deal with the general case. He starts right away with cosmology.

Now in cosmology we have a preferred frame which is given by the distribution of matter (or by the rest frame of the CMB if you wish). In general relativity this preferred frame does not originate in the structure of space-time itself but is generated by the stuff in it. In emergent gravity models, in contrast, the fundamental structure of space-time tends to have an imprint of the preferred frame. This fundamental frame can lead to violations of the symmetries of general relativity and the effects aren’t necessarily small. Indeed, there are many experiments that have looked for such effects and haven’t found anything. It is hence a challenge for any emergent gravity approach to demonstrate just how to avoid such violations of symmetries.

Another potential problem with the idea is the long-distance entanglement which is sprinkled over the universe. The physics which we know so far works “locally,” meaning stuff can’t interact over long distances without a messenger that travels through space and time from one to the other point. It’s the reason my brain can’t make spontaneous visits to the Andromeda nebula, and most days I think that benefits both of us. But like that or not, the laws of nature we presently have are local, and any theory of emergent gravity has to reproduce that.

I have worked for some years on non-local space-time defects, and based on what I learned from that I don’t think the non-locality of Verlinde’s model is going to be a problem. My non-local defects aren’t the same as Verlinde’s entanglement, but guessing that the observational consequences scale similarly, the amount of entanglement that you need to get something like a cosmological constant is too small to leave any other noticeable effects on particle physics. I am therefore more worried about the recovery of local Lorentz-invariance. I went to great pains in my models to make sure I wouldn’t get such violations, and I can’t see how Verlinde addresses the issue.

The more general problem I have with Verlinde’s paper is the same I had with his 2010 paper, which is that it’s fuzzy. It remained unclear to me exactly what are the necessary assumptions. I hence don’t know whether it’s really necessary to have this interpretation with the entanglement and the volume-scaling of the entropy and with assigning elasticity to the dark energy component that pushes in on galaxies. Maybe it would be sufficient already to add a non-local modification to the sources of general relativity. Having toyed with that idea for a while, I doubt it. But I think Verlinde’s approach would benefit from a more axiomatic treatment.

In summary, Verlinde’s recent paper offers the most convincing argument I have seen so far that dark matter and dark energy are related. However, it is presently unclear whether this approach might not also have unwanted side-effects that are already in conflict with observation.

Wednesday, November 30, 2016

Dear Dr. B: What is emergent gravity?

    “Hello Sabine, I've seen a couple of articles lately on emergent gravity. I'm not a scientist so I would love to read one of your easy-to-understand blog entries on the subject.


    Michael Tucker
    Wichita, KS”

Dear Michael,

Emergent gravity has been in the news lately because of a new paper by Erik Verlinde. I’ll tell you some more about that paper in an upcoming post, but answering your question makes for a good preparation.

The “gravity” in emergent gravity refers to the theory of general relativity in the regimes where we have tested it. That means Einstein’s field equations and curved space-time and all that.

The “emergent” means that gravity isn’t fundamental, but instead can be derived from some underlying structure. That’s what we mean by “emergent” in theoretical physics: If theory B can be derived from theory A but not the other way round, then B emerges from A.

You might be more familiar with seeing the word “emergent” applied to objects or properties of objects, which is another way physicists use the expression. Sound waves in the theory of gases, for example, emerge from molecular interactions. Van-der Waals forces emerge from quantum electrodynamics. Protons emerge from quantum chromodynamics. And so on.

Everything that isn’t in the standard model or general relativity is known to be emergent already. And since I know that it annoys so many of you, let me point out again that, yes, to our current best knowledge this includes cells and brains and free will. Fundamentally, you’re all just a lot of interacting particles. Get over it.

General relativity and the standard model are currently the most fundamental descriptions of nature which we have. For the theoretical physicist, the interesting question is then whether these two theories are also emergent from something else. Most physicists in the field think the answer is yes. And any theory in which general relativity – in the tested regimes – is derived from a more fundamental theory, is a case of “emergent gravity.”

That might not sound like such a new idea and indeed it isn’t. In string theory, for example, gravity – like everything else – “emerges” from, well, strings. There are a lot of other attempts to explain gravitons – the quanta of the gravitational interaction – as non-fundamental “quasi-particles” which emerge, much like sound-waves, because space-time is made of something else. An example of this is the model pursued by Xiao-Gang Wen and collaborators in which space-time, and matter, and really everything is made of qubits. Including cells and brains and so on.

Xiao-Gang’s model stands out because it can also include the gauge-groups of the standard model, though last time I looked chirality was an issue. But there are many other models of emergent gravity which focus on just getting general relativity. Lorenzo Sindoni has written a very useful, though quite technical, review of such models.

Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision.

An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum.

The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things.
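Jacobson’s observation can be summarized in a few lines (a schematic sketch in units with c = k_B = 1; the full argument needs local Rindler horizons at every point):

```latex
% Postulate that the Clausius relation holds across every local
% Rindler horizon,
\delta Q = T \, \delta S ,
% with T the Unruh temperature seen by the accelerated observer,
T = \frac{\hbar \kappa}{2\pi} ,
% and an entropy proportional to the horizon area, S = A / 4\hbar G.
% Requiring this for all null directions at every point implies the
% Einstein field equations,
R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G \, T_{\mu\nu} ,
% with the cosmological constant \Lambda appearing only as an
% undetermined integration constant.
```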

What are the small things? Well, the currently best guess is that they’re strings. That’s because string theory is (at least to my knowledge) the only way to avoid the problems with Lorentz-invariance violation in emergent gravity scenarios. (Gravity is not emergent in Loop Quantum Gravity – its quantized version is directly encoded in the variables.)

But as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Like thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.

This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity.

So, in summary “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, the currently most promising candidate for emergent gravity. It’s believed to be related to, or maybe even be part of string theory, but if there are such links they aren’t presently well understood.

Thanks for an interesting question!

Aside: Sorry about the issue with the comments. I turned on G+ comments, thinking they'd be displayed in addition, but that instead removed all the other comments. So I've reset this to the previous version, though I find it very cumbersome to have to follow four different comment threads for the same post.

Monday, November 28, 2016

This isn’t quantum physics. Wait. Actually it is.

Rocket science isn’t what it used to be. Now that you can shoot someone to Mars if you can spare a few million, the colloquialism for “It’s not that complicated” has become “This isn’t quantum physics.” And there are many things which aren’t quantum physics. For example, making a milkshake:
“Guys, this isn’t quantum physics. Put the stuff in the blender.”
Or losing weight:
“if you burn more calories than you take in, you will lose weight. This isn't quantum physics.”
Or economics:
“We’re not talking about quantum physics here, are we? We’re talking ‘this rose costs 40p, so 10 roses costs £4’.”
You should also know that Big Data isn’t Quantum Physics and Basketball isn’t Quantum Physics and not driving drunk isn’t quantum physics. Neither is understanding that “[Shoplifting isn’t] a way to accomplish anything of meaning,” or grasping that no doesn’t mean yes.

But my favorite use of the expression comes from Noam Chomsky, who explains how the world works (such is the modest title of his book):
“Everybody knows from their own experience just about everything that’s understood about human beings – how they act and why – if they stop to think about it. It’s not quantum physics.”
From my own experience, stopping to think and believing one understands other people effortlessly is the root of much unnecessary suffering. Leaving aside that it’s quite remarkable some people believe they can explain the world, and even more remarkable others buy their books, all of this is, as a matter of fact, quantum physics. Sorry, Noam.

Yes, that’s right. Basketballs, milkshakes, weight loss – it’s all quantum physics. Because it’s all happening by the interactions of tiny particles which obey the rules of quantum mechanics. If it wasn’t for quantum physics, there wouldn’t be atoms to begin with. There’d be no Sun, there’d be no drunk driving, and there’d be no rocket science.

Quantum mechanics is often portrayed as the theory of the very small, but this isn’t so. Quantum effects can stretch over large distances and have been measured over distances up to several hundred kilometers. It’s just that we don’t normally observe them in daily life.

The typical quantum effects that you have heard of – things whose position and momentum can’t be measured precisely, are both dead and alive, have a spooky action at a distance and so on – don’t usually manifest themselves for large objects. But that doesn’t mean that the laws of quantum physics suddenly stop applying at a hair’s width. It’s just that the effects are feeble and human experience is limited. There is some quantum physics, however, which we observe wherever we look: If it wasn’t for Pauli’s exclusion principle, you’d fall right through the ground.

Indeed, a much more interesting question is “What is not quantum physics?” For all we presently know, the only thing not quantum is space-time and its curvature, manifested by gravity. Most physicists believe, however, that gravity too is a quantum theory, just that we haven’t been able to figure out how this works.

“This isn’t quantum physics,” is the most unfortunate colloquialism ever because really everything is quantum physics. Including Noam Chomsky.

Wednesday, November 23, 2016

I wrote you a song.

I know you’ve all missed my awesome chord progressions and off-tune singing, so I’ve made yet another one of my music videos!

In the attempt to protect you from my own appearance, I recently invested some money into an animation software by the name of Anime Studio. It has a 350-page tutorial. Me being myself, I didn’t read it. But I spent the last weekend clicking on any menu item that couldn’t vanish quickly enough, and I’ve integrated the outcome into the above video. I think I kind of figured out now how the basics work. I might do some more of this. It was actually fun to turn a visual idea into a movie, something I’ve never done before. Though it might help if I could draw, so excuse the sickly looking tree.

Having said this, I also need to get myself a new video editing software. I’m presently using Corel VideoStudio Pro which, after the Win10 upgrade, works even worse than it did before. I could not for the life of me export the clip with both good video and audio quality. In the end I sacrificed on the video quality, so sorry about the glitches. They’re probably simply computation errors or, I don’t know, the ghost of Windows 7 still haunting my hard disk.

The song I hope explains itself. One could say it’s the aggregated present mood of my facebook and twitter feeds. You can download the mp3 here.

I wish you all a Happy Thanksgiving, and I want to thank you for giving me some of your attention, every now and then. I especially thank those of you who have paid attention to the donate-button in the top right corner. It’s not much that comes in through this channel, but for me it makes all the difference -- it demonstrates that you value my writing and that keeps me motivated.

I’m somewhat behind with a few papers that I wanted to tell you about, so I’ll be back next week with more words and fewer chords. Meanwhile, enjoy my weltschmerz song ;)

Wednesday, November 16, 2016

A new theory SMASHes problems

Most of my school nightmares are history exams. But I also have physics nightmares, mostly about not being able to recall Newton’s laws. Really, I didn’t like physics in school. The way we were taught the subject, it was mostly dead people’s ideas. On the rare occasion our teacher spoke about contemporary research, I took a mental note every time I heard “nobody knows.” Unsolved problems were what fascinated me, not laws I knew had long been replaced by better ones.

Today, mental noting is no longer necessary – Wikipedia helpfully lists the unsolved problems in physics. And indeed, in my field pretty much every paper starts with a motivation that names at least one of these problems, preferably several.

A recent paper which excels on this count is that of Guillermo Ballesteros and collaborators, who propose a new phenomenological model named SM*A*S*H.
    Unifying inflation with the axion, dark matter, baryogenesis and the seesaw mechanism
    Guillermo Ballesteros, Javier Redondo, Andreas Ringwald, Carlos Tamarit
    arXiv:1608.05414 [hep-ph]

A phenomenological model in high energy particle physics is an extension of the Standard Model by additional particles (or fields, respectively) for which observable, and potentially testable, consequences can be derived. There are infinitely many such models, so to grab the reader’s attention, you need a good motivation why your model in particular is worth studying. Ballesteros et al do this by tackling not one but five different problems! The name SM*A*S*H stands for Standard Model*Axion*Seesaw*Higgs portal inflation.

First, there are the neutrino oscillations. Neutrinos can oscillate into each other if at least two of them have small but nonzero masses. But neutrinos are fermions and fermions usually acquire masses by a coupling between left-handed and right-handed versions of the particle. Trouble is, nobody has ever seen a right-handed neutrino. We have measured only left-handed neutrinos (or right-handed anti-neutrinos).

So to explain neutrino oscillations, either there must be right-handed neutrinos so heavy we haven’t yet seen them, or the neutrinos differ from the other fermions: they could be so-called Majorana neutrinos, which can couple to themselves and that way create masses. Nobody knows which is the right explanation.

Ballesteros et al in their paper assume heavy right-handed neutrinos. These create small masses for the left-handed neutrinos by a process called the seesaw mechanism. This is an old idea, but the authors then try to use these heavy neutrinos also for other purposes.
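The seesaw mechanism itself fits into a few lines (the standard type-I seesaw; the parameter values below are illustrative, not the ones from the SMASH paper):

```latex
% With a Dirac mass m_D (electroweak scale) and a heavy Majorana mass
% M_R for the right-handed neutrino, the neutrino mass matrix
M = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix}, \qquad m_D \ll M_R ,
% has one heavy eigenvalue \approx M_R and one light one,
m_\nu \simeq \frac{m_D^2}{M_R} .
% For example, m_D \sim 100\,\mathrm{GeV} and M_R \sim 10^{14}\,\mathrm{GeV}
% give m_\nu \sim 0.1\,\mathrm{eV}: the heavier M_R, the lighter m_\nu,
% hence the name "seesaw."
```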

The second problem they take on is the baryon asymmetry, or the question why matter was left over from the Big Bang but no anti-matter. If matter and anti-matter had existed in equal amounts – as the symmetry between them would suggest – then they would have annihilated to radiation. Or, if some of the stuff failed to annihilate, the leftovers should be equal amounts of both matter and anti-matter. We have not, however, seen any large amounts of anti-matter in the universe. These would be surrounded by tell-tale signs of matter-antimatter annihilation, and none have been observed. So, presently, nobody knows what tilted the balance in the early universe.

In the SM*A*S*H model, the right-handed neutrinos give rise to the baryon asymmetry by a process called thermal leptogenesis. This works basically because the most general way to add right-handed neutrinos to the standard model already offers an option to violate this symmetry. One just has to get the parameters right. That too isn’t a new idea. What’s interesting is that Ballesteros et al point out it’s possible to choose the parameters so that the neutrinos also solve a third problem.

The third problem is dark matter. The universe seems to contain more matter than we can see at any wavelength we have looked at. The known particles of the standard model do not fit the data – they either interact too strongly or don’t form structures efficiently enough. Nobody knows what dark matter is made of. (If it is made of something. Alternatively, it could be a modification of gravity. Regardless of what xkcd says.)

In the model proposed by Ballesteros et al, the right-handed neutrinos could also make up the dark matter. That too is an old idea, but it isn’t working very well: The more massive of the right-handed neutrinos can decay into lighter ones by emitting a photon, and this hasn’t been seen. The problem here is getting the mass range of the neutrinos to work both for dark matter and for the baryon asymmetry. Ballesteros et al solve this problem by making up dark matter mostly from something else, a particle called the axion. This particle has the benefit of also being good to solve a fourth problem.

Fourth, the strong CP problem. The standard model is lacking a possible interaction term which would cause the strong nuclear force to violate CP symmetry. We know this term is either absent or very tiny because otherwise the neutron would have an electric dipole moment, which hasn’t been observed.

This problem can be fixed by promoting the constant in front of this term (the theta parameter) to a field. The field then will move towards the minimum of the potential, explaining the smallness of the parameter. The field however is accompanied by a particle (dubbed the “axion” by Frank Wilczek) which hasn’t been observed. Nobody knows whether the axion exists.
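For concreteness, the term in question and the observational bound look as follows (textbook numbers, not specific to the SMASH paper):

```latex
% The CP-violating term in the QCD Lagrangian is
\mathcal{L}_\theta = \theta \, \frac{g_s^2}{32\pi^2} \, G^a_{\mu\nu} \tilde{G}^{a\,\mu\nu} ,
% and a nonzero \theta would induce a neutron electric dipole moment
% of order
d_n \sim \theta \times 10^{-16} \; e\,\mathrm{cm} .
% The experimental bound d_n \lesssim 10^{-26}\; e\,\mathrm{cm} then
% forces
\theta \lesssim 10^{-10} ,
% and the strong CP problem is the question why \theta is so tiny.
```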

In the SMASH model, the axion gives rise to dark matter by leaving behind a condensate and particles that are created in the early universe from the decay of topological defects (strings and domain walls). The axion gets its mass from an additional quark-like field (denoted with Q in the paper), and also solves the strong CP problem.

Fifth, inflation, the phase of rapid expansion in the early universe. Inflation was invented to explain several observational puzzles, notably why the temperature of the cosmic microwave background seems to be almost the same in every direction we look (up to small fluctuations). That’s surprising because in a universe without inflation the different parts of the hot plasma in the early universe which created this radiation had never been in contact before. They thus had no chance to exchange energy and come to a common temperature. Inflation solves this problem by blowing up an initially small patch to gigantic size. Nobody knows, however, what causes inflation. It’s normally assumed to be some scalar field. But where that field came from or what happened to it is unclear.
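Quantitatively, inflation does the job if the scale factor grows by enough “e-folds” (a standard estimate, independent of what drives the inflation):

```latex
% During inflation the scale factor grows roughly exponentially,
a(t) \propto e^{H t} ,
% and the horizon problem is solved if the total growth factor
N = \ln\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}} \gtrsim 60
% "e-folds," i.e. a stretching by a factor e^{60} \approx 10^{26},
% is large enough that the whole observable universe descends from
% one causally connected patch.
```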

Ballesteros and his collaborators assume that the scalar field which gives rise to inflation is the Higgs – the only fundamental scalar which we have so far observed. This too is an old idea, and one that works badly. To make Higgs inflation work, one needs to introduce an unconventional coupling of the Higgs field to gravity, and this leads to a breakdown of the theory (loss of unitarity) in ranges where one needs it to work (i.e. the breakdown can’t be blamed on quantum gravity).

The SM*A*S*H model contains an additional scalar field which gives rise to a more complicated coupling, and the authors claim that in this case the breakdown doesn’t happen until the Planck scale (where it can be blamed on quantum gravity).

So, in summary, we have three right-handed neutrinos with their masses and mixing matrix, a new quark-like field and its mass, the axion field, a scalar field, the coupling between the scalar and the Higgs, the self-coupling of the scalar, the coupling of the quark to the scalar, the axion decay constant, the coupling of the Higgs to gravity, and the coupling of the new scalar to gravity. Though I might have missed something.

In case you just scrolled down to see whether I think this model might be correct: The answer is almost certainly no. It’s a great model according to the current quality standards in the field. But when you combine several speculative ideas without observational evidence, you don’t get a model that is less speculative and has more evidence speaking for it.