Thursday, September 21, 2017

The Quantum Quartet

I made some drawings recently. For no particular purpose, really, other than to distract myself.






And here is the joker:


Tuesday, September 19, 2017

Interna

I’m still working on the book. After almost a year in which not much happened, my publisher has now rather suddenly asked for the final version of the manuscript. Until that’s done, not much will be happening on this blog.

We do seem to have settled on a title though: “Lost in Math: How Beauty Leads Physics Astray.” The title is my doing, the subtitle isn’t. I just hope it won’t lead too many readers astray.

The book is supposed to be published in the USA/Canada by Basic Books next year in the Spring, and in Germany by Fischer half a year later. I’ll tell you more about the content at some point but right now I’m pretty sick of the whole book-thing.

In the meantime I have edited another book, this one on “Experimental Search for Quantum Gravity,” which you can now preorder on Amazon. It’s a probably rather hard-to-digest collection of essays on topics covered at a conference I organized last year. I merely wrote the preface.

Yesterday the twins had their first day in school. As is unfortunately still common in Germany, classes go only until noon. And so, we’re now trying a new arrangement to keep the kids occupied throughout the working day.



Wednesday, September 13, 2017

Away Note

I'm in Switzerland this week, for a conference on "Thinking about Space and Time: 100 Years of Applying and Interpreting General Relativity." I am also behind with several things and blogging will remain slow for the next weeks. If you miss my writing all too much, here is a new paper.

Wednesday, August 30, 2017

The annotated math of (almost) everything

Have you heard of the principle of least action? It’s the most important idea in physics, and it underlies everything. According to this principle, our reality is optimal in a mathematically exact way: it minimizes a function called the “action.” The universe that we find ourselves in is the one for which the action takes on the smallest value.

In quantum mechanics, reality isn’t quite that optimal. Quantum fields don’t have to decide on one specific configuration; they can do everything they want, and the action then quantifies the weight of each contribution. The sum of all these contributions – known as the path-integral – describes again what we observe.
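
If you want to see the principle at work, here is a minimal numerical sketch (my own toy example, not taken from any textbook): discretize the action of a ball thrown straight up, let an optimizer minimize it over all paths with fixed endpoints, and check that the result is the familiar parabola.

```python
# A minimal sketch: numerically minimize a discretized action S = sum of L*dt,
# with L = kinetic - potential energy, for a ball thrown straight up and caught
# at the same height. The minimizing path should reproduce the classical parabola.
import numpy as np
from scipy.optimize import minimize

g, m = 9.81, 1.0           # gravitational acceleration, mass
T, N = 1.0, 50             # total time, number of time steps
dt = T / N
t = np.linspace(0.0, T, N + 1)
x_start, x_end = 0.0, 0.0  # fixed endpoints

def action(x_inner):
    x = np.concatenate(([x_start], x_inner, [x_end]))
    v = np.diff(x) / dt                          # velocities on each interval
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (x[:-1] + x[1:])   # potential at interval midpoints
    return np.sum((kinetic - potential) * dt)    # S = integral of L dt

# start from a straight line (all zeros) and let the optimizer find the path
result = minimize(action, np.zeros(N - 1))
x_numeric = np.concatenate(([x_start], result.x, [x_end]))

# classical solution with the same endpoints: x(t) = (g*T/2)*t - g*t^2/2
x_classical = 0.5 * g * T * t - 0.5 * g * t**2
print("max deviation from the classical path:", np.abs(x_numeric - x_classical).max())
```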

This omniscient action has very little to do with “action” as in “action hero”. It’s simply an integral, usually denoted S, over another function, called the Lagrangian, usually denoted L. There’s a Lagrangian for the Standard Model and one for General Relativity. Taken together they encode the behavior of everything that we know of, except dark matter and quantum gravity.

With a little practice, there’s a lot you can read off directly from the Lagrangian, about the behavior of the theory at low or high energies, about the type of fields and mediator fields, and about the type of interaction.

The figure below gives you a rough idea of how that works.



I originally made this figure for the appendix of my book, but later removed it. Yes, my editor is still optimistic the book will be published in Spring 2018. The decision about this will be made in the next month or so, so stay tuned.

Wednesday, August 23, 2017

I was wrong. You were wrong too. Admit it.

I thought that anti-vaxxers are a US-phenomenon, certainly not to be found among the dutiful Germans. Well, I was wrong. The WHO estimates only 93% of children in Germany receive both measles shots.

I thought that genes determine sex. I was wrong. For certain species of fish and reptiles that’s not the case.

I thought that ultrasound may be a promising way to wirelessly transfer energy. That was wrong too.

Don’t worry, I haven’t suddenly developed a masochist edge. I’ve had an argument. Not my every-day argument about dark matter versus modified gravity and similar academic problems. This one was about Donald Trump and how to be wrong the right way.
Percentage of infants receiving the 2nd dose of measles vaccine in Germany. [Source: WHO]

Trump changes his mind. A lot. Be it about NATO or about Afghanistan or, really, find me anything he has not changed his mind about.

Now, I suspect that’s because he doesn’t have an opinion, can’t recall what he said last time, and just hopes no one notices he wings that presidency thing. But whatever the reason, Trump’s mental flexibility is a virtue to strive for. You can see how that didn’t sit well with my liberal friends.

It’s usually hard to change someone’s mind, and a depressingly large number of studies have shown that evidence isn’t enough to do it. Presenting people with evidence contradicting their convictions can even have the very opposite effect of reinforcing their opinions.

We hold on to our opinions, strongly. Constructing consistent explanations for the world is hard work, and we don’t like others picking apart the stories we settled on. The quirks of the human mind can be tricky – tricky to understand and tricky to overcome. Psychology is part of it. But my recent argument over Trump’s wrongness made me think about the part sociology has in our willingness to change opinion. It’s bad enough to admit to yourself you were wrong. It’s far worse to admit to other people you were wrong.

You see this play out in almost every comment section on social media. People defend hopeless positions, go through rhetorical tricks and textbook fallacies, appeal to authority, build straw men, and slide red herrings down slippery slopes. At the end, there’s always good, old denial. Anything, really, to avoid saying “I was wrong.”

And the more public an opinion was stated, the harder it becomes to backpedal. The more you have chosen friends by their like-mindedness, and the more they count on your like-mindedness, the higher the stakes for being unlike. The more widely known you are, the harder it is to tell your followers you won’t deliver arguments for them any longer. Turn your back on them. Disappoint them. Lose them.

It adds to this that public conversations encourage us to make up opinions on the fly. The three examples I listed above had one thing in common. In none of these cases did I actually know much about what I was saying. It wasn’t that I had wrong information – I simply had no information, and it didn’t occur to me to check, or maybe I just wasn’t interested enough. I was just hoping nobody would notice. I was winging it. You wouldn’t want me as president either.

But enough of the public self-flagellation and back to my usual self. Science is about being wrong more than it is about being right. By the time you have a PhD you’ll have been wrong in countless ways, so many ways indeed that it’s not uncommon for students to despair over their seeming incapability until they’re reassured we’ve all been there.

Science taught me it’s possible to be wrong gracefully, and – as with everything in life – it becomes easier with practice. And it becomes easier if you see other people giving examples. So what have you recently changed your mind about?

Tuesday, August 15, 2017

You don’t expand just because the universe does. Here’s why.

Not how it works.
It’s tough to wrap your head around four dimensions.

We have known that the universe expands since the 1930s, but whether we expand with it is still one of the questions I am asked most frequently. The less self-conscious simply inform me that the universe doesn’t expand but everything in it shrinks – because how could we tell the difference?

The best answer to these questions is, as usual, a lot of math. But it’s hard to find a decent answer online that is not a pile of equations, so here’s a verbal take on it.

The first clue you need to understand the expansion of the universe is that general relativity is a theory for space-time, not for space. As Hermann Minkowski put it in 1908:
“Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.”
Speaking about the expansion of space, hence, requires undoing this union.

The second clue is that in science a question must be answerable by measurement, at least in principle. We cannot observe space and neither can we observe space-time. We merely observe how space-time affects matter and radiation, which we can measure in our detectors.

The third clue is that the word “relativity” in “general relativity” means that every observer can choose to describe space-time in whatever way he or she wishes. While each observer’s calculations will then differ, they will all come to the same conclusions.

Armed with these three knowledge bites, let us see what we can say about the universe’s expansion.

Cosmologists describe the universe with a model known as Friedmann-Robertson-Walker (named after its inventors). The underlying assumption is that space (yes, space) is filled with matter and radiation that has the same density everywhere and in every direction. It is, as the terminology has it, homogeneous and isotropic. This assumption is called the “Cosmological Principle.”

While the Cosmological Principle originally was merely a plausible ad-hoc assumption, it is by now supported by evidence. On large scales – much larger than the typical intergalactic distances – matter is indeed distributed almost the same everywhere.

But clearly, that’s not the case on shorter distances, like inside our galaxy. The Milky Way is disk-shaped with most of the (visible) mass in the center bulge, and this matter isn’t distributed homogeneously at all. The cosmological Friedmann-Robertson-Walker model, therefore, just does not describe galaxies.

This is a key point, and missing it is the origin of much confusion about the expansion of the universe: The solution of general relativity that describes the expanding universe is a solution on average; it is good only on very large distances. But the solutions that describe galaxies are different – and they just don’t expand. It’s not that galaxies expand unnoticeably, they just don’t. The full solution, then, is stitched together from both: expanding space between non-expanding galaxies. (Though these stitched-together solutions are usually only dealt with in computer simulations due to their mathematical complexity.)

You might then ask, at what distance does the expansion start to take over? That happens when you average over a volume so large that the density of matter inside the volume has a gravitational self-attraction weaker than the expansion’s pull. From atomic nuclei up, the larger the volume you average over, the smaller the average density. But it is only somewhere beyond the scales of galaxy clusters that expansion takes over. On very short distances, when the nuclear and electromagnetic forces aren’t neutralized, these also act against the pull of gravity. This safely prevents atoms and molecules from being torn apart by the universe’s expansion.
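
To get a feeling for the numbers, here is a little back-of-envelope sketch (my own rough figures for the masses, radii, and density parameters, just for illustration): compare the average density inside a galaxy, inside a galaxy cluster, and on the largest scales with the critical density set by the expansion rate.

```python
# Rough numbers only: compare the mean matter density inside regions of different
# size with the cosmological critical density. Where the local density is far above
# the cosmic average, self-gravity wins and the region does not expand.
import math

G = 6.674e-11          # m^3 / (kg s^2)
M_sun = 1.989e30       # kg
Mpc = 3.086e22         # m
H0 = 70 * 1e3 / Mpc    # Hubble constant, 70 km/s/Mpc, in 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9e-27 kg/m^3

def mean_density(mass_solar, radius_mpc):
    volume = 4 / 3 * math.pi * (radius_mpc * Mpc)**3
    return mass_solar * M_sun / volume

examples = {
    "Milky Way (1e12 M_sun within 0.1 Mpc)": mean_density(1e12, 0.1),
    "Galaxy cluster (1e15 M_sun within 3 Mpc)": mean_density(1e15, 3),
    "Large volume at the cosmic mean (~0.3 rho_crit in matter)": 0.3 * rho_crit,
}
for name, rho in examples.items():
    print(f"{name}: {rho / rho_crit:.1e} times the critical density")
```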

But here’s the thing. All I just told you relies on a certain, “natural” way to divide up space-time into space and time. It’s the cosmic microwave background (CMB) that helps us do it. There is only one way to split space and time so that the CMB looks on average the same in all directions. After that, you can still pick your time-labels, but the split is done.

Breaking up Minkowski’s union between space and time in this way is called a space-time “slicing.” Indeed, it’s much like slicing bread, where each slice is space at some moment of time. There are many ways to slice bread and there are also many ways to slice space-time. Which, as number 3 clued you, are all perfectly allowed.

The reason that physicists choose one slicing over another is usually that calculations can be greatly simplified with a smart choice of slicing. But if you really insist, there are ways to slice the universe so that space does not expand. However, these slicings are awkward: they are hard to interpret and make calculations very difficult. In such a slicing, for example, going forward in time necessarily pushes you around in space – it’s anything but intuitive.

Indeed, you can do this also with space-time around planet Earth. You could slice space-time so that space around us remains flat. Again though, this slicing is awkward and physically meaningless.

This brings us to the relevance of clue #2. We really shouldn’t be talking about space to begin with. Just as you could insist on defining space so that the universe doesn’t expand, by willpower you could also define space so that Brooklyn does expand. Let’s say a block down is a mile. You could simply insist on using units of length in which tomorrow a block down is two miles, and next week it’s ten miles, and so on. That’s pretty idiotic – and yet nobody could stop you from doing this.

But now consider you make a measurement. Say, you bounce a laser beam back and forth between the ends of the block, at fixed altitude, and use atomic clocks to measure the time that passes between two bounces. You would find that the time-intervals are always the same.

Atomic clocks rely on the constancy of atomic transition frequencies. The gravitational force inside an atom is entirely negligible relative to the electromagnetic force – it’s about 40 orders of magnitude smaller – and fixing the altitude prevents gravitational redshift caused by the Earth’s gravitational pull. It doesn’t matter which coordinates you use, you’d always find the same and unambiguous measurement result: The time elapsed between bounces of the laser remains the same.
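
If you want to check the claim about the 40 orders of magnitude, the arithmetic is a few lines (my own numbers for the constants):

```python
# Ratio of the gravitational to the electrostatic force between the electron and
# the proton in a hydrogen atom. Both forces scale as 1/r^2, so the ratio does
# not depend on the distance.
G   = 6.674e-11    # m^3 / (kg s^2)
k_e = 8.988e9      # N m^2 / C^2
m_e = 9.109e-31    # kg
m_p = 1.673e-27    # kg
e   = 1.602e-19    # C

ratio = (G * m_e * m_p) / (k_e * e**2)
print(f"F_gravity / F_electric = {ratio:.1e}")   # roughly 4e-40
```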

It is similar in cosmology. We don’t measure the size of space between galaxies – how would we do that? We measure the light that comes from distant galaxies. And it turns out to be systematically red-shifted regardless of where we look. A simple way to describe this – a space-time slicing that makes calculations and interpretations easy – is that space between the galaxies expands.

So, the brief answer is: No, Brooklyn doesn’t expand. But the more accurate answer is that you should ask only for the outcome of clearly stated measurement procedures. Light from distant galaxies is shifted to the red meaning they are retreating from us. Light collected from the edges of Brooklyn isn’t redshifted. If we use a space-time slicing in which matter is at rest on the average, then the matter density of the universe is decreasing and was much higher in the past. To the extent that the density of Brooklyn has changed in the past, this can be explained without invoking general relativity.

It may be tough to wrap your head around four dimensions, but it’s always worth the effort.



[This post previously appeared on Starts With A Bang.]

Wednesday, August 09, 2017

Outraged about the Google diversity memo? I want you to think about it.

Chairs. [Image: Verco]
That leaked internal memo from James Damore at Google? The one that says one shouldn’t expect employees in all professions to reflect the demographics of the whole population? Well, that was a pretty dumb thing to write. But not because it’s wrong. What’s dumb is that Damore thought he could have a reasoned discussion about this. In the USA, of all places.

The version of Damore’s memo that first appeared on Gizmodo missed references and images. But meanwhile, the diversity memo has its own website and it comes with links and graphics.

Damore’s memo strikes me as a pamphlet produced by a well-meaning, but also utterly clueless, young white man. He didn’t deserve to get fired for this. He deserved maybe a slap on the too-quickly typing fingers. But in his world, asking for discussion is apparently enough to get fired.

I don’t normally write about the underrepresentation of women in science. Reason is I don’t feel fit to represent the underrepresented. I just can’t seem to appropriately suffer in my male-dominated environment. To the extent that one can trust online personality tests, I’m an awkwardly untypical female. It’s probably unsurprising I ended up in theoretical physics.

There is also a more sinister reason I keep my mouth shut. It’s that I’m afraid of losing what little support I have among the women in science when I stab them in the back.

I’ve lived in the USA for three years and for three more years in Canada. On several occasions during these years, I’ve been told that my views about women in science are “hardcore,” “controversial,” or “provocative.” Why? Because I stated the obvious: Women are different from men. On that account, I’m totally with Damore. A male-female ratio close to one is not what we should expect in all professions – and not what we should aim at either.

But the longer I keep my mouth shut, the more I think my silence is a mistake. Because it means leaving the discussion – and with it, power – to those who shout the loudest. Like CNBC. Which wants you to be “shocked” by Damore’s memo in a rather transparent attempt to produce outrage and draw clicks. Are you outraged yet?

Increasingly, media-storms like this make me worry about the impression scientists give to the coming generation. Give to kids like Damore. I’m afraid they think we’re all idiots because the saner of us don’t speak up. And when the kids think they’re oh-so-smart, they’ll produce pamphlets to reinvent the wheel.

Fact is, though, much of the data in Damore’s memo is well backed-up by research. Women indeed are, on the average, more neurotic than men. It’s not an insult, it’s a common term in psychology. Women are also, on the average, more interested in people than in things. They do, on the average, value work-life balance more, react differently to stress, compete by other rules. And so on.

I’m neither a sociologist nor a psychologist, but my understanding of the literature is that these are uncontroversial findings. And not new either. Women are different from men, both by nature and by nurture, though it remains controversial just what is nurture and what is nature. But the cause is beside the point for the question of occupation: Women are different in ways that plausibly affect their choice of profession.

No, the problem with Damore’s argument isn’t the starting point, the problem is the conclusions that he jumps to.

To begin with, even I know most of Google’s work is people-centric. It’s either serving people directly, or analyzing people-data, or imagining the people-future. If you want to spend your life with things and ideas rather than people, then go into engineering or physics, but not into software-development.

That coding actually requires “female” skills was spelled out clearly by Yonatan Zunger, a former Google employee. But since I care more about physics than software-development, let me leave this aside.

The bigger mistake in Damore’s memo is one I see frequently: Assuming that job skills and performance can be deduced from differences among demographic groups. This just isn’t so. I believe for example if it wasn’t for biases and unequal opportunities, then the higher ranks in science and politics would be dominated by women. Hence, aiming at a 50-50 representation gives men an unfair advantage. I challenge you to provide any evidence to the contrary.

I’m not remotely surprised, however, that Damore naturally assumes the differences between typically female and male traits mean that men are more skilled. That’s the bias he thinks he doesn’t have. And, yeah, I’m likewise biased in favor of women. Guess that makes us even then.

The biggest problem with Damore’s memo however is that he doesn’t understand what makes a company successful. If a significant fraction of employees think that diversity is important, then it is important. No further justification is needed for this.

Yes, you can argue that increasing diversity may not improve productivity. The data situation on this is murky, to say the least. There’s some story about female CEOs in Sweden that supposedly shows something – but I want to see better statistics before I buy that. And in any case, the USA isn’t Sweden. More importantly, productivity hinges on employees’ well-being. If a diverse workplace is something they value, then that’s something to strive for, period.

What Damore seems to have aimed at, however, was merely to discuss the best way to deal with the current lack of diversity. Biases and unequal opportunities are real. (If you doubt that, you are a problem and should do some reading.) This means that the current representation of women, underprivileged and disabled people, and other minorities, is smaller than it would be in that ideal world which we don’t live in. So what to do about it?

One way to deal with the situation is to wait until the world catches up. Educate people about bias, work to remove obstacles to education, change societal gender images. This works – but it works very slowly.

Worse, one of the biggest obstacles that minorities face is a chicken-and-egg problem that time alone doesn’t cure. People avoid professions in which there are few people like them. This is a hurdle which affirmative action can remove, fast and efficiently.

But there’s a price to pay for preferentially recruiting the presently underrepresented. Which is that people supported by diversity efforts face a new prejudice: They weren’t hired because they’re skilled. They were hired because of some diversity policy!

I used to think this backlash has to be avoided at all costs, hence was firmly against affirmative action. But during my years in Sweden, I saw that it does work – at least for women – and also why: It makes their presence unremarkable.

In most of the European North, a woman in a leading position in politics or industry is now commonplace. It’s nothing to stare at and nothing to talk about. And once it’s commonplace, people stop paying attention to a candidate’s gender, which in return reduces bias.

I don’t know, though, if this would also work in science which requires an entirely different skill-set. And social science is messy – it’s hard to tell how much of the success in Northern Europe is due to national culture. Hence, my attitude towards affirmative action remains conflicted.

And let us be clear that, yes, such policies mean every once in a while you will not hire the most skilled person for a job. Therefore, a value judgement must be made here, not a logical deduction from data. Is diversity important enough for you to temporarily tolerate an increased risk of not hiring the most qualified person? That’s the trade-off nobody seems willing to spell out.

I also have to spell out that I am writing this as a European who now works in Europe again. For me, the most relevant contribution to equal opportunity is affordable higher education and health insurance, as well as governmentally paid maternity and parental leave. Without that, socially disadvantaged groups remain underrepresented, and companies continue to fear for revenue when hiring women of fertile age. That, in all fairness, is an American problem not even Google can solve.

But one also doesn’t solve a problem by yelling “harassment” each time someone asks to discuss whether a diversity effort is indeed effective. I know from my own experience, and a poll conducted at Google confirms, that Damore’s skepticism about current practices is widespread.

It’s something we should discuss. It’s something Google should discuss. Because, for better or worse, this case has attracted much attention. Google’s handling of the situation will set an example for others.

Damore was fired, basically, for making a well-meant, if amateurish, attempt at institutional design, based on woefully incomplete information he picked from published research studies. But however imperfect his attempt, he was fired, in short, for thinking on his own. And what example does that set?

Thursday, August 03, 2017

Self-tuning brings wireless power closer to reality

Cables under my desk.
One of the unlikelier fights I picked while blogging was with an MIT group that aimed to wirelessly power devices – by tunneling:
“If you bring another resonant object with the same frequency close enough to these tails then it turns out that the energy can tunnel from one object to another,” said Professor Soljacic.
They had proposed a new method for wireless power transfer using two electric circuits in magnetic resonance. But there’s no tunneling in such a resonance. Tunneling is a quantum effect. Single particles tunnel. Sometimes. But kilowatts definitely don’t.

I reached out to the professor’s coauthor, Aristeidis Karalis, who told me, even more bizarrely: “The energy stays in the system and does not leak out. It just jumps from one to the other back and forth.”

I had to go and calculate the Poynting vector to make clear that the energy is – as always – transmitted from one point to another by going through all points in between. It doesn’t tunnel, and it doesn’t jump either. For the powering device the MIT guys envisioned, with its resonant coils, the energy flow is focused between the coils’ centers.

The difference between “jumping” and “flowing” energy is more than just words. Once you know that energy is flowing, you also know that if you’re in its way you might get some of it. And the more focused the energy, the higher the possible damage. This means large devices have to be close together and the energy must be spread out over large surfaces to comply with safety standards.

Back then, I did some estimates. If you want to transfer, say, 1 Watt, and you distribute it over a coil with a radius of 30 cm, you end up with a density of roughly 1 mW/cm². That already exceeds the safety limit (in the frequency range 30-300 MHz). And that’s leaving aside that there usually must be much more energy in the resonance field than what’s actually transmitted. And 30 cm isn’t exactly handy. In summary, it’ll work – but it’s not practical, and it won’t charge the laptop without roasting what gets in the way.
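
For those who want to redo the arithmetic, here is the estimate in a few lines of code (ballpark only; whether you get a few tenths or about one mW/cm² depends on exactly which area you assume):

```python
# A back-of-envelope sketch of the estimate above: spread 1 Watt over the area
# of a coil with 30 cm radius and look at the resulting power density.
import math

power_W = 1.0
radius_cm = 30.0
area_cm2 = math.pi * radius_cm**2            # ~2800 cm^2

density = power_W * 1000 / area_cm2          # in mW/cm^2
print(f"power density: {density:.2f} mW/cm^2")
# A few tenths of a mW/cm^2 for the transmitted Watt alone, i.e. the same order
# of magnitude as quoted above. Since the resonant field must hold considerably
# more energy than what is actually transferred, the relevant density is higher still.
```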

The MIT guys meanwhile founded a company, Witricity, and dropped the tunneling tale.

Another problem with using resonance for wireless power is that the efficiency depends on the distance between the circuits. It doesn’t work well when they’re too far apart, and not when they’re too close together either. That’s not great for real-world applications.

But in a recent paper published in Nature, a group from Stanford put forward a solution to this problem. And even though I’m not too enchanted by transfering power by magnetic resonance, it is a really neat idea:
Usually the resonance between two circuits is designed, meaning the receiver’s and sender’s frequencies are tuned to work together. But in the new paper, the authors instead let the frequency of the sender range freely – they merely feed it energy. They then show that the coupled system will automatically tune to a resonance frequency at which efficiency is maximal.

The maximal efficiency they reach is the same as with the fixed-frequency circuits. But it works better for shorter distances. While the usual setting is inefficient both at too short and too long distances, the self-tuned system has a stable efficiency up to some distance, and then decays. This makes the new arrangement much more useful in practice.
Efficiency of energy transfer as a function of distance between the coils (schematic). The blue curve is for the usual setting with pre-fixed frequency, the red curve for the self-tuned circuits.

The group didn’t only calculate this, they also did an experiment to show that it works. One limitation of the present setup, though, is that it works only in one direction, so it’s still not too practical. But it’s a big step forward.

Personally, I’m more optimistic about using ultrasound for wireless power transfer than about magnetic resonance, because ultrasound presently reaches larger distances. Both technologies, however, are still very much in their infancy, so it’s hard to tell which one will win out.

(Note added: Ultrasound not looking too convincing either, ht Tim, see comments for more.)

Let me not forget to mention that, in an ingenious paper which was completely lost on the world, I showed that you don’t need to transfer the total energy to the receiver. You only need to send the information necessary to decrease entropy in the receiver’s surroundings; then it can draw energy from the environment.

Unfortunately, I could think of how to do this only for a few atoms at a time. And, needless to say, I didn’t do any experiment – I’m a theoretician after all. While I’m sure in a few thousand years everyone will use my groundbreaking insight, until then, it’s coils or ultrasound or good, old cables.

Friday, July 28, 2017

New paper claims string theory can be tested with Bose-Einstein-Condensates

Fluorescence image of
Bose-Einstein-Condensate.
Image Credits: Stefan Kuhr and
Immanuel Bloch, MPQ
String theory is infamously detached from experiment. But in a new paper, a group from Mexico puts forward a proposal to change that:
    String theory phenomenology and quantum many–body systems
    Sergio Gutiérrez, Abel Camacho, Héctor Hernández
    arXiv:1707.07757 [gr-qc]
Before we go ahead, let me be clear that they don’t want to test string theory itself, but the presence of additional dimensions of space, which is a prediction of string theory.

In the paper, the authors calculate how additional space-like dimensions affect a condensate of ultra-cold atoms, known as a Bose-Einstein-Condensate. At such low temperatures, the atoms transition to a state where their quantum wave-functions act as one and the system begins to display quantum effects, such as interference, throughout.

In the presence of extra-dimensions, every particle’s wave-function has higher harmonics because the extra-dimensions have to close up, in the simplest case like circles. The particles’ wave-functions have to fit into the extra dimensions, meaning an integer number of wavelengths must wrap around each circle.

Each of the additional dimensions has a radius of about a Planck length, which is 10⁻³⁵ m, or 15 orders of magnitude smaller than what even the LHC can probe. To excite these higher harmonics, you correspondingly need an energy of 10¹⁵ TeV, or 15 orders of magnitude higher than what the LHC can produce.

How do the extra-dimensions of string theory affect the ultra-cold condensate? They don’t. That’s because at those low temperatures there is no way you can excite any of the higher harmonics. Heck, even the total energy of the condensates presently used isn’t high enough. There’s a reason string theory is famously detached from experiment – because it’s a damned high energy you must reach to see stringy effects!
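
To see just how hopeless it is, here is a rough sketch of the numbers (my own estimate, taking the compactification radius to be a Planck length and the condensate temperature to be a nano-Kelvin):

```python
# Compare the energy of the lowest excitation of a Planck-length-sized extra
# dimension, E ~ hbar*c/R, with the thermal energy of a condensate at a
# nano-Kelvin, E ~ k_B*T.
import math

hbar = 1.055e-34     # J s
c    = 2.998e8       # m/s
k_B  = 1.381e-23     # J/K

R = 1.6e-35          # compactification radius ~ Planck length, in m
T = 1e-9             # condensate temperature in K

E_extra_dim = hbar * c / R      # ~2e9 J, roughly the Planck energy
E_thermal   = k_B * T           # ~1.4e-32 J

print(f"excitation energy / thermal energy ~ 10^{math.log10(E_extra_dim / E_thermal):.0f}")
# a mismatch of roughly 41 orders of magnitude
```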

So what’s the proposal in the paper then? There isn’t one. They simply ignore that the higher harmonics can’t be excited and make a calculation. Then they estimate that one needs a condensate of about a thousand particles to measure a discontinuity in the specific heat, which depends on the number of extra-dimensions.

It’s probably correct that this discontinuity depends on the number of extra-dimensions. Unfortunately, the authors don’t go back and check what mass per particle in the condensate is needed to make this work. I’ve put in the numbers and get something like a million tons. That gigantic mass becomes necessary because it has to combine with the minuscule temperature of about a nano-Kelvin to give a geometric mean that exceeds the Planck mass.

In summary: Sorry, but nobody’s going to test string theory with Bose-Einstein-Condensates.

Wednesday, July 19, 2017

Penrose claims LIGO noise is evidence for Cyclic Cosmology

Noise is the physicists’ biggest enemy. Unless you are a theorist whose pet idea masquerades as noise. Then you are best friends with noise. Like Roger Penrose.
    Correlated "noise" in LIGO gravitational wave signals: an implication of Conformal Cyclic Cosmology
    Roger Penrose
    arXiv:1707.04169 [gr-qc]

Roger Penrose made his name with the Penrose-Hawking theorems and twistor theory. He is also well-known for writing books with very many pages, most recently “Fashion, Faith, and Fantasy in the New Physics of the Universe.”

One man’s noise is another man’s signal.
Penrose doesn’t like most of what’s currently in fashion, but believes that human consciousness can’t be explained by known physics and that the universe is cyclically reborn. This cyclic cosmology, so goes his recent claim, gives rise to correlations in the LIGO noise – just like what’s been observed.

The LIGO experiment consists of two interferometers in the USA, separated by about 3,000 km. A gravitational wave signal should pass through both detectors with a delay determined by the time it takes the gravitational wave to sweep from one US coast to the other. This delay is typically of the order of 10 ms, but its exact value depends on where the waves came from.
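
The 10 ms are simply the light travel time between the two sites; if you want to check (my own arithmetic, assuming a separation of roughly 3,000 km):

```python
# Maximum time delay between the two LIGO detectors: gravitational waves travel
# at the speed of light, so the delay is at most the site separation divided by c.
c = 2.998e8              # m/s
separation = 3.0e6       # m, roughly 3,000 km between Hanford and Livingston

max_delay = separation / c
print(f"maximum time delay: {max_delay * 1000:.0f} ms")
# ~10 ms; it is shorter if the wave does not come in along the line connecting
# the two detectors.
```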

The correlation between the two LIGO detectors is one of the most important criteria used by the collaboration to tell noise from signal. The noise itself, however, isn’t entirely uncorrelated. Some sources of the correlations are known, but some are not. This is not unusual – understanding the detector is as much part of a new experiment as is the measurement itself. The LIGO collaboration, needless to say, thinks everything is under control and the correlations are adequately taken care of in their signal analysis.

A Danish group of researchers begs to differ. They recently published a criticism on the arXiv in which they complain that after subtracting the signal of the first gravitational wave event, correlations remain at the same time-delay as the signal. That clearly shouldn’t happen. First and foremost it would demonstrate a sloppy signal extraction by the LIGO collaboration.

A reply to the Danes’ criticism by Ian Harry from the LIGO collaboration quickly appeared on Sean Carroll’s blog. Ian pointed out some supposed mistakes in the Danish group’s paper. Turns out, though, the mistake was on his side. Once corrected, Harry’s analysis reproduces the correlations which shouldn’t be there. Bummer.

Ian Harry did not respond to my requests for comment. Neither did Alessandra Buonanno from the LIGO collaboration, who was also acknowledged by the Danish group. David Shoemaker, the current LIGO spokesperson, let me know he has “full confidence” in the results, and also, the collaboration is working on a reply, which might however take several months to appear. In other words, go away, there’s nothing to see here.

But while we wait for the LIGO response, speculations abound as to what might cause the supposed correlation. Penrose beat everyone to it with an explanation, even Craig Hogan, who has run his own experiment looking for correlated noise in interferometers, and whom I was counting on.

Penrose’s cyclic cosmology works by gluing the big bang together with what we usually think of as the end of the universe – an infinite accelerated expansion into nothingness. Penrose conjectures that both phases – the beginning and the end – are conformally invariant, which means they possess a symmetry under a stretching of distance scales. Then he identifies the end of the universe with the beginning of a new one, creating a cycle that repeats indefinitely. In his theory, what we think of as inflation – the accelerated expansion in the early universe – becomes the final phase of acceleration in the cycle preceding our own.

Problem is, the universe as we presently see it is not conformally invariant. What screws up conformal invariance is that particles have masses, and these masses also set a scale. Hence, Penrose has to assume that eventually all particle masses fade away so that conformal invariance is restored.

There’s another problem. Since Penrose’s conformal cyclic cosmology has no inflation, it also lacks a mechanism to create temperature fluctuations in the cosmic microwave background (CMB). Luckily, however, the theory also gives rise to a new scalar particle that couples only gravitationally and provides new phenomenology. Penrose named it “erebon” after Erebos, the ancient Greek god of darkness.

Erebos, the God of Darkness,
according to YouTube.
The erebons have a mass of about 10⁻⁵ gram because “what else could it be,” and they have a lifetime determined by the cosmological constant, presumably also because what else could it be. (Aside: Note that these are naturalness arguments.) The erebons make up dark matter, and their decay causes gravitational waves that seed the CMB temperature fluctuations.
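
In case you wonder where the 10⁻⁵ gram comes from: it’s the Planck mass, the only mass scale you can build from ħ, c, and G (my own quick check):

```python
# The Planck mass, sqrt(hbar*c/G), comes out at about 10^-5 gram.
import math

hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 / (kg s^2)

m_planck = math.sqrt(hbar * c / G)
print(f"Planck mass: {m_planck:.2e} kg = {m_planck * 1000:.2e} g")  # ~2.2e-8 kg ~ 1e-5 g
```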

Since erebons are created at the beginning of each cycle and decay away through it, they also create a gravitational wave background. Penrose then argues that a gravitational wave signal from a binary black hole merger – like the ones LIGO has observed – should be accompanied by noise-like signals from erebons that decayed at the same time in the same galaxy. Just that this noise-like contribution would be correlated with the same time-difference as the merger signal.

In his paper, Penrose does not analyze the details of his proposal. He merely writes:
“Clearly the proposal that I am putting forward here makes many testable predictions, and it should not be hard to disprove it if it is wrong.”
In my impression, this is a sketchy idea and I doubt it will work. I don’t have a major problem with inventing some particle to make up dark matter, but I have a hard time seeing how the decay of a Planck-mass particle can give rise to a signal comparable in strength to a black hole merger (or why several of them would add up exactly for a larger signal).

Even taking this at face value, the decay signals wouldn’t come from only one galaxy but from all galaxies, so the noise should be correlated all over and at pretty much all time-scales – not just at the 12 ms the Danish group has claimed. Worst of all, the dominant part of the signal would come from our own galaxy, so why haven’t we seen this already?

In summary, one can’t blame Penrose for being fashionable. But I don’t think that erebons will be added to the list of LIGO’s discoveries.

Thursday, July 13, 2017

Nature magazine publishes comment on quantum gravity phenomenology, demonstrates failure of editorial oversight

I have a headache and
blame Nature magazine for it.
For about 15 years, I have worked on quantum gravity phenomenology, which means I study ways to experimentally test the quantum properties of space and time. Since 2007, my research area has had its own conference series, “Experimental Search for Quantum Gravity,” which took place most recently in September 2016 in Frankfurt, Germany.

Extrapolating from the people I personally know, I estimate that about 150-200 people currently work in this field. But I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral, who just wrote a comment for Nature magazine complaining that the research area doesn’t exist.

In their comment, titled “Witness gravity’s quantum side in the lab,” Marletto and Vedral call for “a focused meeting bringing together the quantum- and gravity-physics communities, as well as theorists and experimentalists.” Nice.

If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. The above mentioned conference series is only the most regular meeting on quantum gravity phenomenology. Also the Marcel Grossmann Meeting has sessions on the topic. Indeed, I am writing this from a conference here in Trieste, which is about “Probing the spacetime fabric: from concepts to phenomenology.”

Marletto and Vedral point out that it would be great if one could measure gravitational fields in quantum superpositions to demonstrate that gravity is quantized. They go on to lay out their own idea for such experiments, but their interest in the topic apparently didn’t go far enough to either look up the literature or actually put in the numbers.

Yes, it would be great if we could measure the gravitational field of an object in a superposition of, say, two different locations. Problem is, heavy objects – whose gravitational fields are easy to measure – decohere quickly and don’t have quantum properties. On the other hand, objects which are easy to bring into quantum superpositions are too light to measure their gravitational field.

To be clear, the challenge here is to measure the gravitational field created by the objects themselves. It is comparably easy to measure the behavior of quantum objects in the gravitational field of the Earth. That has something to do with quantum and something to do with gravity, but nothing to do with quantum gravity because the gravitational field isn’t quantized.

In their comment, Marletto and Vedral go on to propose an experiment:
“Likewise, one could envisage an experiment that uses two quantum masses. These would need to be massive enough to be detectable, perhaps nanomechanical oscillators or Bose–Einstein condensates (ultracold matter that behaves as a single super-atom with quantum properties). The first mass is set in a superposition of two locations and, through gravitational interaction, generates Schrödinger-cat states on the gravitational field. The second mass (the quantum probe) then witnesses the ‘gravitational cat states’ brought about by the first.”
This is truly remarkable, but not because it’s such a great idea. It’s because Marletto and Vedral believe they’re the first to think about this. Of course they are not.

The idea of using Schrödinger-cat states has most recently been discussed here. I didn’t write about the paper on this blog because the experimental realization faces giant challenges and I think it won’t work. There is also Anastopolous and Hu’s CQG paper about “Probing a Gravitational Cat State” and a follow-up paper by Derakhshani, which likewise go unmentioned. I’d really like to know how Marletto and Vedral think they can improve on the previous proposals. Letting a graphic designer make a nice illustration to accompany their comment doesn’t really count for much in my book.

The currently most promising attempt to probe quantum gravity indeed uses nanomechanical oscillators and comes from the group of Markus Aspelmeyer in Vienna. I previously discussed their work here. This group is about six orders of magnitude away from being able to measure such superpositions. The Nature comment doesn’t mention it either.

The prospects of using Bose-Einstein condensates to probe quantum gravity have been discussed back and forth for two decades, but it is clear that this isn’t presently the best option. The reason is simple: Even if you take the largest condensate that has been created to date – something like 10 million atoms – and you calculate the total mass, you are still way below the mass of the nanomechanical oscillators. And that’s leaving aside the difficulty of creating and sustaining the condensate.
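
To put rough numbers on that (my own estimate; the nanogram-scale oscillator mass is an assumption for the comparison, not a figure from the Nature comment):

```python
# Rough comparison: total mass of a record-size condensate of ~10 million
# rubidium atoms versus a nanomechanical oscillator, here assumed to weigh
# about a nanogram.
u = 1.661e-27                  # atomic mass unit in kg
m_Rb = 87 * u                  # mass of a rubidium-87 atom
N_atoms = 1e7                  # ~10 million atoms

m_condensate = N_atoms * m_Rb  # ~1.4e-18 kg, about a femtogram
m_oscillator = 1e-12           # kg, assumed nanogram-scale oscillator

print(f"condensate mass: {m_condensate:.1e} kg")
print(f"oscillator / condensate mass ratio: {m_oscillator / m_condensate:.0e}")  # close to a million
```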

There are some other possible gravitational effects for Bose-Einstein condensates which have been investigated, but these come from violations of the equivalence principle, or rather the ambiguity of what the equivalence principle in quantum mechanics means to begin with. That’s a different story though because it’s not about measuring quantum superpositions of the gravitational field.

Besides this, there are other research directions. Paternostro and collaborators, for example, have suggested that a quantized gravitational field can exchange entanglement between objects in a way that a classical field can’t. That too, however, is a measurement which is not presently technologically feasible. A proposal closer to experimental test is that by Belenchia et al, laid out in their PRL about “Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators” (which I wrote about here).

Others look for evidence of quantum gravity in the CMB, in gravitational waves, or search for violations of the symmetries that underlie General Relativity. You can find a little summary in my blogpost “How Can we test Quantum Gravity”  or in my Nautilus essay “What Quantum Gravity Needs Is More Experiments.”

Do Marletto and Vedral mention any of this research on quantum gravity phenomenology? No.

So, let’s take stock. Here, we have two scientists who don’t know anything about the topic they write about and who ignore the existing literature. They faintly reinvent an old idea without being aware of the well-known difficulties, without quantifying the prospects of ever measuring it, and without giving proper credits to those who previously wrote about it. And they get published in one of the most prominent scientific journals in existence.

Wow. This takes us to a whole new level of editorial incompetence.

The worst part isn’t even that Nature magazine claims my research area doesn’t exist. No, it’s that I’m a regular reader of the magazine – or at least have been so far – and rely on their editors to keep me informed about what happens in other disciplines. For example with the comments pieces. And let us be clear that these are, for all I know, invited comments and not selected from among unsolicited submissions. So, some editor deliberately chose these authors.

Now, in this rare case in which I can judge their content’s quality, I find the Nature editors picked two people who have no idea what’s going on, who chew up 30-year-old ideas, and who omit relevant citations of timely contributions.

Thus, for me the worst part is that I will henceforth have to suspect Nature’s coverage of other research areas is as miserable as this.

Really, doing as much as Googling “Quantum Gravity Phenomenology” is more informative than this Nature comment.

Sunday, July 09, 2017

Stephen Hawking’s 75th Birthday Conference: Impressions

I’m back from Cambridge, where I attended the conference “Gravity and Black Holes” in honor of Stephen Hawking’s 75th birthday.

First things first, the image on the conference poster, website, banner, etc is not a psychedelic banana, but gravitational wave emission in a black hole merger. It’s a still from a numerical simulation done by a Cambridge group that you can watch in full on YouTube.



What do gravitational waves have to do with Stephen Hawking? More than you might think.

Stephen Hawking, together with Gary Gibbons, wrote one of the first papers on the analysis of gravitational wave signals. That was in 1971, shortly after gravitational waves were first “discovered” by Joseph Weber. Weber’s detection was never confirmed by other groups. I don’t think anybody knows just what he measured, but whatever it was, it clearly wasn’t gravitational waves. Also Hawking’s – now famous – area theorem stemmed from this interest in gravitational waves, which is why the paper is titled “Gravitational Radiation from Colliding Black Holes.”

Second things second, the conference launched on Sunday with a public symposium, featuring not only Hawking himself but also Brian Cox, Gabriela Gonzalez, and Martin Rees. I didn’t attend because usually nothing of interest happens at these events. I think it was recorded, but haven’t seen the recording online yet – will update if it becomes available.

Gabriela Gonzalez was spokesperson of the LIGO collaboration when the first (real) gravitational wave detection was announced, so you have almost certainly seen her. She also gave a talk at the conference on Tuesday. LIGO’s second run is almost done now, and will finish in August. Then it’s time for the next schedule upgrade. Maximal design sensitivity isn’t expected to be reached until 2020. Above all, in the coming years, we’ll almost certainly see much better statistics and smaller error bars.

The supposed correlations in the LIGO noise were worth a joke by the session’s chairman, and I had the pleasure of talking to another member of the LIGO collaboration who recognized me as the person who wrote that upsetting Forbes piece. I clearly made some new friends there^^. I’d have some more to say about this, but will postpone this to another time.

Back to the conference. Monday began with several talks on inflation, most of which were rather basic overviews, so really not much new to report. Slava Mukhanov delivered a very Russian presentation, complaining about people who complain that inflation isn’t science. Andrei Linde then spoke about attractors in inflation, something I’ve been looking into recently, so this came in handy.

Monday afternoon, we had Jim Hartle speaking about the No-Boundary proposal – he was not at all impressed by Neil Turok et al’s recent criticism – and Raphael Bousso about the ever-tightening links between general relativity and quantum field theory. Raphael’s was probably the most technical talk of the meeting. His strikes me as a research program that will still be running in the next century. There’s much to learn, and we’ve barely just begun.

On Tuesday, besides the already mentioned LIGO talk, there were a few other talks about numerical general relativity – informative but also somehow unexciting. In the afternoon, Ted Jacobson spoke about fluid analogies for gravity (which I wrote about here), and Jeff Steinhauer reported on his (still somewhat controversial) measurement of entanglement in the Hawking radiation of such a fluid analogy (which I wrote about here.)

Wednesday began with a rather obscure talk about how to shove information through wormholes in AdS/CFT that I am afraid might have been somehow linked to ER=EPR, but I missed the first half so not sure. Gary Gibbons then delivered a spirited account of gravitational memory, though it didn’t become clear to me if it’s of practical relevance.

Next, Andy Strominger spoke about infrared divergences in QED. Hearing him speak, the whole business of using soft gravitons to solve the information loss problem suddenly made a lot of sense! Unfortunately I immediately forgot why it made sense, but I promise to do more reading on that.

Finally, Gary Horowitz spoke about all the things that string theorists know and don’t know about black hole microstates, which I’d sum up as: they know less than I thought they did.

Stephen Hawking attended some of the talks, but didn’t say anything, except for a garbled sentence that seems to have been played back by accident and stumped Ted Jacobson.

All together, it was a very interesting and fun meeting, and also a good opportunity to have coffee with friends both old and new. Besides food for thought, I also brought back a conference bag, a matching pen, and a sinus infection which I blame on the air conditioning in the lecture hall.

Now I have a short break to assemble my slides for next week’s conference and then I’m off to the airport again.

Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak. [Img Src]
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means that for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10⁻⁶⁰ or so shortly after the Big Bang.
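
If you want to see where the 10⁻⁶⁰ comes from, here is a minimal sketch of the backward extrapolation (my own rough values for the density parameters; the precise exponent depends on how early you start):

```python
# The curvature contribution evolves as Omega_k(a) = Omega_k0 / (a^2 * E(a)^2),
# with E(a)^2 the usual Friedmann sum over radiation, matter, curvature, and
# cosmological constant. Run it backward from today's upper bound of 0.005.
Omega_r0, Omega_m0, Omega_L0 = 9e-5, 0.3, 0.7   # rough present-day values
Omega_k0 = 0.005                                 # observational upper bound

def Omega_k(a):
    E2 = Omega_r0 / a**4 + Omega_m0 / a**3 + Omega_k0 / a**2 + Omega_L0
    return Omega_k0 / (a**2 * E2)

for a in (1e-4, 1e-10, 1e-28):   # progressively earlier moments
    print(f"a = {a:.0e}:  Omega_k ~ {Omega_k(a):.1e}")
# At a ~ 1e-28 this lands at ~1e-55; go back a bit further and you reach the
# 1e-60 quoted above.
```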

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10⁻⁶⁰ any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion shortly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10¹⁹⁷, the value today will be compatible with observation.

Ah, you might say, but clearly there are more numbers smaller than 10¹⁹⁷ than there are numbers smaller than 10⁻⁶⁰, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d manage to measure the curvature parameter today to a precision of 60 digits after the point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather, I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that fine-tuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Away Note

I’ll be traveling for the next two weeks. First to Cambridge to celebrate Stephen Hawking’s 75th birthday (which was in January), then to Trieste for a conference on “Probing the spacetime fabric: from concepts to phenomenology.” Rant coming up later today, but after that please prepare for a slow time.

Monday, June 26, 2017

Dear Dr B: Is science democratic?

    “Hi Bee,

    One of the often repeated phrases here in Italy by so called “science enthusiasts” is that “science is not democratic”, which to me sounds like an excuse for someone to justify some authoritarian or semi-fascist fantasy.

    We see this on countless “science pages”, one very popular example being Fare Serata Con Galileo. It’s not a bad page per se, quite the contrary, but the level of comments, including variations of “Democracy is overrated”, “Darwin works to eliminate weak and stupid people”, and the usual “Science is not democratic”, is unbearable. It underscores a troubling “sympathy for authoritarian politics” that to me seems to be more and more common among “science enthusiasts”. The classic example that gets made is “the speed of light is not voted on”, which to me, as true as it may be, has a sinister resonance.

    Could you comment on this on your blog?

    Luca S.”


Dear Luca,

Wow, I had no idea there’s so much hatred in the backyards of science communication.

[Image: Hand count at a convention of the German party CDU. Source: AFP]
It’s correct that science isn’t democratic, but that doesn’t mean it’s fascistic. Science is a collective enterprise and a type of adaptive system, just like democracy is. But science isn’t democratic any more than sausage is a fruit just because you can eat both.

In an adaptive system, small modifications create a feedback that leads to optimization. The best-known example is probably Darwinian evolution, in which a species’ genetic information receives feedback through natural selection, thereby optimizing the odds of successful reproduction. A market economy is also an adaptive system. Here, the feedback happens through pricing. A free market optimizes “utility,” which is, roughly speaking, a measure of the agents’ (customers’ and producers’) satisfaction.

Democracy too is an adaptive system. Its task is to match decisions that affect the whole collective with the electorate’s values. We use democracy to keep our “is” close to the “ought.”

Democracies are more stable than monarchies or autocracies because a leader who is independent of feedback is unlikely to continuously make decisions that the governed approve of. And the more the governed disapprove, the more likely they are to chop off the king’s head. Democracy, hence, works better than monarchy for the same reason a free market works better than a planned economy: It uses feedback for optimization, and thereby increases the probability of serving peoples’ interests.

The scientific system too uses feedback for optimization – this is the very basis of the scientific method: A hypothesis that does not explain observations has to be discarded or amended. But that’s about where similarities end.

The most important difference between the scientific, democratic, and economic system is the weight of an individual’s influence. In a free market, influence is weighted by wealth: The more money you can invest, the more influence you can have. In a democracy, each voter’s opinion has the same weight. That’s pretty much the definition of democracy – and note that this is a value in itself.

In science, influence is correlated with expertise. While expertise doesn’t guarantee influence, an expert is more likely to hold relevant knowledge, hence expertise is in practice strongly correlated with influence.

There are a lot of things that can go wrong with scientific self-optimization – and a lot of things do go wrong – but that’s a different story and shall be told another time. Still, optimizing hypotheses by evaluating empirical adequacy is how it works in principle. Hence, science clearly isn’t democratic.

Democracy, however, plays an important role for science.

For science to work properly, scientists must be free to communicate and discuss their findings. Non-democratic societies often stifle discussion on certain topics which can create a tension with the scientific system. This doesn’t have to be the case – science can flourish just fine in non-democratic societies – but free speech strongly links the two.

Science also plays an important role for democracy.

Politics isn’t done once the electorate has been polled on what future they would like to see. Elected representatives then have to find out how to best work towards this future, and scientific knowledge is necessary to get from the “is” to the “ought.”

But things often go wrong at the step from “is” to “ought.” Trouble is, the scientific system does not export knowledge in a format that can be directly imported by the political system. The information that elected representatives would need to make decisions is a breakdown of predictions with quantified risks and uncertainties. But science doesn’t come with a mechanism to aggregate knowledge. For an outsider, it’s a mess of technical terms and scientific papers and conferences – and every possible opinion seems to be defended by someone!

As a result, public discourse often draws on the “scientific consensus” but this is a bad way to quantify risk and uncertainty.

To begin with, scientists are terribly disagreeable, and the only consensuses I know of are those on thousand-year-old questions. More importantly, counting the number of people who agree with a statement simply isn’t an accurate quantifier of certainty. The result of such counting inevitably depends on how much expertise the counted people have: Too little expertise, and they’re likely to be ill-informed. Too much expertise, and they’re likely to have personal stakes in the debate. Worse still, the head-count can easily be skewed by pouring money into some research programs.

Therefore, the best way we presently have to make scientific knowledge digestible for politicians is to use independent panels. Such panels – done well – can circumvent both the problem of personal bias and that of the skewed head count. In the long run, however, I think we need a fourth arm of government to prevent politicians from attempting to interpret scientific debate themselves. It’s not their job and it shouldn’t be.

But those “science enthusiasts” who you complain about are as wrong-headed as the science deniers who selectively disregard facts that are inconvenient for their political agenda. Both of them confuse opinions about what “ought to be” with the question of how to get there. The former is a matter of opinion, the latter isn’t.

Take the vaccine debate you mentioned, for example. It’s one question what the benefits of vaccination are and who is at risk of side-effects – that’s a scientific question. It’s another question entirely whether we should allow parents to put their own and other peoples’ children at an increased risk of early death or a life of disability. There’s no scientific and no logical argument that tells us where to draw the line.

Personally, I think parents who don’t vaccinate their kids are harming minors and society shouldn’t tolerate such behavior. But this debate has very little to do with scientific authority. Rather, the issue is to what extent parents are allowed to ruin their offspring’s life. Your values may differ from mine.

There is also, I should add, no scientific and no logical argument for counting the vote of everyone (above some quite arbitrary age threshold) with the same weight. Indeed, as Daniel Gilbert argues, we are pretty bad at predicting what will make us happy. If he’s right, then the whole idea of democracy is based on a flawed premise.

So – science isn’t democratic, never has been, never will be. But rather than stating the obvious, we should find ways to better integrate this non-democratically obtained knowledge into our democracies. Claiming that science settles political debate is as stupid as ignoring knowledge that is relevant to making informed decisions.

Science can only help us to understand the risks and opportunities that our actions bring. It can’t tell us what to do.

Thanks for an interesting question.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

[Image: Galaxy pumpkin. Source: The Swell Designer]
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the questions of where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.
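To make the distinction concrete, here is a minimal sketch of the standard expansion rate for a flat universe, using the common w0–wa parameterization as just one example of dark energy that isn’t constant. The parameter values are illustrative only, and this is not the specific model discussed further below.

```python
import numpy as np

def hubble_rate(z, omega_m=0.3, w0=-1.0, wa=0.0):
    """E(z) = H(z)/H0 for a flat universe with matter plus dark energy
    whose equation of state is w(a) = w0 + wa*(1 - a)."""
    a = 1.0 / (1.0 + z)
    # Dark energy density relative to today for this parameterization:
    # rho_DE(a)/rho_DE(1) = a^(-3*(1 + w0 + wa)) * exp(-3*wa*(1 - a))
    rho_de = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m) * rho_de)

z = np.linspace(0.0, 2.0, 5)
print(hubble_rate(z))                    # cosmological constant: w0 = -1, wa = 0
print(hubble_rate(z, w0=-0.9, wa=-0.3))  # a mildly time-dependent dark energy
```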

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.

The Nobel Prize winners Perlmutter, Schmidt, and Riess measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial, though, exactly how fast the expansion is today – that is, how large the current value of the Hubble constant, H0, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4 σ. That’s a probability of less than one in a thousand that the difference is due to random data fluctuations.
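If you want to check that number yourself: assuming a Gaussian distribution, converting a 3.4 σ discrepancy into a tail probability is a one-liner.

```python
# Two-sided Gaussian tail probability for a 3.4 sigma discrepancy.
from scipy.stats import norm

p = 2 * norm.sf(3.4)   # sf(x) = 1 - cdf(x), the upper tail
print(p)               # ~6.7e-4, i.e. less than one in a thousand
```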

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or with the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above-mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists whom I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I had put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of galaxies farther away. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources into partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.
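The following schematic sketch – not the actual KiDS pipeline; the shear and scatter values are made up – illustrates why the inference has to be statistical: the lensing-induced distortion of any single galaxy is buried in its intrinsic shape and only emerges in the average over many galaxies.

```python
import numpy as np

rng = np.random.default_rng(1)

true_shear = 0.01                  # small coherent distortion from weak lensing
n_galaxies = 1_000_000
intrinsic = rng.normal(0.0, 0.3, n_galaxies)   # random intrinsic ellipticities
observed = intrinsic + true_shear              # linearized weak-lensing approximation

print(observed[:3])      # individual shapes: the shear is invisible
print(observed.mean())   # average over many galaxies recovers ~0.01
```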

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This sensitivity shows up in a cosmological parameter imaginatively named σ8, which quantifies the amplitude of matter density fluctuations. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that the explanation that works best, ahead of all others, is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.