‘Funny dark matter:’ Something is wrong about our theory of the expanding universe


Ever since the early 20th century, we’ve known that the universe is expanding. Exactly how quickly it’s expanding, however, remains a vexed question. Our best theoretical model of the universe predicts a rate of expansion that’s about 8% slower than the rate we calculate from actual observations. This discrepancy is referred to as the Hubble tension, and its cause is one of the great unanswered questions of physics.

The most obvious potential explanation is that our measurements are inaccurate. However, a new paper published December 9 in The Astrophysical Journal further validates our existing observations by cross-checking the Hubble Space Telescope data with new observations from the James Webb Space Telescope, and finding that the two agree almost perfectly.

What is the Hubble Constant, and how do we measure it?

The rate at which the universe is expanding is expressed as a value called the Hubble Constant, generally abbreviated as “H0”. One quirk of our expanding universe is that recession speed scales with distance: the further away an object is, the faster it’s moving away from us. To reflect this fact, the constant is expressed in units of kilometers per second per megaparsec (km/s/Mpc), with a megaparsec being a unit of distance equivalent to around 3.26 million light years.

Our best theoretical model for the universe, the Lambda Cold Dark Matter model (“ΛCDM”), predicts a value for H0 of 67–68 km/s/Mpc. Our observations, however, put H0 at around 73 km/s/Mpc. So what’s going on?

To understand this, we first need to understand how H0 is measured. Scientists do this by studying distant objects (stars, galaxies, supernovas) and working out a) how far away they are and b) how fast they’re moving away from us.

Climbing the cosmic distance ladder

The first step is calculating how far away distant objects are, and figuring out cosmic distances is rarely a straightforward task. As Siyang Li, one of the paper’s co-authors, says ruefully, “A lot of our work involves measuring the distances to galaxies—[which] is one of the very hard, hard things to do in astronomy.”

Li explains that to make these calculations, astronomers use the so-called “cosmic distance ladder.” The ladder starts with objects within about 1,000 parsecs of Earth, whose distance we can calculate with simple trigonometry. For more distant objects, Li says, “We really need two pieces of information. One is the apparent magnitude: how bright does the star appear to us on Earth? The other is the intrinsic luminosity of that star: how intrinsically bright is it?”

The difference between these two values is a function of distance: the further away an object is, the dimmer it appears to be. (Imagine an expanding sphere of light rays emanating from a lamp; if you’re close to the lamp, many of those rays will reach you, but as you move further and further away, more and more rays will miss you.) There’s a relatively simple relationship between these two values and the object’s distance, so if we have two of these pieces of information, we can calculate the third. 
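The relationship described above is the inverse-square law, which astronomers usually express as the “distance modulus” connecting apparent magnitude, intrinsic (absolute) magnitude, and distance. A minimal sketch of the calculation, with made-up illustrative magnitudes:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance-modulus relation:
    m - M = 5 * log10(d / 10), with d in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A standard candle with absolute magnitude -19.3 (roughly that of a
# type Ia supernova) observed at apparent magnitude 15.7:
d_pc = distance_parsecs(15.7, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # about 100 Mpc
```

Given any two of the three quantities (apparent magnitude, intrinsic magnitude, distance), the relation yields the third.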

This is useful because there are some categories of object, known as “standard candles,” that all share the same intrinsic luminosity. (Examples include type Ia supernovae, along with a class of stars known as Cepheids.) Once we establish the intrinsic luminosity of a class of standard candle (a process known as calibration), we can then use that information to work out the distance to similar objects that are too far away for their distance to be calculated directly. The process can then be repeated for another class of standard candle.

Once we know how far away an object is, the second piece of information we need is how quickly it’s moving away from us. As the universe expands, the light from such objects takes longer and longer to reach us, and its wavelength is stretched by the expanding spacetime through which it travels. This phenomenon is called “redshift,” and if we can figure out how much the light from a given object is redshifted, we can calculate how quickly the object is moving away from us.
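For nearby objects, the redshift z (the fractional stretching of the light’s wavelength) translates into a recession velocity via the approximation v ≈ c·z, which holds when z is much smaller than 1. A sketch, using hydrogen’s H-alpha spectral line and a made-up observed wavelength:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def redshift(observed_nm, emitted_nm):
    """z = (observed wavelength - emitted wavelength) / emitted wavelength."""
    return (observed_nm - emitted_nm) / emitted_nm

def recession_velocity_km_s(z):
    """Low-redshift approximation v ≈ c * z (valid only for z << 1)."""
    return C_KM_S * z

# Hydrogen's H-alpha line is emitted at 656.28 nm; suppose we observe
# it stretched to 662.84 nm in a distant galaxy's spectrum:
z = redshift(662.84, 656.28)
print(f"z = {z:.4f}, v = {recession_velocity_km_s(z):.0f} km/s")
```

For objects at large redshift, a full relativistic (cosmological) formula is needed, but the linear approximation is enough to convey the idea.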

Calculating the Hubble Constant

Once we have both pieces of information, the actual determination of the Hubble Constant is reasonably straightforward: velocity and distance are related by the equation v = H0d, where v is velocity, d is distance, and H0 is the Hubble Constant.  
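The equation can be turned around: every object with a measured velocity and distance yields an independent estimate H0 = v/d, and averaging over many objects narrows in on the constant. A minimal sketch with made-up illustrative measurements:

```python
def hubble_constant(velocity_km_s, distance_mpc):
    """Invert v = H0 * d to estimate H0 in km/s/Mpc."""
    return velocity_km_s / distance_mpc

# Hypothetical measurements: (recession velocity in km/s, distance in Mpc)
observations = [(2200, 30.0), (3700, 50.5), (5100, 70.2)]

# Each object gives an independent estimate; averaging many of them
# reduces the effect of measurement noise on any single object.
estimates = [hubble_constant(v, d) for v, d in observations]
h0 = sum(estimates) / len(estimates)
print(f"H0 ≈ {h0:.1f} km/s/Mpc")  # roughly 73, in line with observations
```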

If we take this measurement for a large number of distant objects, we can zero in on an ever more precise value for the Hubble Constant. Of course, to do so, it’s crucial that the measurements are correct. Most of our information on distant objects comes from the Hubble Space Telescope, which has spent decades accumulating data, and the launch of the James Webb Space Telescope provided a welcome chance to cross-check that data.

It also opens up new possibilities for research, as Adam Riess, the paper’s lead author and a recipient of the 2011 Nobel Prize in Physics for the discovery of the universe’s accelerating expansion, explains: “JWST has better resolution and sensitivity in the near-infrared. Hubble is better at bluer wavelengths. Hubble’s biggest advantage is that it has been up there longer so it has much more data, [but] once there is enough data from JWST it may surpass Hubble, or they may be used jointly to study the [Hubble] tension.”

For now, the JWST results correlate almost perfectly with existing data, providing further strong evidence that the accuracy of our measurements isn’t the problem. In that case, Riess says, the problem may be with the theory. “Failing to find flaws in the measurements,” he says, “leaves an increasingly likely scenario of a flaw in the model.”

What’s the ΛCDM model, and why does it predict a different Hubble Constant?

As its name suggests, the ΛCDM model is based on two fundamental concepts: the cosmological constant (denoted by the Greek letter “Λ”) and the existence of cold dark matter. The cosmological constant expresses the intrinsic energy of space itself—the mysterious “dark energy” that current estimates suggest makes up around 68% of the energy in the universe. “Cold dark matter”, meanwhile, represents our best understanding of the equally elusive dark matter, which makes up another 27% of the universe’s energy. (Plain old matter, of which stars, planets and humans are made, comprises only a measly 5%.)

The notions of dark energy and dark matter are not arbitrary—dark matter’s existence can be inferred from its effects on galactic rotation, and dark energy is needed to explain the universe’s accelerating expansion. The versions in the ΛCDM model reflect these facts and are also correlated with our observations of the cosmic microwave background, the leftover radiation from the Big Bang.

“Basically,” Riess says, “ΛCDM predicts the physical size of matter/temperature fluctuations in the post-Big Bang Universe. The CMB is used to measure the angular size of those fluctuations, and comparing the two calibrates the Hubble constant.”

Clearly, however, the ongoing problem of the Hubble tension suggests that something isn’t right. Do either Riess or Li have suspicions as to where the root of the problem might be found? “Something in the dark sector,” says Riess. “[Either] funny dark energy or funny dark matter.”

Li agrees, adding that he suspects our imperfect understanding of dark energy may be at the root of the Hubble tension: “With dark matter by itself, we know it’s there, and there are models we can make to predict the behavior of galaxies—rotations and stuff like that. But with dark energy, there are so many possibilities out there that there’s not really one exact alternative that fits exactly… There’s so much we don’t know about dark energy, and so much that we’re still discovering and learning.”

 
