Everything in the Universe has gravity – and feels it too. Yet this most common of all fundamental forces is also the one that presents the biggest challenges to physicists.
Albert Einstein’s theory of general relativity has been remarkably successful in describing the gravity of stars and planets.
However, gaps in our understanding start to appear when we try to apply it to extremely small distances, where the laws of quantum mechanics operate, or when we try to describe the entire Universe.
Our new study, published in Nature Astronomy, has now tested Einstein’s theory on the largest of scales.
We believe our approach may one day help resolve some of the biggest mysteries in cosmology, and the results hint that the theory of general relativity may need to be tweaked on this scale.
Quantum theory predicts that empty space, the vacuum, is packed with energy. We do not notice its presence because our devices can only measure changes in energy rather than its total amount.
However, according to Einstein, the vacuum energy has a repulsive gravity – it pushes the empty space apart. Interestingly, in 1998, it was discovered that the expansion of the Universe is in fact accelerating (a discovery that earned the 2011 Nobel Prize in Physics).
However, the amount of vacuum energy, or dark energy as it has been called, necessary to explain the acceleration is many orders of magnitude smaller than what quantum theory predicts.
Hence the big question, dubbed “the old cosmological constant problem”, is whether the vacuum energy actually gravitates – exerting a gravitational force and changing the expansion of the universe.
If yes, then why is its gravity so much weaker than predicted? If the vacuum does not gravitate at all, what is causing the cosmic acceleration?
We don’t know what dark energy is, but we need to assume it exists in order to explain the Universe’s expansion.
Similarly, we also need to assume there is a type of invisible matter presence, dubbed dark matter, to explain how galaxies and clusters evolved to be the way we observe them today.
These assumptions are baked into scientists’ standard cosmological theory, called the lambda cold dark matter (LCDM) model – suggesting there is 70 percent dark energy, 25 percent dark matter, and 5 percent ordinary matter in the cosmos. And this model has been remarkably successful in fitting all the data collected by cosmologists over the past 20 years.
But the fact that most of the Universe is made up of dark forces and substances, with values that are hard to explain, has prompted many physicists to wonder whether Einstein’s theory of gravity needs modification to describe the entire Universe.
A new twist appeared a few years ago when it became apparent that different ways of measuring the rate of cosmic expansion, dubbed the Hubble constant, give different answers – a problem known as the Hubble tension.
The disagreement, or tension, is between two values of the Hubble constant.
One is the value predicted by the LCDM model, which is fitted to the light left over from the Big Bang (the cosmic microwave background).
The other is the expansion rate measured by observing exploding stars known as supernovas in distant galaxies.
Many theoretical ideas have been proposed for ways of modifying LCDM to explain the Hubble tension. Among them are alternative gravity theories.
Digging for answers
We can design tests to check if the universe obeys the rules of Einstein’s theory.
General relativity describes gravity as the curving or warping of space and time, bending the pathways along which light and matter travel. Importantly, it predicts that the trajectories of light rays and matter should be bent by gravity in the same way.
Together with a team of cosmologists, we put the basic laws of general relativity to test. We also explored whether modifying Einstein’s theory could help resolve some of the open problems of cosmology, such as the Hubble tension.
To find out whether general relativity is correct on large scales, we set out, for the first time, to simultaneously investigate three aspects of it. These were the expansion of the Universe, the effects of gravity on light, and the effects of gravity on matter.
Using a statistical method known as Bayesian inference, we reconstructed the gravity of the Universe through cosmic history in a computer model based on these three parameters.
We could estimate the parameters using cosmic microwave background data from the Planck satellite, supernova catalogs, and observations of the shapes and distribution of distant galaxies from the SDSS and DES surveys.
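To give a flavour of how Bayesian inference turns observations into parameter estimates, here is a minimal toy sketch. It is not the pipeline used in the study: instead of reconstructing gravity from CMB, supernova, and galaxy data, it infers a single made-up parameter (a Hubble-constant-like expansion rate) from three mock supernova-style distance–velocity data points, using a flat prior on a grid. All numbers are illustrative assumptions.

```python
import math

# Toy Bayesian inference sketch (illustrative only, not the authors' method):
# infer an expansion rate H0 from mock (distance, velocity) data obeying
# v = H0 * d plus noise. Mock data generated assuming H0 near 70 km/s/Mpc.
data = [(10.0, 695.0), (20.0, 1410.0), (40.0, 2805.0)]  # (Mpc, km/s)
sigma = 50.0  # assumed measurement uncertainty on velocities, km/s

def log_likelihood(H0):
    # Gaussian likelihood comparing the model v = H0 * d to the mock data
    return sum(-0.5 * ((v - H0 * d) / sigma) ** 2 for d, v in data)

# Flat prior over a grid of candidate H0 values from 60 to 80
grid = [60.0 + 0.1 * i for i in range(201)]
logpost = [log_likelihood(H0) for H0 in grid]

# Normalise to a posterior distribution (subtract max for numerical stability)
m = max(logpost)
weights = [math.exp(lp - m) for lp in logpost]
total = sum(weights)
posterior = [w / total for w in weights]

# Posterior mean as the point estimate of H0
H0_est = sum(h * p for h, p in zip(grid, posterior))
```

The real analysis does the same thing in spirit, but over functions of time rather than one number, and with far richer data sets; the posterior then shows how much the data allow gravity to deviate from Einstein’s prediction.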
We then compared our reconstruction to the prediction of the LCDM model (essentially Einstein’s model).
We found interesting hints of a possible mismatch with Einstein’s prediction, albeit with rather low statistical significance.
This means that, despite the low significance, there remains a possibility that gravity works differently on large scales, and that the theory of general relativity may need to be tweaked.
Our study also found that it is very difficult to solve the Hubble tension problem by only changing the theory of gravity.
The full solution would probably require a new ingredient in the cosmological model, present before the time when protons and electrons first combined to form hydrogen just after the Big Bang, such as a special form of dark matter, an early type of dark energy, or primordial magnetic fields.
Or, perhaps, there’s a yet unknown systematic error in the data.
That said, our study has demonstrated that it is possible to test the validity of general relativity over cosmological distances using observational data. While we haven’t yet solved the Hubble problem, we will have a lot more data from new probes in a few years.
This means that we will be able to use these statistical methods to continue tweaking general relativity, exploring the limits of modifications, to pave the way to resolving some of the open challenges in cosmology.