sábado, 26 de novembro de 2011

15. The Geometry of Space - Sean Carroll - Dark Matter, Dark Energy: The Dark Side of the Universe



As promised in the last lecture on energy density (rho), we will now discuss the curvature (K) from the Friedmann equation. Just as we mapped the gravitational field in a cluster of galaxies by use of lensing, can we map the curvature of space in the universe by measuring the angles of some very large triangle?

Since the universe certainly appears to be flat, any curvature must be small. In fact, the Friedmann equation predicts that the curvature term K decreases as the inverse of the scale factor squared. Compare this with the matter density, which also decreases, but more quickly, at the third power. The radiation that dominated the early universe decreases quicker still, at the fourth power. So the curvature actually grows over time relative to the quickly fading matter and radiation. After 14 billion years of growth, we should be able to see the curvature if it exists. Otherwise the universe must be flat, or the curvature is minute beyond belief. Enter the flatness problem!

Some standard sized object used as a ruler could allow measurement of curvature, just like we already found a standard candle in supernovae for measuring distance. But in addition to its standard size, we need to also know the distance of the standard object to complete the triangle. Such a triangle would tell us if the value of K is negative, zero, or positive. The 2D representations of these three 3D curvatures of space are a saddle, a plane, or a sphere. Their angles would add up to be less than 180 degrees, equal, or greater than 180. Objects themselves would also appear to be different sizes due to the light paths being pulled apart, not pulled at all, or pulled together. So they would appear smaller, the same, or larger, depending on K.

The greatest temperature variations in the CMB were caused by blobs of matter with just enough size to collapse during the 380,000 years between the Big Bang and recombination. So we have both distance and size! These blobs would appear one degree across in a flat universe, and indeed in 2000, Boomerang observed such structures. This confirms that something is supplying the energy needed to make the universe flat. Dark energy does that for us, giving a second line of evidence for its existence beyond the supernovae.

Dark energy can be thought of as vacuum energy, which in principle could be negative, zero, or positive. It's smooth throughout space and persistent over time, inherent in spacetime itself, and it contributes to the overall energy density. This cosmological constant was deleted by Einstein, but is now brought back as dark energy.

Yet the vacuum energy we observe is vastly smaller than what theory expects. This new issue is dubbed the Cosmological Constant Problem.

Another is the Coincidence Scandal, an analog of the flatness problem, in which K and ρ just happen to be comparable at this instant of time. Why do the values of concordance cosmology, 70% dark energy versus 30% dark matter plus ordinary matter, also just happen to be so alike at this point in the universe's history?

It's also curious how the universe started out being radiation dominated, then matter dominated, and will end as dark energy dominated.

For now, the answers are only speculations; they will occupy the rest of the course, along with alternative theories.

We're sort of in a good news, bad news, situation right now, with where we are in the concordance cosmology. The good news is we have a model that fits all the data. We have 5% of the universe as ordinary matter, 25% dark matter, and 70% dark energy.

Since 1998 when we first found evidence that the universe is accelerating, this idea has been tested in many different ways. It keeps coming out and passing the tests. It seems as if the universe is accelerating, just as the first supernovae observations indicated.

The bad news is we don't understand it. The dark energy part in particular is very exotic and outside our ordinary experience. In fact, the dark matter part is also very exotic. Ten years ago, we'd be giving a set of lectures just on dark matter, and that would also seem very exciting, exotic, and interesting. These days the dark matter seems almost prosaic compared to the dark energy, which is something truly different.

So clearly we want to do the best we can to test this idea: to figure out whether this hypothesis, that 70% of the universe is some smoothly distributed, persistent form of energy density, is in fact correct. We want to use that hypothesis to make predictions, and then go out there and test those predictions. The most obvious test we can think of is the one we mentioned in the last lecture, the geometry of space.

So here is, once again, the Friedmann equation, derived from Einstein's General Theory of Relativity, that governs the relationship between energy density and the curvature of space in an expanding universe:

(8πG/3) ρ = H² + K

What we're saying here is that the energy density of the universe, rho (ρ), balances the sum of the contribution from the expansion (H²) and from the curvature of space itself (K). It turns out that the right amount of dark energy you need to make the universe accelerate is also the right amount of dark energy you need to make the universe spatially flat. In other words, it satisfies the Friedmann equation with the spatial curvature term (K) equal to zero.
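As a rough numerical aside (my own check, not a figure from the lecture): setting K = 0 in the Friedmann equation defines the critical density, ρ_c = 3H²/8πG. Plugging in a Hubble constant of about 70 km/s/Mpc, the flat-universe density comes out to only a few hydrogen atoms per cubic meter:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
H0_kms_per_Mpc = 70.0  # assumed Hubble constant, km/s/Mpc
Mpc_in_m = 3.086e22    # meters per megaparsec

H0 = H0_kms_per_Mpc * 1000.0 / Mpc_in_m  # Hubble constant in 1/s

# Setting K = 0 in (8*pi*G/3) * rho = H^2 gives the critical density:
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.2e} kg/m^3")

# Equivalently, a handful of hydrogen atoms per cubic meter:
m_H = 1.67e-27  # mass of a hydrogen atom, kg
print(f"~ {rho_c / m_H:.1f} hydrogen atoms per m^3")
```

The answer, around 9 × 10⁻²⁷ kg/m³, is the total density a flat universe must have; observed matter falls well short of it, which is the 30% shortfall the lecture keeps returning to.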

That is a testable prediction, since you can try to imagine measuring the curvature of space itself. So in this lecture we'll actually go out and measure the curvature of space on very large scales. In other words, what we're doing is the same kind of thing we were doing when we used gravitational lensing to measure the weight of a galaxy or a cluster of galaxies.

What we were doing then was letting light rays go by a cluster or some other massive object, and we were mapping out the local geometry of space, near that gravitating object. Since Einstein tells us that space and time are warped and bent by the presence of matter, mapping out the geometry of space and time is a way to tell how much stuff there is.

So we can do that very straightforwardly with individual clusters of galaxies, yet now we're going to do that with the whole universe. We're going to look at how light propagates through space, to measure the geometry of all of space, all at once. That's how we can be sure we're not missing anything. When we're looking at individual clusters, it's always possible in principle that there's something outside. When you look at the whole universe, you're guaranteed to find everything that there is.

So here are the possibilities for what the curvature of space could be. Remember what this means. Since the universe is uniform everywhere, there is a certain single number which tells us how much space is curved. That number could be positive, it could be zero, or it could be negative. That number is the curvature of space.

It's related to Ω, the density parameter, which is the total density of the universe divided by the critical density: when Ω=1, the universe is spatially flat; when Ω>1, it is positively curved like a sphere; and when Ω<1, space is negatively curved like a saddle. You can think of space itself as bending in different directions.

Now the curvature of space is something, like many of the things in cosmology, that is hard to visualize in reality. Whenever you're drawing pictures, representations of the curvature of space, the best you can do is draw a two-dimensional version. The actual space in which we live is three dimensional. As far as we know, this three-dimensional space in which we live is not embedded in any bigger space, so it is all there is. So when we're talking about the geometry of space, we're talking about the intrinsic geometry of space: not how it looks to an outside observer, but things you can do when you are inside space to measure its geometry.

When we say the geometry is flat, we mean that the kind of geometry that Euclid invented thousands of years ago, is the right kind to describe space on very large scales. Euclid looked at tabletops, as a paradigm for where to do geometry. You would draw right angles and triangles, and come up with different laws on how geometric shapes would fit together.

We want the three-dimensional version of that, so we want to draw triangles, straight lines, and circles within a big three-dimensional space, looking at their intrinsic properties. There are different intrinsic properties that tell you how space might be curved. The most obvious one is the angles inside a triangle. If you draw a triangle on a tabletop, or a triangle in a three-dimensional space that is flat, no matter how you draw that triangle, no matter how it's oriented or how big it is, when you add up the three angles inside, you will always get 180 degrees. The way to think about the statement, "space is flat," is to translate it into the statement, "every single triangle I can draw anywhere in the universe has angles inside that add up to 180 degrees."

If space is positively curved, a similar statement would hold true about triangles, except that the angles always add up to more than 180 degrees. We can visualize that by imagining drawing a triangle on the surface of a two-dimensional sphere: the angles inside add up to greater than 180. So we have the three-dimensional version of that. We're imagining that the space in which we live is a three-dimensional version of a sphere. Just as on a regular sphere, if you start at one point and walk around, you will eventually come back to where you started, so in a three-dimensional sphere, in a universe with positive spatial curvature, if you walk off in one direction, you will eventually come back to where you started. It would take you billions of years, yet you would eventually get there.
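A quick illustrative computation of my own (not from the lecture): on a sphere, the angle sum of a geodesic triangle exceeds 180 degrees by the triangle's area divided by the radius squared. A triangle covering one octant of the sphere, with one vertex at the pole and two on the equator, has three right angles:

```python
import math

def sphere_triangle_angle_sum_deg(area, radius):
    """Angle sum (in degrees) of a geodesic triangle on a sphere.

    Spherical excess: angle sum = pi + area / radius**2 (in radians).
    """
    return math.degrees(math.pi + area / radius**2)

R = 1.0
octant_area = 4 * math.pi * R**2 / 8  # one eighth of the sphere's surface

# A triangle with a vertex at the pole and two on the equator has
# three right angles: 90 + 90 + 90 = 270 degrees, not 180.
print(round(sphere_triangle_angle_sum_deg(octant_area, R), 6))
```

The bigger the triangle relative to the sphere, the larger the excess; tiny triangles are nearly Euclidean, which is why curvature only shows up on large scales.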

Negative curvature then means that, once again, you draw a big triangle, and in a negatively curved space every triangle you can draw has angles inside that add up to less than 180 degrees. That's one kind of thing you can do, yet it's not the only one. A famous postulate of ordinary Euclidean geometry is the parallel postulate. It says that if you start with two straight parallel lines and let them go, they will always remain exactly parallel. That is to say, if the distance between the two lines starts out constant, it stays constant no matter how far along them you go.

That is a postulate of Euclidean geometry, yet not a necessarily true fact about the world. If we lived in a positively curved space, two lines that were initially parallel would eventually come together. That makes perfect sense. Thinking about a sphere, you can draw two lines as parallel as you want, yet if you trace them down, they will eventually hit each other somewhere. In a negatively curved space meanwhile, you start with two parallel lines, you follow them down, and guess what? They peel off, getting farther and farther apart.

That is a measurement you can imagine doing empirically in space, and it would reveal whether space is positively curved, negatively curved, or flat. So we've briefly touched upon something called the flatness problem in cosmology. The flatness problem has a sort of informal version and a formal version. The informal version is that the universe seems to be close to spatially flat. When you calculate the actual density of matter in the universe, putting aside dark energy for a moment, you find only about 30% of the critical density necessary to make the universe flat. Now 30%, as these things go, is awfully close to 100%. It makes you think that we're almost there, that we're just missing something, and that the universe would be exactly flat if we knew what everything was. That's the informal version of the flatness problem.

Yet there is a more formal version of the flatness problem, a statement that is a little more quantitative that drives home, exactly how surprising it is, that the universe is close to being spatially flat, yet without exactly being there. Basically it is a statement that the universe doesn't want to be flat! If the universe is a little bit non-flat, if there's a little bit of curvature to space, the amount of curvature becomes more and more important as the universe grows. So if you live in an old universe, it should be very curved, if there were any curvature at all.

You can see this roughly from the Friedmann equation once again. It relates the energy density of the universe, rho (ρ), to the sum of the contribution from the expansion (H²) and the curvature of space itself (K). So you have three terms in that equation. Yet remember that we have a very well-defined set of rules about how ρ changes as the universe expands.

For ordinary matter, for dark matter, or for any particles that are moving slowly compared to the speed of light, as the universe grows the volume goes up, so ρ goes down exactly as the number density goes down: the particles just become more dilute.

For radiation, ρ goes down even more quickly. Not only does the number density go down as space becomes bigger, but every single particle of radiation is also losing energy as its wavelength gets stretched by the expansion. So we have very well-defined rules about how ρ changes as the universe expands.

There's also a very well-defined rule about how K changes as the universe expands. You can imagine blowing up a balloon, which looks very curved when small, yet as it gets larger, every little cubic inch on the balloon looks flatter and flatter. In the same way, curvature dilutes away as the universe expands. However, when you plug in the numbers, K dilutes away more slowly than ρ in matter or radiation. To be technical:

K goes like 1/(scale factor)²
ρ of matter goes like 1/(scale factor)³
ρ of radiation goes like 1/(scale factor)⁴

So as the universe gets larger, as the scale factor grows, the ρ in matter and radiation falls off more rapidly than the contribution from the spatial curvature. So imagine that there is some non-zero spatial curvature in the very early universe, that the term K in the Friedmann equation is not exactly zero at early times. Even if it's fairly close to zero, the relative importance of that term, compared to the importance of the ρ in matter or radiation, grows. We live today in a very old universe of 14 billion years. It's had that long for the spatial curvature to overtake and overwhelm the ρ of matter and radiation.

Yet it hasn't! That is the technical statement of the flatness problem. In order to get a 30% of the critical density universe today, in just matter and radiation, the difference between being absolutely flat and being curved in the early universe, had to be incredibly infinitesimal and finely tuned to just the right, tiny, tiny, tiny amount, so that today it would be comparable to the value of ρ in matter and radiation. This seems like an unlikely situation, and that is the flatness problem.
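The scaling rules above can be sketched in a few lines. Here is a toy calculation (the initial values and growth factor are invented for illustration): start curvature, matter, and radiation as comparable contributions at some early time, let the scale factor grow by 10¹⁰, and compare them again:

```python
# Toy illustration of the flatness problem (invented initial values).
# Each term's contribution to the Friedmann equation scales as a power
# of the scale factor a:
#   curvature  ~ 1/a^2
#   matter     ~ 1/a^3
#   radiation  ~ 1/a^4
a_early, a_today = 1.0, 1e10  # arbitrary units; growth factor of 10^10

K0, matter0, radiation0 = 1.0, 1.0, 1.0  # comparable at early times

growth = a_today / a_early
K = K0 / growth**2
matter = matter0 / growth**3
radiation = radiation0 / growth**4

# Curvature has grown enormously relative to matter and radiation:
print(f"curvature/matter    = {K / matter:.1e}")
print(f"curvature/radiation = {K / radiation:.1e}")
```

The ratios come out to 10¹⁰ and 10²⁰ respectively: to keep curvature negligible today, it had to start absurdly small, which is exactly the fine-tuning the flatness problem complains about.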

So 30% of the critical density, Ω=0.3, the amount that we've found in matter, is a weird number to have. That's why, before we found the dark energy, theoretical physicists who were persuaded by this argument thought that Ω matter, the contribution to ρ from ordinary matter plus dark matter, had to be 1.0. They believed that the universe had to have the critical density, Ω=1, since it was so close that it would make no sense not to go all the way. They believed in a flat universe.

Yet at the end of the day, you can believe whatever you want; it's not going to make you any money, you still have to go out there and look. So how do you actually measure the spatial geometry of the universe? Ideally you want to do what Carl Friedrich Gauss did when developing the concepts of non-Euclidean geometry: he actually made a big triangle and measured the angles inside.

So we want to make a big triangle in the universe. Of course we can't travel to distant galaxies, stringing lasers or a piece of wire from one place to another; we have to take celestial objects as they're given to us, use them to construct a big triangle, and add up the angles inside.

So one way to do this is if we had a standard ruler. A standard candle is something whose brightness we know, so the further away it is, the dimmer it looks. A standard ruler is something whose size we know, so the further away it is, the smaller it looks. A standard ruler is therefore just as good a way to measure distances as a standard candle. The reason you don't hear as much about standard rulers is that there just aren't as many of them. There aren't many objects in cosmology or astrophysics whose physical size you actually know ahead of time. Galaxies, stars, and other astrophysical objects come in all different sizes.

However, imagine that you not only had a standard ruler, but that you actually knew how far away it was: some object whose size you knew and whose distance you knew. Then you would think that you must be able to figure out exactly how big it will look. That would be true, assuming that you knew the geometry of space. The angular size the ruler takes up, if you know how big it is and its distance, is telling you the geometry of space.

If it has a certain angular size, say one degree across in a flat universe, then in a positively curved universe it will look bigger than one degree across. That's because the light rays traveling from the object to you are pulled together by the positive curvature, so the angle they subtend is larger. Similarly, if you have a negatively curved universe, that ruler which would have looked one degree across is now going to look smaller than one degree across.

However, this is of course asking a lot. You're asking for some object whose size we know, and you also want to put that object in some place and know exactly how far away it is. How lucky do we have to be to have an object of fixed size at a known distance? Well, we got lucky! The universe provides us with exactly that, in the form of the temperature fluctuations in the CMB.

We already talked about the CMB, the leftover radiation from the Big Bang. It's a snapshot of what the universe looked like about 400,000 years after the Big Bang. At times earlier than that, the universe was so hot, that individual atoms were ionized and the universe was opaque, so light could not travel very far, before bumping into an electron. After 400,000 years, the universe had cooled down enough that electrons and nuclei had gotten together, the universe became transparent, and light traveled unimpeded through the universe. So we see what the universe was looking like about 400,000 years after the Big Bang.

The universe at that early time was much smoother than it is today, where it is smooth on large scales, yet on small scales we see individual planets, stars, galaxies, and clusters. At very early times, the universe was smooth on essentially all scales. It's the tiny deviations from perfect smoothness of that early time, that grew under the force of gravity, into stars, galaxies, and clusters.

So if you look at the CMB, observing not only that it exists, but also delicately measuring the temperature of the CMB at different points in the sky, you're measuring the imprint of those primordial fluctuations in density. We see the classic picture of the CMB, from the WMAP satellite. It's an all-sky picture, the whole sphere as we look out onto the sky, then projected onto an ellipse in this image. What we're seeing are just the tiny fluctuations in temperature, at only one part in 100,000. The blue spots are a little bit cooler, the red a little hotter. This is telling us where the density fluctuations were located, on a sphere 400,000 years after the Big Bang.

So we can learn a lot from these density fluctuations. We don't know any theory that predicts a precise place for where they should be, and don't ever expect to even have such a theory, yet we do know the statistical properties that they should have. Already in an earlier lecture, we talked about the fact that the properties of these splotches on the CMB, the hot spots and cold spots, provide evidence for dark matter. That's because what you have in the primordial universe, is a hot plasma that is experiencing acoustic oscillations.

You have an ionized plasma, which in some regions is a little bit more dense and in others a little bit less. Under the force of gravity, a dense region is going to collapse under its own gravitational field and heat up. That increases the temperature fluctuation in that region, so it's now hotter than it used to be. The dark matter, of course, continues to fall in, yet the ordinary matter bounces back due to pressure, and the region becomes cooler than it used to be. That's also a fluctuation, yet now toward lower temperature rather than higher. It oscillates back and forth exactly like that.

Let's ask a question: at what size would you expect the largest amount of fluctuation? The point is that if a region that is going to collapse is very large, it has no time to do so in any appreciable way, so its temperature fluctuation is going to be whatever it was stuck with in the very early universe. If it's too small, the region collapses, and expands, and collapses, and bounces back and forth, until it gets damped. So eventually it settles down into a configuration that is not very different from the surrounding plasma.

However, there is one particular length at which a region has time to collapse and heat up, becoming fairly substantial in terms of its temperature difference, yet doesn't have time to bounce back. That length corresponds to a physical size, in light-years, equal to the age of the universe in years at the moment the CMB was formed.

In other words, since the CMB is a snapshot of 400,000 years after the Big Bang, regions that are 400,000 light-years across will have the greatest amount of fluctuation in their temperature. So we have a prediction: we know how large the most noticeable hot spots and cold spots in the CMB should be, namely 400,000 light-years across.

Furthermore, we know how far away the CMB is. Using the Friedmann equation, along with some theory of what the universe is made of, which we have, we can predict the distance from here today to the surface of last scattering, 400,000 years after the Big Bang. In other words, we have all the ingredients for making a big triangle! We have a standard ruler on the microwave sky: the hot and cold spots with the largest amplitude, and they are predicted to be one degree across if the universe is spatially flat.
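As a back-of-the-envelope version of that triangle (using rough, commonly quoted values of my own, not numbers from the lecture): the dominant hot/cold spots correspond to a comoving size of roughly 150 Mpc, and the last-scattering surface sits at a comoving distance of roughly 14,000 Mpc. In a flat universe the small-angle formula then predicts an angle of order one degree:

```python
import math

# Rough, commonly quoted values (assumptions for illustration only):
spot_size_comoving = 150.0    # Mpc, comoving size of the dominant hot/cold spots
distance_comoving = 14000.0   # Mpc, comoving distance to last scattering

# In a flat universe the small-angle formula applies directly:
theta_rad = spot_size_comoving / distance_comoving
theta_deg = math.degrees(theta_rad)

print(f"predicted angular size ~ {theta_deg:.2f} degrees")
```

This comes out a bit over half a degree, of order the one-degree scale quoted in the lecture; positive curvature would magnify that angle, negative curvature would shrink it, which is exactly what the CMB maps test.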

In other words, if the universe is spatially curved, positively or negatively, that's like taking the map you would predict in a flat universe, and either magnifying it, or shrinking it. So these are the data that we have from WMAP, and we can compare them to the prediction. What is the answer? The answer is that the universe is spatially flat!

This was in fact not first found by WMAP; other experiments found it earlier. The Boomerang experiment in Antarctica was actually the first to make a precision measurement of the size of the dominant fluctuations in the CMB. Everyone knew ahead of time what they were looking for: if the fluctuations were one degree across, that would be evidence that the universe was spatially flat. This was in the year 2000, soon after dark energy had been discovered, when people were still very excited, though skeptical. They wanted to know whether this picture was hanging together.

So the Boomerang experiment and other experiments following up on it, looked for fluctuations in the CMB of one degree across, and that's exactly what they found. In other words, the CMB is telling us that the universe is spatially flat, and that the total density of stuff in the universe is the critical density. It fits perfectly with the concordance cosmology of 5% ordinary matter, 70% dark energy, 25% dark matter.

So exciting was this that by 2003, when WMAP came along, Science magazine declared it the breakthrough of the year, even though what WMAP really did was say, "We didn't make a mistake in 1998!" Back in 1998, the acceleration of the universe was the breakthrough of the year, which suggested the concordance cosmology. Then in 2003, WMAP came along and said, "Yes, that's right! This crazy universe with 5%, 25%, 70%, really is the universe we live in." This was sufficiently surprising that, again, it was the breakthrough of the year!

One thing to take very seriously is the fact that the CMB tells us the value of ρ. So if you just take evidence from the CMB, that ρ is the critical density, so that Ω=1, and you add that to the evidence from clusters of galaxies, ordinary galaxies, gravitational lensing, that matter only adds up to 30% of the critical density, then you don't need to mention the word supernova to be convinced that there's something called dark energy, that is 70% of the universe. It's just 100% - 30% = 70%.

In other words, whether or not we have a correct theoretical understanding of the physics of type Ia supernovae, whether or not the two supernovae groups did a good job in explaining their error bars and collecting data that is reliable, you're still forced to the conclusion that dark energy exists. The constraints that we have right now, over-determine the kind of universe in which we live. You can't wriggle out of the conclusion that there's a lot of dark energy, just by being skeptical about the supernovae results.

So where we find ourselves is stuck with a universe in which 70% of ρ is dark energy. We now must face up to the same kind of problem that we had with dark matter: given that there is this stuff, what could it be? Let's foreshadow a little bit by naming the simplest possible candidate for dark energy, something called vacuum energy.

Remember that the two important properties that we know dark energy has, are that it is spatially uniform, more or less the same in every location in space, and it's persistent in time. The density ρ, the amount of energy per cubic cm, isn't changing very much for the dark energy as the universe expands. So the simplest idea for something that is more or less smooth in space, and more or less constant in time, is something that is exactly constant throughout both space and time. In other words a form of ρ that is inherent in spacetime itself. It's just the statement that every little cm³ of space, contains energy, whether there's any stuff in that cm³ or not.

So we see an image that is the closest Sean could come to an artist's representation of what this idea of vacuum energy, really looks like. You imagine taking a little cube of completely empty space, and ask yourself the question of how much energy is there in this cm³ of empty space?

According to General Relativity, there's no reason why the answer has to be zero. The energy density of empty space is a constant of nature. It could be negative, positive, or zero. That number, whatever it is, is the vacuum energy. The hypothesis is that this number takes just the right value to supply 70% of the critical density: the dark energy is, in fact, the energy density of empty space, the vacuum energy.

This is the same thing Einstein was talking about when he first invented what he called the cosmological constant. Einstein added this term to his equation, as we'll discuss later, because he was not able to obtain a static universe within his theory without it. We now know the universe is not static, so Einstein said this was a great mistake of his. Yet now we're bringing back the cosmological constant, which is exactly equivalent to this idea of vacuum energy.

So that's a very good idea, and we don't need to complicate it: the idea that the dark energy is vacuum energy inherent in empty space, plus our ideas about dark matter and ordinary matter, is enough to fit the data. Why then do we even contemplate other possibilities? Well, there's one point, which is that we were just surprised by the existence of dark energy itself. We had a preference, a prejudice ahead of time, that matter made up the critical density, and we were wrong! So our prejudices about which ideas are simple and make sense shouldn't be taken too seriously. We're keeping an open mind about other possibilities.

In fact, just like the flatness problem, there are naturalness and fine-tuning problems associated with the concept of a vacuum energy. One is called the cosmological constant problem, which is the statement that once you admit that there can be vacuum energy, you can start asking how big it should be. The answer, as we'll see, is that it should be much larger than what we observe! Once you admit that there can be any energy density in empty space, the surprise is that it's so tiny.

The other problem is called the coincidence scandal. This is an exact analog of the flatness problem. Remember, the flatness problem said: ρ in matter and radiation decays away quickly, while the effective ρ in the curvature K decays away slowly, so why would we be born right at the moment when these two things are comparable to each other? These two things evolve with respect to each other. Wouldn't it be a coincidence if we were there at just the right time to observe both of them?

This kind of argument convinced people that K must be zero, that we must live in a flat universe, which turned out to be right. Yet exactly the same set of words applies to the vacuum energy. It doesn't evolve at all: its ρ doesn't change as the universe expands, it doesn't go up or down. Yet ρ in matter or radiation does go down as the universe expands. These two numbers change with respect to each other by quite a bit.

However, the universe we're now proposing, the one that fits the data, has 30% of ρ in matter and 70% in vacuum energy. Those numbers are close to each other. In the past, it was almost all matter; in the far past, almost all radiation; in the future, it will be almost all vacuum energy. Why are we lucky enough to be born at the right time, when the vacuum energy is comparable to the matter in the universe? That's the coincidence scandal. Sean actually has no good ideas for why that might be the case!
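The coincidence can be made vivid with one line of arithmetic (toy numbers of mine, normalized to the 70/30 split today): since vacuum energy is constant while matter density falls as 1/a³, the ratio of the two grows like a³, sweeping through many orders of magnitude as the universe expands:

```python
# Toy illustration of the coincidence scandal.
# Vacuum energy density is constant; matter density falls as 1/a^3,
# so the ratio rho_vac / rho_matter grows like a^3.
# Normalize so the ratio is 70/30 today (scale factor a = 1).
ratio_today = 0.70 / 0.30

for a in [0.1, 0.5, 1.0, 2.0, 10.0]:
    ratio = ratio_today * a**3
    print(f"a = {a:5.1f}   rho_vac/rho_matter = {ratio:12.4f}")

# Only near a = 1 (today) are the two densities within a factor
# of a few of each other.
```

At a tenth the current size the ratio is thousands of times smaller; at ten times the current size it is thousands of times larger. That the two are comparable right now, when we happen to be looking, is the scandal.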

So we have a theory that fits the data, and we keep getting more data that the theory keeps fitting. Yet we recognize that the theory we have, has holes in it, in the sense that we don't understand why certain parameters take on the values that they do. That will encourage us to keep looking at more theories with different possibilities. So in the rest of these lectures we'll take some of these very seriously.

14. The Accelerating Universe - Sean Carroll - Dark Matter, Dark Energy: The Dark Side of the Universe



We now start the last cycle of the course by finally getting to dark energy. All the previous material can be thought of as preliminary; now we put it all together.

In the 1970s and 80s it was known that the limited amount of dark matter plus the small amount of ordinary matter did not add up to enough mass to make the universe flat, even though the universe appears flat, without curvature (K=0). The key comparison is between the observed density and the density required for a flat universe, the critical density. Their ratio is called omega (Ω), and it equals 1 for a flat universe. Yet by the 1990s, dark matter plus ordinary matter added up to only 30% of the critical density, so omega equaled only 0.3.

Many cosmologists thought it was just a matter of time before we found all the remaining 70%, probably dark matter, perhaps somewhere in between galaxies. But a growing number began to realize that this was not going to happen, so they began searching for other answers.

In the Friedmann equation, (8πG/3)ρ = H² + K, we roughly knew the Hubble constant (H), but the two other parameters remained unknown. They could try to measure K, which is what we'll think about in the next lecture. Or they could try to measure the energy density of the universe, rho (ρ). This is related to H, in that they compared its value for nearby galaxies with values for the furthest galaxies. The difference would tell them the rate at which the expanding universe was slowing down due to its mass, and so weigh the universe.

Type Ia supernovae were standard candles in theory, due to the Chandrasekhar limit of white dwarfs. These are observable in the furthest galaxies, while cepheid variable stars are observable only in nearby galaxies. The "period / luminosity" relation for cepheid variables is analogous to the "light curve / luminosity" relation for type Ia supernovae. Discovered by Mark Phillips in the early 1990s, the rate of decline from peak brightness allowed the luminosity to be determined, and thus the distance.

The surprise was that the universe was found not to be slowing down, but speeding up. After the initial shock, it began to actually make sense! This was the missing 70% needed to make omega = 1. It resolved the age problem, the structure problem, the CMB measurements, the density shortfall, and the deceleration question all at once!

I was fortunate to schedule many of these initial supernovae observations using the Hubble Space Telescope. It was tense work because the telescope pointing had to be tweaked at the last possible moment to coordinate with ground based observations. Members from both teams, sometimes even Saul Perlmutter himself, would talk with me about plans for sending coordinates from the observatory, which I would then convert and relay to the Hubble. One wrong digit and I would be in big trouble! The only mistake was on their end once, sending me a wrong coordinate, thus missing observations of the supernovae altogether. When they announced the initial results of the accelerating universe, it was so unexpected that I immediately thought this would someday be awarded a Nobel prize, and that appears to still be a safe bet.

Dark energy does not clump under gravity, so matter was ruled out. It persists, smoothly distributed, as the universe expands, so radiation was ruled out. New solutions create new questions.

The question at the end of the course guidebook for this lecture asks why the Hubble constant (H) can be constant in an expanding universe. This is explained fully in lecture 16, so don't panic. They made a mistake by including it a few lectures too early.

We've reached that happy point in this series of lectures, where we start talking about dark energy as well as dark matter. Dark energy is something different from dark matter. It is discovered in different ways, it plays a different role in the cosmic story, and it comes from something different.

So the fact that we need both dark energy and dark matter to explain the observations that we see in cosmology, is evidence that the dark sector of the universe, 95% of what the universe is made of, is interesting somehow! It's not just all the same stuff. It might be that someday we can subdivide the dark sector into more than two bits. Yet right now, we think that dark matter plus dark energy, is enough to explain everything we've been able to see in the universe so far.

However, the way we get to the discovery of dark energy, goes through thinking about dark matter. The thinking about dark matter goes all the way back to the 1930s, when Fritz Zwicky was looking at the dynamics in clusters of galaxies. He noticed that in the Coma cluster of galaxies, the motions of the individual galaxies were too fast to be explained just on the basis of the ordinary matter you saw there. By the 1970s, Vera Rubin looked at individual galaxies and realized that they also were spinning too fast to be associated with nothing but the visible matter.

So through the 1970s and 80s, people became absolutely convinced that there was something called dark matter, and that the dark matter couldn't just be ordinary matter that was hidden from us somehow. Yet the question remained: how much dark matter is there? Every time you looked at a larger system, you found more and more dark matter. In the 1980s, therefore, a lot of people were convinced that we'd continue to find more and more dark matter.

For example, you can look at individual galaxies and clusters, yet how can you be sure that there isn't more stuff that is in between the galaxies and clusters? How can you be absolutely sure of that? Furthermore there was another reason to be skeptical. That comes from the Friedmann equation and the notion of the critical density of the universe.

So we look again at the Friedmann equation, which relates the stuff in the universe to the curvature of spacetime, in the case of an expanding, homogeneous universe.

((8πG)/3)(ρ) = H² + K

On the left we see rho (ρ), which stands for the energy density of the universe; we're working in the approximation where everything is perfectly smooth. The right side shows the Hubble constant (H), which tells us about the expansion rate of the universe, and the spatial curvature (K). If K is zero, space is flat and geometry is like Euclid said it was; that's a special value for the spatial curvature, which could otherwise be positive or negative. The other terms in the Friedmann equation are not free to take either sign; they want to be positive. The energy density is something that should be positive. We like positive amounts of energy, not negative amounts. Negative energy is dangerous.

The Hubble constant enters squared, so no matter what the value of H is, that term is never going to be negative. So there are no special, interesting, particular values of the energy density (ρ) or the expansion rate (H), yet there is a special, interesting, middle value for the spatial curvature (K): zero.

However, when you plug in the numbers, the energy density you observe in the universe, in the matter of clusters and galaxies, doesn't seem to be equal to the special amount of density you would need to make the universe spatially flat. We can define the critical density as the density you would need to satisfy the Friedmann equation when K equals zero, when there is no spatial curvature.

We can define that density, whether or not that's the density we actually have. In fact, cosmologists often define a number called omega (Ω), which is taking the actual density of the universe, and dividing by the critical density. So if the density is equal to the critical density, we say Ω=1. In fact, with the stuff we've found in the universe, the ordinary matter and the dark matter, only about 30% of the critical density is there in the matter of galaxies and clusters. So Ω seems to be 0.3.
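The critical density follows directly from the Friedmann equation with K = 0: setting (8πG/3)ρ = H² gives ρ = 3H²/(8πG). A minimal sketch in Python, assuming a representative Hubble constant of about 70 km/s/Mpc:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22  # one megaparsec in meters

# Hubble constant: assume ~70 km/s/Mpc, converted to SI units (1/s)
H0 = 70e3 / MPC_IN_M

# Critical density from the Friedmann equation with K = 0
rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.2e} kg/m^3")  # a few times 10^-27

# Omega is the actual density divided by the critical density;
# matter (ordinary + dark) supplies only about 30% of it
rho_matter = 0.3 * rho_crit
omega_matter = rho_matter / rho_crit
print(f"Omega_matter = {omega_matter:.1f}")
```

The answer, roughly 10^-26 kg per cubic meter, is only a few hydrogen atoms per cubic meter, which shows how empty the universe is on average.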

That's a very strange number to have. A very nice number to have would be 1.0, since the universe would be spatially flat. A number like 10 to the ten billionth power, or for that matter, one ten billionth, would be numbers which wouldn't surprise us. They would just be some numbers, and we can't really explain them.

Yet 0.3 makes it seem like we're missing something. It's telling you that we're 30% of the way to being the critical density, and you remember that every time you look, you find more stuff. So throughout the 1980s, many cosmologists were convinced that we'd continue to find more stuff, and eventually find enough matter in the universe, to show that the energy was equal to the critical density. The value of 0.3 is close to 1.0, yet not equal to it. A lot of people just said, well we haven't found everything yet.

However, in the 1990s, that point of view became harder and harder to stick with. Technology became better, and our ability to measure the energy in the universe in the form of matter became more and more convincing. Especially with things like gravitational lensing and x-ray maps of clusters of galaxies, we became convinced that the matter we'd found in the universe simply wasn't adding up to 1.0.

The idea that a cluster of galaxies is a fair sample, is exactly the idea that the amount of dark matter in that cluster, compared to the amount of ordinary matter, is the same in that cluster as it is for the universe as a whole. If that's true, the amount of stuff we're finding in clusters of galaxies, implies that Ω is only 0.3, it's only 30% of the critical density, and does not quite equal 100%.

So if you were a respectable theoretical cosmologist in the 1990s, you would have begun to admit that this was true. Yet Sean can say that there were very few respectable theoretical cosmologists! The observers, who actually took the data, were becoming convinced that something was going on, yet the theorists were still clinging to the hope that somehow Ω was equal to 1.0.

Sean can remember personally giving a talk at the end of 1997, when he was asked to give a review talk on the cosmological parameters, the Hubble constant (H), Omega (Ω), rho (ρ), and so forth. Sean was one of these disreputable theoretical cosmologists that was personally convinced that Ω matter must be one, and we just hadn't found it yet. So when he sat down to look at all the papers which had recently been written, all the data that was collected, the talk he ended up giving said, "Well you know, something is going on." Maybe:

We do not live in a universe where cold dark matter makes Ω = 1.0. Something weird is going on, so either Ω is not 1, and we do not quite have the critical density;

Or perhaps it's not all cold dark matter. Maybe there's a mixture of cold and hot dark matter?
Or perhaps there's something weird in the early universe that made galaxies form in a strange way?
Or maybe there's more stuff than just matter in the universe?

Maybe there's this stuff that these days we would call dark energy. So in late 1997, we were getting desperate. We had a whole bunch of things on the table for what could possibly explain the data, yet didn't know which one of them was right. So what do you want to do to resolve this? You want to weigh the universe, and find out how much energy density there is in space, yet you want to weigh the whole universe, not just a bit of it here and there. You could always be missing something in between. So how can you weigh the entire universe all at once?

It turns out there are two techniques to use. One is to actually directly measure the spatial curvature (K). If you did this, you could find out whether the density you have is only 30% of the critical density, or whether it's 100%. We'll talk about that in the next lecture.

The other way is to measure the deceleration of the universe. You measure how the expansion rate of space changes as a function of time. This is something you'd expect to happen in ordinary cosmology. It's true that the universe is expanding and things are moving apart, yet while they do that, the different particles in the galaxies, the ordinary matter and the dark matter, are pulling on all the other particles. Stuff is exerting a gravitational force. So you expect that expansion rate to gradually slow down.

If there's enough stuff, the universe will in fact re-collapse. That would be Ω>1, more than the critical density. So if you measured precisely the rate at which the expansion of the universe was changing with time, that would tell you the total amount of stuff in the universe. The challenge is just to actually do that. It's very difficult. How do you measure the rate at which the expansion of the universe is changing? How do you measure the deceleration of the universe?

Well you do what Hubble did, yet you just do it better! Hubble found the expansion of the universe, by comparing the velocity of distant galaxies to their distances. So Hubble's Law, that tells us velocity is proportional to the distance, is always going to be valid in a small region of the universe, cosmologically speaking.

Yet when you get out to a very far away galaxy, you're looking at light that was emitted in the distant past. You're actually probing what the universe was doing at an earlier time, since light moves at only one light year per year. Therefore if you measure the distances and redshifts, the apparent velocities, of galaxies that are very far away, you can see whether or not the expansion rate has changed. You can measure the acceleration or deceleration.
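This idea can be made quantitative. In a flat universe the luminosity distance is d_L = (1+z)(c/H₀)∫₀ᶻ dz′/E(z′), with E(z) = √(Ω_m(1+z)³ + Ω_Λ). A sketch comparing a matter-only universe to the concordance mix, using simple trapezoid integration (distances in units of c/H₀):

```python
import math

def lum_distance(z, omega_m, omega_l, steps=1000):
    """Luminosity distance in units of c/H0 for a flat universe,
    integrating dz'/E(z') with the trapezoid rule."""
    E = lambda zp: math.sqrt(omega_m * (1 + zp)**3 + omega_l)
    dz = z / steps
    integral = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz
                   for i in range(steps))
    return (1 + z) * integral

z = 0.5
d_matter = lum_distance(z, 1.0, 0.0)   # decelerating, matter-only
d_lcdm   = lum_distance(z, 0.3, 0.7)   # accelerating, concordance

# In the accelerating universe the same supernova sits farther away,
# hence appears dimmer; the difference in magnitudes is:
delta_mag = 5 * math.log10(d_lcdm / d_matter)
print(f"{d_matter:.3f}  {d_lcdm:.3f}  dimmer by {delta_mag:.2f} mag")
```

At redshift 0.5 the accelerating universe puts the supernova roughly 20% farther away, a few tenths of a magnitude dimmer, which is the size of effect the surveys had to detect.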

So you want to do what Hubble did and use standard candles. If you have some object whose brightness is fixed, so you know how bright it really is, then by seeing how bright it appears to you, you can calculate how far away it is. That's the basic idea of a standard candle. Hubble's standard candles were cepheid variable stars, pulsating stars for which you could figure out, from the period of pulsation, how intrinsically bright the star really was.
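The standard-candle arithmetic is just the inverse-square law, usually written as the distance modulus m − M = 5 log₁₀(d / 10 pc). A minimal sketch; the magnitudes below are illustrative numbers, not real measurements:

```python
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical example: a type Ia supernova with a peak absolute
# magnitude of about -19.3, observed at apparent magnitude 24.2
d = distance_pc(24.2, -19.3)
print(f"{d / 1e9:.1f} Gpc")  # ~5 Gpc
```

Knowing how bright the candle really is (M) and how bright it looks (m) pins down the distance with no other information.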

The problem is that cepheid variables are not that bright. They're just the brightness of ordinary stars. You can't pick out individual cepheid variables in very distant galaxies. Instead, you need a much brighter standard candle. So eventually what you appeal to are supernovae, exploding stars that are incredibly bright. We see an image of one of the most beautiful supernovae you'll ever see, SN 1994D: there's a galaxy with the supernova in the bottom left. That is a star in that galaxy, not a nearby star in our own galaxy. The brightness of that supernova is comparable to that of the entire galaxy it is inside, or just in front of. That's billions of times the brightness of an ordinary star.

That's the good news; the bad news is that they're rare. You don't see supernovae all the time, and you can't predict them. In a galaxy the size of the Milky Way, you're only going to get about one supernova per century. The other problem is that supernovae are not standard candles all by themselves. They are not all the same brightness. There are different kinds of supernovae.

When we discussed MACHOs and how neutron stars are created, we met the type of supernova called a core-collapse supernova (type II). You have a bright, heavy star burning nuclear fuel. That fuel eventually runs out, the core of the star collapses, and the outer layers are blown off. Clearly, for different masses of stars, the brightness when they collapse is going to be different. So type II supernovae are not standard candles.

Yet from the name of type II supernovae, you might guess there's something called a type I supernova. In fact, there are various kinds of type I supernovae, and the particular kind called a type Ia supernova can be used as a standard candle. A type Ia supernova is a very different object than a type II supernova. A type Ia comes from a white dwarf star, which is what you get when a medium mass star uses up its nuclear fuel and settles down to be a white dwarf.

Yet imagine that you're lucky, and you have a white dwarf star that has a companion. There's another star next to it, which in the course of its evolution grows. The white dwarf begins to accrete some of the mass from its companion. We see an artist's conception of a white dwarf pulling mass from a nearby star, so that the mass of the white dwarf gradually grows and grows.

Yet there's a limit, as white dwarfs cannot be arbitrarily massive. Eventually the gravitational field becomes so strong that the white dwarf collapses. This limit is called the Chandrasekhar limit, about 1.4 solar masses, and it's the same for every white dwarf, everywhere in the universe. So you have a hint that something here is a standard candle. The event where the white dwarf collapses and the outer layers are blown off is a type Ia supernova. So it would not be surprising if every such event were more or less the same brightness.

It's true, they are plausibly standard candles. Type Ia supernovae could be approximately the same brightness. Yet there's various problems associated with the idea of using type Ia supernovae to measure the acceleration or deceleration of the universe. First, type Ia supernovae are not precisely standard candles. By looking at nearby supernovae, people noticed that type Ia's differed in brightness by about 15%. This doesn't sound like that much, but we're trying to look for a very subtle change in the expansion rate of the universe, so every 15% counts.

The real breakthrough in this field came when Mark Phillips, in the early 1990s, realized that just like cepheid variables, type Ia supernovae had a relationship between their light curve and their luminosity. The supernova doesn't pulsate, yet it does go up in brightness and then go down. What Phillips realized was that the time it takes to decline in brightness told you what the maximum brightness was. The type Ia supernovae that are the brightest are those that take the longest to decline. So if you measured not only the maximum brightness of the supernova, but also how it evolved, how the brightness declined as a function of time, then you could really pin down that overall brightness, to better than 5%. Then you have something that is a good enough standard candle to measure the deceleration of the universe.
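A sketch of how such a standardization works. The linear form and the coefficients below are made-up placeholders, not Phillips's published fit; the idea is simply that the decline in magnitudes over the first 15 days after peak (often written Δm₁₅) maps to the peak absolute magnitude, with slower decliners intrinsically brighter:

```python
def peak_absolute_mag(decline_15d, a=-19.3, b=0.8):
    """Illustrative Phillips-style relation: estimate peak absolute
    magnitude from the decline over 15 days after maximum light.
    Coefficients a and b are hypothetical, for illustration only."""
    return a + b * (decline_15d - 1.1)

slow = peak_absolute_mag(0.9)   # slow decliner: brighter (more negative)
fast = peak_absolute_mag(1.5)   # fast decliner: dimmer
print(slow, fast)
```

Once the light curve tells you the true peak brightness, the distance modulus gives the distance, as with any standard candle.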

The other problems are more like worries. One is, when you observe a very distant supernova, how do you know it was just as standard in the early universe as supernovae are today? That is something we'll have to deal with by taking a lot of data, and trying to figure out the real physics behind these objects. Yet more importantly, how do you even find them to start with? You could just pick a random galaxy and stare at it for 100 years. Then you have a 50% chance of finding one supernova. Yet no one is going to give you telescope time to do that! So you need to come up with a better technique.

The thing is that only by the 1990s did astronomical technology evolve to the point where we could find a whole bunch of supernovae all at once. People developed techniques using large CCD cameras, which allowed you to take an image of a fairly wide swath of the sky. You want an image that is deep enough to get lots of galaxies, yet wide enough that you get a whole bunch of them, so you get galaxies at different redshifts in great numbers. Then notice that the rise time of a supernova, the time it takes to go from very dim to very bright, is a couple of weeks. That turns out to be a very convenient time.

You can then take an image of some region in the sky with literally thousands of galaxies in it, and then you come back again a few weeks later to take another image. In fact you can do the first image at new moon, when the sky is not affected by the bright moonlight, and then take the next image during the next new moon, and it works out perfectly. Then you want to compare these two images to look for one of the galaxies getting a tiny bit brighter.
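The search technique boils down to image subtraction: difference two exposures of the same field and flag pixels that brightened. A toy sketch with plain Python lists standing in for aligned CCD frames (the numbers are invented):

```python
def find_brightened(before, after, threshold):
    """Return (row, col) pixels whose brightness rose by more than
    threshold between two aligned images (lists of lists)."""
    hits = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if a - b > threshold:
                hits.append((r, c))
    return hits

# Toy 3x3 "galaxy field": one pixel flares between the two exposures
before = [[10, 12, 11],
          [11, 50, 12],
          [10, 11, 10]]
after  = [[10, 13, 11],
          [11, 95, 12],   # the galaxy at (1, 1) got much brighter
          [10, 11, 11]]

print(find_brightened(before, after, threshold=20))  # [(1, 1)]
```

Real pipelines must first align the images and convolve them to a common blur before subtracting, but the logic is the same: the supernova is whatever got brighter between the two new moons.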

So the picture we see now is a little bit of a fake, since it's a better than average view of what such a supernova discovery looks like. It's from the Hubble Space Telescope, rather than the ground based telescopes where most of this work is done. Yet you get a feeling for what's going on. We see an image on the left of the Hubble Deep Field, and another image on the right of the same region of the sky, taken several years later. You can tell there's a supernova in the image on the right, since there's an arrow pointing to it! That's the nice thing about these images: there are always arrows pointing to where to look! In this case, the arrow points to a fairly dim, red dot, which you can zoom in on and find that it is indeed a supernova all by itself. This is a very distant supernova, and an especially good example, yet by using this type of technique and technology, you can find supernovae by the dozens.

So this project of finding a whole bunch of type Ia supernovae and using them to measure what the universe was doing at earlier times, was undertaken in the 1990s by two different groups. One group was centered at Lawrence Berkeley Labs, led by Saul Perlmutter. The other group was scattered around the globe, not really centered anywhere. Yet the leader of the group was an Australian astronomer named Brian Schmidt. So we see a picture of Brian Schmidt on the left, and Saul Perlmutter on the right, arguing over whose universe is accelerating or decelerating faster! (Both with their dukes up!)

It was a mostly friendly rivalry between the two groups, Perlmutter's on one side, Schmidt's on the other. Schmidt's group involved a lot of people: Adam Riess, now at STScI, was the lead author on the most important paper; Bob Kirshner at Harvard was adviser to both Brian Schmidt and Adam Riess, so he was sort of the intellectual godfather of the team! Alex Filippenko was another prominent member, most famous for giving Teaching Company lectures on modern astronomy that Sean encourages us to have a look at.

So it was important that two groups were doing it, because if only one group did it, no one would believe them! Yet if two groups do it and get the same result, then people are willing to think that something is going on, and that it is at least on the right track! Indeed, in 1998, only a few months after Sean gave his talk saying that something was going on, these two supernova groups announced what that something was: the universe was not decelerating at all, but accelerating! It was expanding faster and faster. The correct image of the universe is one in which galaxies have a velocity that is increasing as a function of time.

This came as a great surprise indeed! Most did not expect us to live in an accelerating universe. The Schmidt group, in fact, had a subtitle of "searching for the deceleration of the universe!" What they found instead was the acceleration of the universe. So you had a very strange situation where on the one hand the result was a complete surprise, yet on the other hand it made perfect sense. The reason people were willing to believe this result, besides the fact that two very good groups got the same answer, was that it made things snap together. It answered a lot of questions all at once, as we'll explain.

However, the fact that the universe is accelerating is a physical challenge. It's not what you expected. There's an intuitive argument that as the universe expands, it should be decelerating, because particles are pulling on each other. If particles are pulling on each other, how in the world do you explain an apparently observed phenomenon that the universe is accelerating rather than decelerating? The answer is that you need something besides particles. You need to invent a new kind of stuff!

If the Friedmann equation is correct, and the universe has nothing in it but matter and radiation, it doesn't matter what kind of matter and radiation you have, the universe will necessarily be decelerating. So either the Friedmann equation is not correct, according to these data, a possibility that we'll explore later on, or there is something in the universe that is neither matter nor radiation. We call that stuff dark energy.

So what does dark energy mean? We use the word dark energy and it sounds a little bit mysterious, yet we'll emphasize that even though there's a lot we don't know about dark energy, it's not just a placeholder for something going on that we just don't understand. So dark energy really does have some properties that are definite, and need to be part of any theory of what the dark energy is.

First, the dark energy is smoothly distributed through space. There is more or less the same amount of dark energy here in this room as anywhere between the galaxies. At the very least, dark energy does not clump noticeably in the presence of a gravitational field. The reason we know that is that if it did clump, we would have noticed the dark energy before, when we measured the energy densities of galaxies and clusters. Gravitational lensing and other means would have shown there to be more energy there than can be explained by the matter, and that would be the dark energy.

Yet we don't see that. The dark energy is the same amount inside a cluster, as outside the cluster. That's why we didn't see it when we looked with gravitational lensing and dynamical means. So you might guess that the dark energy could take a form like some kind of radiation, something moving very quickly. If anything moves fast like photons or neutrinos, it would not cluster into galaxies and clusters, so that would be smoothly distributed, just like the dark energy.

Yet the second important property of dark energy is that it is persistent. The energy density, the number of ergs per cubic centimeter of dark energy, doesn't change as the universe expands. That's the opposite of what radiation does. Radiation loses energy rapidly as the universe expands, so for dark energy you need something that doesn't go away. It is the fact that dark energy is persistent which explains the acceleration of the universe. That's why we're convinced that the dark energy is really something different. It's not a kind of particle that is pulling on other particles, but a kind of field, a kind of substance, a kind of fluid that fills space. Its energy density doesn't go away as the universe expands.

Yet this is asking quite a bit! This is like going around and saying, "OK, we've discovered some stuff that is completely unlike ordinary matter, dark matter, or radiation." Nevertheless, over the course of 1998, people began to buy this story fairly quickly. Why are astronomers who are by nature quite skeptical people, willing to believe this remarkable claim? The real point is that it made everything suddenly make sense.

Like we said, in 1997 things were becoming difficult to understand. We had a prejudice that Ω matter, the density of ordinary stuff plus dark matter, should be 1.0. So we should have the critical density. Yet the observations simply weren't consistent with that. Plus there were other problems that made the universe in which we believe, to not quite make sense when compared with the data that we had.

One such problem was the age problem. Given the amount of stuff in the universe, and given the Friedmann equation, you can calculate how old the universe should be. It's an absolute requirement that the universe be older than the stuff inside it. When people calculated the age of the universe and compared it to the ages of the oldest stars, they were often finding that the stars were older than the universe. That didn't quite make sense. There were large error bars on that, so it was not a hard and fast conclusion, yet it still made people worry that we were missing something. An accelerating universe is older for a given value of the Hubble constant than a decelerating universe would be, so it made the age problem go away, just like that.
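The age comparison can be made concrete. A flat, matter-only universe has age t = (2/3)/H₀, while a flat universe with matter plus a cosmological constant has the closed form t = (2/(3H₀√Ω_Λ)) asinh(√(Ω_Λ/Ω_m)). A sketch assuming H₀ ≈ 70 km/s/Mpc:

```python
import math

MPC_IN_M = 3.086e22
GYR_IN_S = 3.156e16              # one gigayear in seconds
H0 = 70e3 / MPC_IN_M             # assume ~70 km/s/Mpc, in 1/s

def age_matter_only():
    """Flat, matter-only universe: t = (2/3) / H0, in Gyr."""
    return (2 / 3) / H0 / GYR_IN_S

def age_lambda_cdm(omega_m=0.3, omega_l=0.7):
    """Flat universe with matter plus a cosmological constant:
    t = 2 / (3 H0 sqrt(omega_l)) * asinh(sqrt(omega_l / omega_m)), in Gyr."""
    return (2 / (3 * H0 * math.sqrt(omega_l))
            * math.asinh(math.sqrt(omega_l / omega_m)) / GYR_IN_S)

print(f"matter only: {age_matter_only():.1f} Gyr")   # ~9 Gyr
print(f"concordance: {age_lambda_cdm():.1f} Gyr")    # ~13.5 Gyr
```

The accelerating mix buys several extra billion years for the same Hubble constant, comfortably older than the oldest stars.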

Another problem was large-scale structure in the universe. If you had a universe with nothing but matter in it, and the matter was the critical density, you form a lot of structure. There's a lot of matter around, it clumps very easily, and you have more structure in that hypothetical universe than we seem to be finding in the universe we observe. You could explain this by saying that we don't have that much matter, that we don't have the critical density. Yet another way to explain it, is to imagine that there's something that doesn't clump. There's some dark energy that is smoothly distributed and doesn't contribute to the growth of large scale structure.

Finally there is this business about the critical density. Like we said, there was a prejudice on the part of theorists that the critical density was the nicest value for the total density to have. They were therefore hoping that observers would continue to find more matter, even though the observers were telling them that no, they hadn't found any more. It turns out that once you find the universe to be accelerating, and you invoke the presence of dark energy as an explanation for this acceleration, you can ask how much energy you need in the dark energy. The answer is about 70% of the critical density!

In other words, the really nice thing about the dark energy, was that it provided exactly enough energy to make the total energy density of the universe, equal the critical density. So it wasn't just that we'd found a new element of the universe, but it was that we found what was an apparently complete picture, a complete inventory of the universe.

So this is where the pie chart that we began our lectures with, comes from. We have what is now called a concordance cosmology, a view of the universe in which 5% of the stuff in the universe is ordinary matter, 25% is dark matter, and 70% is dark energy. That simple set of ingredients, is enough to make the universe flat, to be the critical density, to get the age right, to correctly explain large-scale structure, the acceleration of the universe, the CMB, and get the matter density right in galaxies and clusters.

That is a lot of observations that come from a small amount of assumptions. That's why people were so quick to jump onto the concordance cosmology bandwagon. It also tells us something about our place in the universe. Not only are we not like Aristotle would have it, sitting at the center of the cosmos, we are not even made of the same stuff as the cosmos! We're only 5% of the universe. The kinds of things we're made of, are only 5% of the energy density of the stuff in the universe.

This is a big deal, a sufficiently big deal that it was recognized by Science magazine in 1998 as the breakthrough of the year. They invented a nice little picture of Albert Einstein, blowing the universe bubbles with his pipe, to illustrate the fact that this dark energy was in fact something that Einstein himself had contemplated, as we will talk about in later lectures.

Of course, even though it makes everything fit together, the dark energy and its role in concordance cosmology, are still dramatic claims. We would certainly not accept the evidence of the supernovae data, as sufficient to believe in this dramatic claim. We want to check things, like that the supernovae are telling us the right thing.

For example, the statement that the supernovae indicate that the universe is accelerating is just the statement that the very distant supernovae are dimmer than we would have thought. So you can invent much more mundane ways to make supernovae dimmer. For example, perhaps there is a cloud of dust in between us and the supernovae that is scattering some of the light. That is absolutely the kind of thing that the supernova groups took very seriously. The point is that when dust scatters light, it doesn't scatter every wavelength equally, but scatters more blue light than red. So the light from a supernova would be reddened if it passed through dust.

One of the things the supernovae groups did was to check very carefully for reddening and other alterations of the spectra of the supernovae they were observing, yet they didn't find any. They also checked that the behavior of the supernovae they observed was the same, whether they came from small, little galaxies, or big galaxies, in clusters, outside clusters, and there was no environmental effect that was leading to the supernovae being different in one place from another.

The real killer check however, on this picture of the universe, would be to make a prediction using it, and then to go measure that prediction. So here is a prediction made from this concordance cosmology. If 5% of the universe is ordinary matter, 25% is dark, and 70% is dark energy, and it all adds up to the critical density, then the spatial curvature of the universe should be zero. The universe should be spatially flat. So far we haven't talked about any observational check on that, so the next lecture will take us through how we know whether or not the universe is spatially flat, and the answer is yes, it is spatially flat. In other words, even without the supernovae data, something is 70% of the energy density of the universe, and that something is the dark energy.

So that's good news, that we understand something about the universe: that 70% of it is dark energy. The bad news is we don't know what that stuff is, on some deep level. So in addition to making sure that we're on the right track, that there is dark energy, theoretical physicists are now faced with the task of explaining what the dark energy is, where it came from, what it might turn into, and how it interacts with other stuff.

There is a simplest guess, which is Einstein's idea of the cosmological constant. We will take that guess seriously, and also look at alternative explanations to see which one fits the best.

13. WIMPs and Supersymmetry - Sean Carroll - Dark Matter, Dark Energy: The Dark Side of the Universe



This lecture ties right in with my previous review on stepping back to see the larger picture in the process of science: how scientists propose unique and testable theories when faced with great unknowns. Sean admits this lecture is all conjectural, an effort to characterize the particles of dark matter. So we should be prepared to take from this presentation a good exposure to that big picture, not necessarily an understanding of how the hypothetical theories actually work. The lecture should motivate you to pursue that with other, more appropriate sources. Otherwise it could make for a frustrating experience!

Scientists would like to use existing particle physics theories to characterize dark matter instead of making up a new one just for dark matter. Supersymmetry is one such theory, and it naturally predicts fermionic WIMPs, a leading dark matter candidate. These are the fermionic superpartners of bosonic particles, with names such as the zino, higgsino, photino, or neutralino. Their stability may be due to conservation of some new quantity, they are heavy, around 1000 times the proton mass, and they are weakly interacting. Direct detection is promising, but depends on the various models. Indirect detection could occur when particle anti-particle annihilation produces gamma rays, which the GLAST satellite could observe. Making our own WIMPs may be possible at Fermilab, and should be even more feasible at CERN with the LHC.

Other dark matter particle candidates, apart from supersymmetry, are:

A Kaluza-Klein particle that arises from the idea of an infinite number of partners resulting from curled-up spatial dimensions, like we'll see later in string theory.

Sterile neutrinos don't feel weak interactions, just gravity. But we don't get anything extra out of this theory, unlike the many spinoffs of supersymmetry.

A bosonic dark matter particle is known as an axion. They have a very small mass, but a large density that agrees well with other theories discussed in later lectures.

This ends the speculation phase of the second larger cycle in the course. We start anew next time with dark energy.

By now, we've squelched any remaining suspicion that the dark matter, found from evidence in the dynamics and motions of galaxies and clusters, could possibly be ordinary matter that was somehow hidden. It can't be gas or dust, since that would fall into clusters of galaxies, heat up, and we'd be able to see it in x-rays. It also very probably cannot be ordinary stars, brown dwarfs, white dwarfs, neutron stars, or black holes, since all of them lead to microlensing events, which we've searched for and can't find in sufficient numbers.

Finally, the real reason we know that ordinary matter in some hidden form can't be the dark matter is that both Big Bang nucleosynthesis and the CMB put a very tight constraint on how much ordinary matter there is in the universe. It's only 5% of the critical density, while the dark matter is something more like 30%. In other words, we have to turn to some non-ordinary kind of matter, some new kind of particle.

So in this lecture, we're going to get serious and start asking what kind of particle the dark matter could be. We have to go beyond the particles we've already established to exist in the standard model of particle physics. So what are the requirements for a new dark matter particle? Most obviously it must be dark, yet the other important requirement is that it must be cold.

So "dark" means not only that it is hard to see, but that it doesn't interact very much. When you look at a galaxy, the shiny part that's easily visible from stars all comes from the ordinary matter, of course. The reason why the galaxy is able to contract is that ordinary matter interacts. Ordinary matter, in the process of collapsing under its own gravitational field, will get stuck when one little bit comes into contact with another, so it can cool off and settle down at the bottom.

Dark matter seems to be distributed in a big, puffy halo, around the galaxy. It's not condensed right in the middle, like visible matter is. The explanation for this, is very easy to come up with. The dark matter particles just pass through each other and go right out. So dark matter needs not only to be dark, but also to be not interacting with itself in any obvious way.

The other requirement for the dark matter particle is that it should be cold. In other words, if the dark matter had a small mass and was moving at high velocity, close to the speed of light, in the early universe, then when a clump tried to collapse the particles would fly right out and keep going, instead of oscillating back and forth the way good dark matter particles should.

So we're trying to invent a new kind of particle that is dark and cold. Let's just check that there's no such particle lying around in the standard model already. Well, which particles in the standard model are dark? Of course we don't want the ones that decay, so neutrons would not be good examples of dark matter particles since they decay away and we need something that will stick around. The only neutral particle in the standard model that is stable, is the neutrino.

Neutrinos are very obvious dark matter candidates. However as it turns out, when you go through the details, because neutrinos are so light, they are not cold. When they decouple or freeze out from the primordial plasma at very early times, they're moving very fast. They qualify as hot dark matter.

If the dark matter we observe were hot, then structure on small scales in the universe wouldn't form. You don't make galaxies in a universe that is mostly hot dark matter. Where the stuff would want to collapse, the neutrinos are moving away very quickly and tend to smooth everything out. Hot dark matter is actually quite strongly ruled out as a possible way for the dark matter to behave, so much so that these days we turn it around: we use cosmology to place a constraint on the mass of the neutrino. If the neutrino mass were too big, neutrinos would contribute too much hot dark matter, yet we know the dark matter is not hot. So that's a good way to get an upper limit on how big the neutrino mass could be.
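To see how that upper limit works, here is a small Python sketch using the standard cosmological relation Omega_nu * h^2 = (sum of neutrino masses) / 93.14 eV. The cutoff Omega_nu < 0.01 below is an illustrative choice to show the logic, not an actual published bound:

```python
# Standard relation for light relic neutrinos:
#   Omega_nu * h^2 = sum(m_nu) / 93.14 eV,
# where h ~ 0.7 is the Hubble parameter in units of 100 km/s/Mpc.
EV_PER_OMEGA_H2 = 93.14  # eV

def omega_nu(m_sum_ev, h=0.7):
    """Fraction of critical density in neutrinos, for total mass m_sum_ev in eV."""
    return m_sum_ev / EV_PER_OMEGA_H2 / h**2

def mass_bound_ev(omega_nu_max, h=0.7):
    """Largest total neutrino mass allowed if Omega_nu must stay below omega_nu_max."""
    return omega_nu_max * h**2 * EV_PER_OMEGA_H2

# Demanding that neutrinos be only a small fraction of the dark matter,
# say Omega_nu < 0.01 (illustrative cutoff, not the real published limit):
print(mass_bound_ev(0.01))  # about 0.46 eV total
```

So the requirement that the dark matter not be hot translates directly into a sub-eV-scale ceiling on the neutrino masses.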

So that's the only possibility in the standard model of particle physics, and we need to turn to particles beyond the standard model. So let's think about what the requirements are on those particles. Of course they must be dark and cold, yet in lecture 9 we explained how to calculate the relic abundances of particles left over from the early universe. If a particle interacts strongly in the early universe, then that particle and its anti-particle will all annihilate away. By the end of the universe's history, by today, we won't have anything left over. So what you need for a particle to be left over in sufficient amounts to be the dark matter is one that does not annihilate very strongly, one that is weakly interacting.

Now when we say "weakly interacting" you might reasonably think it means not interacting very much, or not interacting very often when coming together with another particle. That's true, but now let's actually plug in the numbers. Let's do our calculation of what kind of particle gives rise to the right abundance today to be the dark matter, and ask how often particles should annihilate in order to be good dark matter candidates.

The answer is that the right rate of interaction for a new particle to be a dark matter candidate is exactly what it would have if it interacted through the weak nuclear force. So when we say weakly interacting particles are good dark matter candidates, we don't simply mean particles that don't interact very much, but those that interact via the W and Z bosons, via the weak interactions of the standard model.
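A rough version of this calculation, sometimes called the "WIMP miracle", can be sketched as follows. The relation Omega * h^2 ~ 3e-27 cm^3/s divided by the annihilation cross-section is a standard order-of-magnitude estimate; the prefactor is only good to within factors of a few:

```python
# Back-of-the-envelope relic abundance for a thermally produced particle:
#   Omega_wimp * h^2  ~  3e-27 cm^3/s / <sigma v>,
# where <sigma v> is the thermally averaged annihilation cross-section.
# The numerator is an order-of-magnitude constant, not a precise number.

def omega_wimp_h2(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

# A cross-section of weak-interaction strength is around 3e-26 cm^3/s:
print(omega_wimp_h2(3e-26))  # 0.1 -- close to the observed dark matter density
```

The punchline is exactly the one in the lecture: plug in a weak-force cross-section and out pops roughly the observed dark matter abundance, with no tuning.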

It's not the only possible way to get dark matter, but it's suggestive. It's telling us that if we invent a new particle that is stable, that is not interacting through electromagnetism or the strong interactions, but does interact through the weak interactions, it is a natural dark matter candidate. So the name attached to such a candidate is a WIMP (Weakly Interacting Massive Particle), as opposed to the MACHOs which are the compact collections of ordinary matter.

So we want to make WIMP candidates, new examples of particle physics that give us WIMPs that are stable and could be the dark matter. For most of this lecture, we'll talk about one specific example of a particle physics model that naturally leads to a candidate WIMP dark matter particle. This specific example is called supersymmetry. Yet really what we should be getting out of this lecture, is not the details of supersymmetry as a candidate way to get dark matter, but just an example of the kinds of thought processes that physicists go through when trying to invent new particles. The point is you don't want to just say, "Oh yes, there must be some new particle, and that's the dark matter."

Particles have interactions and come with different properties. We know a lot already about the particle physics of the standard model. So when you're inventing a new particle, that has to fit in somehow. The best dark matter candidates will be those that have some natural interpretations in terms of the particle physics all by themselves, even if we didn't know there was such a thing as dark matter. Supersymmetry is an excellent example of that, which is why it's worth going into in some detail.

So supersymmetry is a hypothetical idea that adds a new symmetry to those we already know about in the standard model. We already mentioned that the standard model of particle physics is characterized by a great amount of symmetry. For example, the most obvious thing you see in the standard model is that when you look at the fermions, they come in three generations. You have the up and down quark in a doublet, the electron and its neutrino in a doublet, and all by themselves those four particles form a self-contained set.

Then you have another four particles that repeat that pattern. Two quarks and two leptons, the charmed and strange quark, and the muon and muon neutrino. Then it happens yet again with the bottom quark and top quark, and the tau and tau neutrino. So the fact that you have a similar structure repeating itself over again, is an example of a symmetry. At a deeper level, symmetries are responsible for the forces between particles, the strong nuclear force, the weak nuclear force, and the electromagnetic force.

So supersymmetry is a very specific kind of symmetry, something different than we've ever seen in the standard model. It's a symmetry that relates bosons to fermions. Bosons are the force particles that can pile on top of each other to give rise to electromagnetic fields, gravitational fields, and so forth. Fermions are the matter particles that take up space. These two seem very different from each other. Remember that there's a thing called spin: bosons always have integer spin, while fermions always have half-integer spin. So supersymmetry is a speculative idea, saying somehow there could be a symmetry relating particles with different amounts of spin.

If that is true, the nice thing is that you can have a lightest supersymmetric particle, which is a perfect dark matter candidate. Such a particle is sometimes called the neutralino, and that's what we'll explore now. So if supersymmetry existed, for every kind of fermion there would be a boson with the same kinds of charges and the same mass, and vice versa: for every boson, a corresponding fermion.

So for example, you have a bosonic particle with a certain mass, electric charge, and interaction with the weak and strong nuclear forces. If supersymmetry existed, there would be a fermion with the same mass, the same electric charge, the same weak and strong nuclear force interactions, yet with different spin. Now you can look in the standard model. There are certainly particles of both kinds of spin in the standard model, bosons and fermions. Yet they certainly don't match up. You can't take the bosons of the standard model and point to any particular fermion as its superpartner. It doesn't quite go like that.

You could also imagine that there are new particles, which we don't see in the standard model yet, but that could be the superpartners of the particles we know and love. For example, the electron would have a partner that was a boson, and would be called the selectron. It would have a charge of -1, just like the electron does, a lepton number of +1, and a quark number of 0. It would not interact with strong interactions, but would interact with weak interactions.

Yet if supersymmetry were exactly right, that selectron would have exactly the same mass as the electron, which is clearly not true. If there were a bosonic particle with the electric charge of the electron and the same mass as the electron, we would have noticed it long ago! So somehow we have to invent an entire new set of particles we have never yet seen. What we have done is given them names. That's a good first start if we haven't actually found them yet. At least we can come up with their names.

The new particles we need to imagine, if we're going to believe that supersymmetry is right, are the fermionic partners of the existing bosons, and the bosonic partners of the existing fermions. The bosons that are the partners of the existing fermions are given names that are derived by tacking an "s" onto the beginning of the name of the fermion. So we have different kinds of quarks for example, whose bosonic superpartners are called squarks. We have different kinds of leptons whose bosonic superpartners are called sleptons. So you have the electron and its partner the selectron, the neutrino and its partner the sneutrino, etc. You can have great fun with this if you go very far!

For the existing bosons, they have fermionic partners that are given names by tacking the suffix "-ino" onto the end of the particle name. So you have for example the photon, which is a boson, and its fermionic superpartner is the photino. The graviton is a boson, so there's the gravitino fermion. The Higgs boson we think exists has a partner called the Higgsino, and so forth. So you get all sorts of particles.

If supersymmetry is right, given the fact that there is no way to match up particles we already observe, it hypothesizes that the real world contains double the number of particles we have actually observed in the standard model of particle physics. So it's a very economical idea in the sense that it's just one little idea, a symmetry between bosons and fermions. It's easy to say, yet it becomes quite prolific in terms of what kinds of particles it predicts. It's not just one more particle tacked onto the standard model, but a doubling of the number.

Now you might ask whether it's worth doing that. Why are we contemplating doubling the number of particles we have in nature? Well, dark matter is a nice benefit from inventing supersymmetry, yet it's not the primary motivation. In fact, the very first reason why supersymmetry was invented was that it is a prediction of string theory, a hypothetical theory of quantum gravity we'll discuss later in the course, which turns out to only work if you have supersymmetry.

Yet then it was recognized that even if string theory isn't right, supersymmetry by itself is not only aesthetically pleasing and elegant as a theory of nature, it also solves some naturalness problems that exist in the standard model. The foremost one is called the hierarchy problem. We won't go into great detail about this, but it's basically the fact that the different mass scales of particle physics are very different from each other. That is the kind of thing that particle physicists don't like, for things to be very different from each other without some good reason.

So the mass of the Higgs boson in the standard model is actually what sets the scale for all the massive particles in the model. Yet it is somehow very different from the high-energy mass scales we're familiar with from gravity, from the Planck scale, or from GUTs (Grand Unified Theories). So why is the Higgs boson so much lighter than the particles we would expect to exist at those very high energies? That's called the hierarchy problem.

It turns out that in supersymmetry, there's a natural explanation for the hierarchy problem. That's why most particle physicists like supersymmetry as a candidate for physics beyond the standard model. The problem of course is that we don't see selectrons, squarks, etc. Somehow they must be hidden from us. In particular, they must weigh a lot more, they must have a much larger mass than we would naturally expect.

How do you do that? The answer is that this symmetry we're inventing, supersymmetry, must somehow be hidden from our immediate view. The idea that symmetries are hidden, is a very familiar one in particle physics. In fact, it's the correct way to think about the weak interactions. We mentioned that the W and Z bosons that carry the weak interactions, are very heavy particles, unlike photons, gluons, and the graviton, which are very light particles.

Why is that? In their natural state, the W and Z bosons that carry the weak interactions, would be massless. Yet it turns out that the symmetry associated with those particles is broken by empty space. In fact, it's broken by the Higgs field, which is why it must exist. The role of the Higgs field is to break the symmetry of the weak interactions and give mass to the W and Z particles.

This maybe sounds like cheating, trying to hide something in a broken symmetry to explain things you don't otherwise understand. Yet the idea that a particle's properties depend on the medium through which it moves is very natural. For example, light does not always travel at the speed of light. The speed of actual light rays is different in air, water, or glass than it would be in empty space. That's because the medium through which the light is traveling has properties all by itself. So when we speak of the speed of light, we really mean the speed of light in a vacuum. Yet in stuff, the speed of light can be very different.
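The refraction analogy in numbers, using v = c/n with textbook approximate refractive indices:

```python
# Light's speed in a medium is v = c / n, where n is the refractive index.
# The n values below are common textbook approximations.
C = 299_792_458  # m/s, speed of light in vacuum

def speed_in_medium(n):
    return C / n

for name, n in [("vacuum", 1.0), ("air", 1.0003), ("water", 1.33), ("glass", 1.5)]:
    print(f"{name}: {speed_in_medium(n) / C:.2%} of c")
```

Same light, different surroundings, different effective behavior; that is the intuition being borrowed for particles moving through the Higgs field.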

Similarly, the idea of a broken symmetry is that some field pervades empty space. Modern particle physics says that even in empty space, there is a Higgs field, something that has a non-zero value. The reason why W and Z bosons have mass, is that they're traveling through this Higgs field. That's a very successful idea, and the particles predicted by this idea, have largely been discovered and Nobel Prizes have been given out.

So we just want to do this same kind of thing, but now with supersymmetry. We want to break supersymmetry, and there must be some mechanism that does this at a deep level. If that happens, then all of the superpartners of the particles in the standard model become heavy. So basically you take the superpartners, which would otherwise have the same masses as the particles we observe, and lift their masses by a large amount by breaking the symmetry. You end up with a whole bunch of particles, all of which are very heavy, at least 1000 times the mass of the proton. So the reason why we haven't discovered supersymmetric particles yet, in this scenario, is that it's just too hard to get there. It's just out of our reach, although we're trying to do it right now.

So an interesting wrinkle of this possibility, is that there could be a new conserved quantity associated with supersymmetry. Remember that there are things called quark number, lepton number, electric charge, all which have quantities that can neither be created nor destroyed. An electrically charged particle cannot decay into a neutral particle.

Well, what if "superness" is also a conserved quantity? What if whether you are a standard model particle or a superpartner is a conserved quantity? Then the superpartners would not be able to decay into the particles we know to exist in the standard model. Therefore the lightest supersymmetric particle would have to be stable. There would be nothing for it to decay into.

So we're saying that there would be a kind of particle that would be stable because it carries some conserved quantity. It's heavy, some 1000 times greater mass than the proton, so we haven't seen it yet, and it can very plausibly be weakly interacting. These are particles that are part of the supersymmetric standard model, they're not completely separate. So you could have particles like the partner of the Higgs boson, the Higgsino. That would be an electrically neutral particle that would feel the weak interactions.

Likewise the Zino, the partner of the Z boson, or even the photino, the partner of the photon, these would all be massive, possibly stable particles, any one of which is a candidate for the lightest supersymmetric particle, and any one of them would make an excellent WIMP. They are all weakly interacting, massive particles, yet we don't know which one is the right one, since we don't know which one is lightest. Under different scenarios of supersymmetry, different particles are going to get different masses, but that's one of the things we need to figure out by taking data. We're not going to know until we do experiments and find the superpartners, which one of them is actually the lightest, the LSP (Lightest Supersymmetric Particle), or the neutralino.

So the reason why we like this theory is because supersymmetry was not invented to give us a dark matter candidate. It was invented for other reasons, but lo and behold, a perfect dark matter candidate pops out! It's easy in supersymmetry to get particles that are stable, weakly interacting, and massive. So what you want to go do, is test this idea, to go look for these particles. There are various ways to do this.

The most promising method over the next few years is called direct detection, which means building an experiment which will actually find a dark matter particle directly. The problem is dark matter particles by construction are weakly interacting, they don't interact very much. So what we have to do is very similar to what has already successfully been done with neutrinos. We need to build a detector that is deep underground, shielded from the noise we're subjected to on the surface of the earth, and is very sensitive to particles coming in and lightly glancing off an atomic nucleus in the detector.

We've already done this for neutrinos, yet neutrinos are, firstly, at a very different energy scale than the dark matter particles are, and secondly, there's a shining beacon in the sky that emits neutrinos, namely the sun. For dark matter particles, we're looking for a background of them, since we don't have a shining source to look at. This makes it more difficult, yet there's a number of experiments going on right now that are actively trying to do exactly this experiment. It's very plausible that over the next few years, there will be a headline in the papers saying that scientists have directly detected the dark matter of the universe.

Yet maybe they won't, we don't know. That would be a hard thing to do, and in different models it becomes very easy or very difficult, so we're trying other ways. One other way is a very clever idea, taking advantage of the fact that not only do you have dark matter particles, but you have dark matter anti-particles! The reason why the dark matter particles have a certain density is that they stopped annihilating once the universe expanded. You have both the particles and their anti-particles in the dark matter, yet they just don't annihilate, because there aren't that many of them around per cubic centimeter. Much like neutrinos, they can pass right through each other very easily. You'd need a very high density of dark matter particles before you begin to see them annihilate.

Yet there are places in the universe where the density of dark matter particles might be very high: the centers of galaxies or of clusters of galaxies, where dark matter particles that were once very spread out collapse, contract, and gather in the same place. There you will begin to see dark matter particles annihilating with dark matter anti-particles. When that happens they're going to give off radiation, high-energy photons. In most models they will give off gamma rays, which are hard for us to observe here on earth. If one comes toward us, it will be absorbed by the atmosphere before reaching the surface.

So NASA and the DOE are building a new satellite called GLAST that will be launched in 2007 to look at gamma rays in the centers of galaxies and clusters of galaxies. That means we'll be indirectly detecting dark matter. If we see a signal of gamma rays from a very specific source, at a very specific energy, that is exactly what you'd expect if particles and their anti-particles were annihilating. Again, this may or may not happen in different models. We have to go out there and do the experiments to see if nature is being nice to us in this way.

Finally we have perhaps the most direct method of all, which is forgetting about the dark matter that surrounds us. Let's just go and make our own dark matter. This is what particle physicists are paid to do! They collide energetic particles together and make new ones. This is what we're trying to do right at this moment, building better particle accelerators to do even better. The reason why it's not a surprise that we haven't yet made dark matter particles is that it's hard to notice even if we do! Dark matter particles are weakly interacting, so they're very hard to make, and are neutral, so they're very hard to detect once you've made them.

In other words, we might be making dark matter particles all the time in current particle accelerators, yet we just don't have enough data to be sure that this is what's going on. As of 2007, the most high energy collisions we can produce here on earth, come from the Fermi National Accelerator Lab (Fermilab) outside Chicago. They make high energy collisions at about 1000 times the mass of a proton, at about the place you'd just expect to see these superpartners being created. The problem is that since it's just at the edge, you might make one or two and yet never know. So we're crossing our fingers and it's certainly possible that the Tevatron accelerator at Fermilab, could make supersymmetric particles, yet we can't guarantee it.

Therefore we're trying to build even better accelerators. The LHC (Large Hadron Collider) is being built right now at CERN, a European particle accelerator lab outside Geneva. It's scheduled to turn on by late 2007, yet it will take at least a year for energies to ramp up. Once they do, they should reach about 10 times the energies at Fermilab. We'll be able to create vast numbers of new particles at the LHC that we could only barely hint at with the Tevatron. So again, it's very plausible that just a couple of years after the accelerator turns on, we'll be awash in supersymmetric particles.

The problem there will be that we have too many new particles, and we'll have to figure out what is going on. It's not going to be an overnight project, yet there will be a lot of excitement involved when we discover new particles at the LHC, and try to make sense of them, to figure out how they fit into particle physics, and whether or not one of them could be the dark matter.

It's also possible, of course, that the dark matter is some neutral particle that does not interact very strongly, yet does not come from supersymmetry. So there are candidate dark matter particles both that qualify as WIMPs and of other kinds. First we'll get one example of a different way to get a WIMP: a particle that is neutral, stable, and feels the weak interactions.

That's something called the lightest Kaluza-Klein particle. This is an idea that has nothing to do with supersymmetry, but says that there are extra dimensions of space. There are tiny directions you can go in space that are curled up into little balls so you can't see them. We'll talk about this in detail when we get to string theory in the last few lectures. Yet the point is that if you have these tiny curled up dimensions, then every particle we already know and love, electrons, photons, and what have you, have an infinite number of partner particles that correspond to particles moving in these extra dimensions, with different amounts of momentum.

Due to quantum mechanics, these different amounts of momentum are not arbitrary, but are quantized. So there is a minimum amount of extra energy that a particle can have from moving around in the extra dimensions. This would show up to us as something called the lightest Kaluza-Klein particle, which could very easily be weakly interacting and massive. It's another very promising candidate for a WIMP, and therefore for the dark matter.
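That quantization can be sketched for the simplest case of a single circular extra dimension: the n-th mode carries momentum n*hbar/R around the circle, so its rest energy is roughly n*hbar*c/R. The radius used below is a made-up illustration, chosen only to put the lightest mode near 1000 GeV:

```python
# Kaluza-Klein tower for one extra dimension curled into a circle of radius R.
HBAR_C_GEV_M = 1.97327e-16  # hbar * c in GeV * meters

def kk_mass_gev(n, radius_m):
    """Rest energy of the n-th Kaluza-Klein mode, m_n ~ n * hbar * c / R."""
    return n * HBAR_C_GEV_M / radius_m

R = 2e-19  # meters -- hypothetical radius putting the first mode near 1000 GeV
for n in range(1, 4):
    print(n, kk_mass_gev(n, R))  # about 987, 1973, 2960 GeV
```

The key feature is the evenly spaced ladder of partners: the smaller the curled-up dimension, the heavier the lightest mode, which is why such particles could so far have escaped detection.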

Then there are also particles that are not weakly interacting at all: they're the dark matter, yet they don't feel the W and Z bosons. One example is the sterile neutrino. This is exactly a neutrino that doesn't feel the weak interactions, which is what the word sterile means in this particular context. So you can invent new kinds of neutrinos which, already not feeling the electromagnetic force or strong nuclear force, also don't feel the weak force. All they feel is gravity, though they can occasionally interact with other kinds of neutrinos. So people are making models of massive sterile neutrinos, calculating how many you can make in the early universe, and it turns out to be easy to get the right abundance to be the dark matter.

The only reason this model is not as popular as supersymmetry, is that you don't get a lot extra out of it. The bonuses you get from supersymmetry are quite considerable, just from the particle physics perspective. Sterile neutrinos help you a little bit, but we don't know if they're part of some larger picture as of yet.

Finally we'll mention axions. They are perhaps the second leading candidate for dark matter particles, after supersymmetric particles. Yet axions are really completely different in conception from what the supersymmetric LSP, or neutralino, would be. Axions are bosons, whereas the supersymmetric particles that would be the dark matter are fermions. The supersymmetric particles are very heavy, some 1000 times the mass of the proton, while axions are very light, with a mass comparable to the neutrino's. They are very low-mass particles, which ordinarily we'd expect to be fast moving.

Neutrinos can't be the dark matter because they're so light that they move very fast, which makes them poor dark matter candidates. So why is it that axions, which are also very light, can nevertheless be cold dark matter? The answer is that the axion is created by a very different mechanism than neutrinos or WIMPs are. The axions in the models that people write down were never interacting with the rest of the particles in the plasma of the very early universe. They were never heated up by interacting with the rest of the stuff in the primordial soup.

Instead, there was a kind of field, an axion field that didn't change. It was just stuck there, and it contained energy. This energy was just constant, not going away until a phase transition happened and this field melted. When that happened, it went from being a constant amount of energy per cubic centimeter, to a bunch of axions with zero velocity. So this is a completely different mechanism than the one you get by creating WIMPs or neutrinos.

It turns out that there are enough free parameters in the model to make this kind of axion from a melting field, with exactly the right kind of density to be the dark matter. So this is good news and bad news. It's good since it's a completely different way to get dark matter particles, yet bad because the ways to go look for axions are therefore completely different too! The kinds of experiments we are doing to try to find WIMPs, in underground detectors, in the sky, in the lab, have a set of corresponding experiments we'd like to do for axions, but they're different experiments.

So people are still doing those experiments, and we're very hopeful we'll find either WIMPs or axions, and might even get especially lucky. The best universe, if you're a theoretical physicist, would be one in which half the dark matter is supersymmetric particles, and half is axions! We'll actually have to do the experiments to see whether nature is so kind to us as that.