
International Loop Quantum Gravity Seminar


This file was generated by an automated blog to book conversion system. Its use is governed by the licensing terms of the original content hosted at ilqgs.blogspot.com/feeds/posts/default.


Contents
Bianchi models in loop quantum cosmology
Inhomogeneous loop quantum cosmology
Shape dynamics
Spin foams from arbitrary surfaces
The Immirzi parameter in spin foam quantum gravity
What is hidden in an infinity?
Spinfoam cosmology with the cosmological constant
Quantum deformations of 4d spin foam models
Observational signatures of loop quantum cosmology?
Inflation and observational constraints on loop quantum cosmology
Loop quantization of a parameterized field theory
Quantum evaporation of 2-d black holes

Bianchi models in loop quantum cosmology


March 26, 2012 by Edward Wilson-Ewing, Marseille.

Parampreet Singh, LSU
Title: Physics of Bianchi models in LQC
PDF of the talk (500KB)
Audio [.wav 40MB], Audio [.aif 4MB].

The word singularity, in physics, is often used to denote a prediction that some observable quantity should be singular, or infinite. One of the most famous examples in the history of physics appears in the Rayleigh-Jeans distribution, which attempts to describe the thermal radiation of a black body in classical electromagnetic theory. While the Rayleigh-Jeans distribution describes black body radiation very well at long wavelengths, it does not agree with observations at short wavelengths. In fact, the Rayleigh-Jeans distribution becomes singular at very short wavelengths, as it predicts that an infinite amount of energy should be radiated in this part of the spectrum: this singularity, which did not agree with experiment, was called the ultraviolet catastrophe. It was later resolved by Planck when he discovered what is now called Planck's law, which is understood to come from quantum physics. In essence, the discreteness of the energy levels of the black body ensures that the black body radiation spectrum remains finite at all wavelengths. One of the lessons to be learnt from this example is that singularities are not physical: in the Rayleigh-Jeans law, the prediction that an infinite amount of energy should be radiated at short wavelengths is incorrect, and it indicates that the theory that led to this prediction cannot be trusted to describe this phenomenon. In this case, it is the classical theory of electromagnetism that fails to describe black body radiation, and it turns out that it is necessary to use quantum mechanics in order to obtain the correct result. In the figure below, we see that for a black body at a temperature of 5000 kelvin, the Rayleigh-Jeans formula works very well for wavelengths greater than 3000 nanometers, but fails for shorter wavelengths. For these shorter wavelengths, it is necessary to use Planck's law, where quantum effects have been included.
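In formulas (these are the standard textbook expressions, added here for reference rather than taken from the talk), the two curves compared in the figure are

$$ B_\lambda^{\rm RJ}(T) = \frac{2 c k_B T}{\lambda^4}, \qquad B_\lambda^{\rm Planck}(T) = \frac{2 h c^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda k_B T} - 1} . $$

The Rayleigh-Jeans expression blows up as the wavelength goes to zero, while Planck's law is exponentially suppressed there and reduces to the Rayleigh-Jeans form at long wavelengths.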

There are also singularities in other theories. Some of the most striking examples of singularities in physics occur in general relativity where the curvature of space-time, which encodes the strength of the gravitational field, diverges and becomes infinite. Some of the best known examples are the big-bang singularity that occurs in cosmological models and the black hole singularity that is found inside the event horizon of every black hole. While some people have argued that the big-bang singularity represents the beginning of time and space, it seems more reasonable that the singularity indicates that the theory of general relativity cannot be trusted when the space-time curvature becomes very large and that quantum effects cannot be ignored: it is necessary to use a theory of quantum gravity in order to study the very early universe (where general relativity says the big bang occurs) and the center of black holes.

In loop quantum cosmology (LQC), simple models of the early universe are studied by using the techniques of the theory of loop quantum gravity. The simplest such model (and therefore the first to be studied) is called the flat Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. This space-time is homogeneous (the universe looks the same no matter where you are in it), isotropic (the universe is expanding at the same rate in all directions), and spatially flat (the two other possibilities are closed and open models, which have also been studied in LQC), and it is considered to provide a good approximation to the large-scale dynamics of the universe we live in. In LQC, it has been possible to study how quantum geometry effects become important in the FLRW model when the space-time curvature becomes so large that it is comparable to one divided by the Planck length squared. A careful analysis shows that the quantum geometry effects provide a repulsive force that causes a bounce and ensures that the singularity predicted in general relativity does not occur in LQC. We will make this more precise in the next two paragraphs.

By measuring the rate of expansion of the universe today, it is possible to use the FLRW model to determine what the size and the space-time curvature of the universe were in the past. Of course, these predictions will necessarily depend on the theory used: general relativity and LQC will not always give the same predictions. General relativity predicts that, as we go back further in time, the universe becomes smaller and smaller and the space-time curvature becomes larger and larger. This continues until, around 13.75 billion years ago, the universe has zero volume and an infinite space-time curvature. This is called the big bang.

In LQC, the picture is not the same. So long as the space-time curvature is considerably smaller than one divided by the Planck length squared, LQC makes the same predictions as general relativity. Thus, as we go back further in time, the universe becomes smaller and the space-time curvature becomes larger. However, there are some important differences when the space-time curvature nears the critical value of one divided by the Planck length squared: in this regime there are major modifications to the evolution of the universe that come from quantum geometry effects. Instead of continuing to contract as in general relativity, the universe slows its contraction before starting to become bigger again. This is called the bounce. After the bounce, as we continue to go further back in time, the universe becomes bigger and bigger and the space-time curvature becomes smaller and smaller. Therefore, as the space-time curvature never diverges, there is no singularity.
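A compact way to summarize this behaviour (the standard effective equation of LQC, quoted here for orientation rather than taken from the talk) is the modified Friedmann equation

$$ H^2 = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_c}\right), $$

where $H$ is the expansion rate, $\rho$ is the matter energy density and $\rho_c$ is a critical density of the order of the Planck density. When $\rho$ is much smaller than $\rho_c$ this reduces to the usual Friedmann equation of general relativity, while at $\rho = \rho_c$ the expansion rate vanishes and the universe bounces instead of reaching a singularity.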

In loop quantum cosmology, we see that the big bang singularity in FLRW models is avoided due to quantum effects and this is analogous to what happened in the theory of black body radiation: the classical theory predicted a singularity which was resolved once quantum effects were included. This observation raises an important question: does LQC resolve all of the singularities that appear in cosmological models in general relativity? This is a complicated question as there are many types of cosmological models and also many different types of singularities. In this talk, Parampreet Singh explains what happens to many different types of singularities in models, called the Bianchi models, that are homogeneous but anisotropic (each point in the universe is equivalent, but the universe may expand in different directions at different rates). The main result of the talk is that all strong singularities in the Bianchi models are resolved in LQC.


Inhomogeneous loop quantum cosmology


February 13, 2012 by David Brizuela, Albert Einstein Institute, Golm, Germany.

William Nelson, Penn State
Title: Inhomogeneous loop quantum cosmology
PDF of the talk (500k)
Audio [.wav 32MB], Audio [.aif 3MB].

William Nelson's talk is a follow-up of the work presented by Iván Agulló a few months ago in this seminar series about their joint work with Abhay Ashtekar. Iván's talk was reviewed in this blog by Edward Wilson-Ewing, so the reader is referred to that entry for completeness. Even though substantial material will overlap with that post, here I will try to focus on other aspects of this research.

Due to the finiteness of the speed of light, when we look at a distant point, like a star, we are seeing the state of that point in the past. For our regular daily distances this fact hardly affects anything, but if we consider larger distances, the effect is noticeable even for our slow-motion human senses. For instance, the sun is 8 light-minutes away from us, so if it suddenly were switched off we would be able to do a fair number of things in the meantime before finding ourselves in complete darkness. For cosmological distances this fact can be really amazing: we can see far back into the past! But how far? Can we really see the instant of creation? The light rays that were emitted during the initial moments of the universe and that arrive at the Earth nowadays form our particle horizon, which defines the border of our observable universe. As a side remark, note that the complete universe could be larger (even infinite) than the observable one, but not necessarily. We could be living in a universe with compact topology (like the surface of a balloon) and the light emitted from a distant galaxy would then reach us from different directions: for instance, one ray directly and another after traveling once around the whole universe. Thus, what we consider different galaxies would be copies of the same galaxy in different stages of its evolution. In fact, we could even see the solar system in a previous epoch! Since the universe has existed for a finite amount of time (around 14 billion years), the first guess would be that the particle

horizon is at that distance: 14 billion light-years. But this is not true, mainly for two different reasons. On the one hand, our universe is expanding, so the sources of the light rays that were emitted during the initial moments of the universe are by now further away, around 46 billion light-years away. On the other hand, at the beginning of the universe, the temperature was so high that atoms, or even neutrons or protons, could not form in a stable way. The state of the matter was a plasma of free elementary particles, in which the photons interacted very easily. The mean free path of a photon was extremely short since it was almost immediately absorbed by some particle. In consequence, the universe was opaque to light, so none of the photons emitted at that epoch could make its way to us. The universe became transparent around 380,000 years after the Big Bang, in the so-called recombination epoch (when the hydrogen atoms started to form), and the photons emitted at that time form what is known as the Cosmic Microwave Background (CMB) radiation. This is the closest event to the Big Bang that we can nowadays observe with our telescopes. In principle, depending on the technology, in the future we might also be able to detect the neutrino and gravitational-wave backgrounds. These were released before the CMB photons, since both neutrinos and gravitational waves could travel through the mentioned plasma without much interaction. The CMB has been explored making use of very sophisticated satellites, like WMAP, and we know that it is highly homogeneous. It has an almost perfect black body spectrum that is peaked at a microwave frequency corresponding to a temperature of 2.7 K. The tiny inhomogeneities that we observe in the CMB are understood as the seeds of the large structures of our current universe.
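For reference (a standard cosmology formula, not something specific to the talk), the present distance to the particle horizon, i.e. the radius of the observable universe, is

$$ d_{\rm hor}(t_0) = a(t_0)\int_0^{t_0} \frac{c\,dt}{a(t)}, $$

where $a(t)$ is the scale factor describing the expansion of the universe; it is the stretching of space encoded in $a(t)$ that turns the naive estimate of 14 billion light-years into roughly 46 billion light-years.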

Furthermore, the CMB is one of the few places where one could look for quantum gravity effects, since the conditions of the universe during its initial moments were very extreme. The temperature was very high, so the energies of interaction between particles were much larger than we could achieve with any accelerator. But we have seen that the CMB photons we observe were emitted well after the Big Bang, around 380,000 years later. Cosmologically this time is insignificant. (If we make an analogy and think of the universe as a middle-aged 50-year-old person, this would correspond to about 12 hours.) Nevertheless, by that time the universe had already cooled down and the curvature was low enough so that, in principle, Einstein's classical equations of general relativity should be a very good approximation to describe its evolution at this stage. Therefore, why do we think it might be possible to observe quantum gravity effects in the CMB? At this point, the inflationary scenario enters the game. According to the standard cosmological model, around 10^(-36) seconds after the Big Bang, the universe underwent an inflationary phase which produced an enormous increase of its size. In a very short instant of time (a few 10^(-32) seconds) the volume was multiplied by a factor 10^78 (a factor of 10^78 in volume corresponds to a factor of about 10^26 in linear size). Think for a while about the incredible size of that number: a regular bedroom would be expanded to the size of the observable universe! This inflationary mechanism was introduced by Alan Guth in the 1980s in order to address several conceptual issues about the early universe like, for instance, why our universe (and in particular the CMB) is so homogeneous. Note that the CMB is composed of points that are very far apart and, in a model without inflation, could


not have had any kind of interaction or information exchange during the whole history of the universe. On the contrary, according to the inflationary theory, all these points were close together at some time in the past, which would have allowed them to reach thermal equilibrium. Furthermore, inflation has had tremendous success and it has proved to be much more useful than originally expected. Within this framework, the observational values of the small inhomogeneities of the CMB are reproduced with high accuracy. Let us see in more detail how this result is achieved. In the usual inflationary models, one assumes the existence in the early universe of a scalar particle (called the inflaton). The inflaton is assumed to have a very large but flat potential. During the inflationary epoch it slowly loses potential energy (or, as it is usually phrased, it slowly rolls down its potential), and produces the exponential expansion of the universe. At the end of this process the inflaton's potential energy is still quite large. Since nowadays we do not observe the presence of such a particle, it is argued that after inflation, during the so-called reheating process, all this potential energy is converted into "regular" (Standard Model) particles, even though this process is not yet well understood. It is also usually assumed that at the onset of inflation the quantum fluctuations of the inflaton (and of the different quantities that describe the geometry of the universe) were in a vacuum state. This quantum vacuum is not a static and simple object, as one might think a priori. On the contrary, it is a very dynamical and complex entity. Due to the Heisenberg uncertainty principle, the laws of physics (like the conservation of energy) are allowed to be


violated during short instants of time. This is well known in regular quantum field theory and it happens essentially because nature does not allow any observation to be performed during such a short time. Therefore, in the quantum vacuum there is a constant creation of virtual particles that, under regular conditions, are annihilated before they can be observed. Nevertheless, the expansion of the universe turns these virtual particles into real entities. Intuitively one can think that a virtual particle and its corresponding antiparticle are created but, before they can interact again to disappear, the inflationary expansion of the universe tears them so far apart that the interaction is not possible anymore. These tiny initial quantum fluctuations, amplified through the process of inflation, then produce the CMB inhomogeneities we observe. Thus, inflation is a kind of magnifying glass that allows us to have experimental access to processes that happened at extremely short scales and hence large energies, where quantum gravity effects might be significant. On the other hand, loop quantum cosmology (LQC) is a quantum theory of gravity that describes the evolution of our universe under the usual assumptions of homogeneity and isotropy. The predictions of LQC coincide with those of general relativity in small curvature regions. That includes the whole history of the universe except for the initial moments. According to general relativity the beginning of the universe happened at the Big Bang, which is quite a misleading name. The Big Bang has nothing to do with an explosion; it is an abrupt event in which the whole space-time continuum came into existence. Technically, this point is called a singularity, where different objects describing the curvature of the spacetime diverge.


Thus general relativity cannot be applied there and, as is often asserted, the theory contains the seeds of its own destruction. LQC smooths out this singularity by considering quantum gravity effects, and the Big Bang is replaced by a quantum bounce (the so-called Big Bounce). According to the new paradigm, the universe existed already before the Big Bounce as a classical collapsing universe. When the energy density became too large, it entered this highly quantum region, where the quantum gravity effects come with the correct sign so that gravity happens to be repulsive. This caused the universe to bounce, and the expansion we currently observe began. The aim of Will's talk is to study the inflationary scenario in the context of LQC and obtain its predictions for the CMB inhomogeneities. In fact, Abhay Ashtekar and David Sloan already showed that inflation is natural in LQC. This means that it is not necessary to choose very particular initial conditions in order to get an inflationary phase. But there are still several questions to be addressed, in particular whether there might be any observable effects due to the pre-inflationary evolution of the universe. As we have already mentioned, in the usual cosmological models the initial state is taken as the so-called Bunch-Davies vacuum at the onset of inflation. This choice of time is quite arbitrary. The natural point at which to choose initial conditions would be the Big Bang, but this is not feasible since it is a singular point and the equations of motion are no longer valid. In any case, the widespread view has been that, even if there were some particles present at the onset of inflation, the huge expansion of the universe would dilute them and thus the final profile of the CMB would not be affected. Nevertheless, recently Iván Agulló and Leonard Parker showed that the presence


of such initial particles does matter for the final result, since it causes the so-called stimulated emission of quanta: initial particles produce more particles, which themselves produce more particles, and so on. In fact, this is the same process on which the nowadays widely used laser devices are based. Contrary to the usual models based on general relativity, LQC offers a special point where suitable initial conditions can be chosen: the Big Bounce. Thus, in this research, the corresponding vacuum state is chosen at that time. The preliminary results presented in the talk seem quite promising. The simplest initial state is consistent with the observational data but, at the same time, it slightly differs from the CMB spectrum obtained within the previous models. These results have been obtained under certain technical approximations, so the next step of the research will be to understand whether this deviation is really physical. If so, this could provide a direct observational test for LQC that would teach us invaluable lessons about the deep quantum regime of the early universe.


Shape dynamics
December 15, 2011 by Julian Barbour, College Farm, Banbury, UK.

Tim Koslowski, Perimeter Institute
Title: Shape dynamics
PDF of the talk (500k)
Audio [.wav 33MB], Audio [.aif 3MB].

I will attempt to give some conceptual background to the recent seminar by Tim Koslowski on Shape Dynamics and the technical possibilities that it may open up. Shape dynamics arises from a method, called best matching, by which motion and


more generally change can be quantified. The method was first proposed in 1982, and its furthest development up to now is described here. I shall first describe a common alternative.

Newton's Method of Defining Motion

Newton's method, still present in many theoreticians' intuition, takes space to be real like a perfectly smooth table top (suppressing one space dimension) that extends to infinity in all directions. Imagine three particles that in two instants form slightly different triangles (1 and 2). The three sides of each triangle define the relative configuration. Consider triangle 1. In Newtonian dynamics, you can locate and orient 1 however you like. Space being homogeneous and isotropic, all choices are on an equal footing. But 2 is a different relative configuration. Can one say how much each particle has moved? According to Newton, many different motions of the particles correspond to the same change of the relative configuration. Keeping the position of 1 fixed, one can place the centre of mass of 2, C2, anywhere; the orientation of 2 is also free. In three-dimensional space, three degrees of freedom correspond to the possible changes of the sides of the triangle (relative data), three to the position of C2, and three to the orientation. The three relative data cannot be changed, but the choices made for the remainder are disturbingly arbitrary. In fact, Galilean relativity means that the position of C2 is not critical. But the orientational data are crucial. Different choices for them put different angular momenta L into the system, and the resulting motions are very different. Two snapshots of relative configurations contain no information about L;


you need three to get a handle on L. Now we consider the alternative.

Dynamics Based on Best Matching

The definition of motion by best matching is illustrated in the figure. Dynamics based on it is more restrictive than Newtonian dynamics. The reason can be read off from the figure. Best matching, as shown in b, does two things. It brings the centers of mass of the two triangles to a common point and sets their net relative rotation about it to zero. This means that a dynamical system governed by best matching is always constrained, in Newtonian terms, to have vanishing total angular momentum L. In fact, the dynamical equations are Newtonian; the constraint L = 0 is maintained by them if it holds at any one instant.
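To make the minimizing procedure concrete, here is a minimal numerical sketch in Python (my own illustrative code, not from the talk; the function name and the equal-weight least-squares cost are my choices) of best matching two planar configurations: shift both centres of mass to a common point, then rotate one configuration to minimize the summed squared distance to the other.

import numpy as np

def best_match(config_a, config_b):
    # config_a, config_b: N x 2 arrays of particle positions (e.g. triangle vertices).
    # Remove the difference in centre of mass, then find the planar rotation of B
    # that minimizes the summed squared distance to A (a 2-d Procrustes step).
    a = config_a - config_a.mean(axis=0)
    b = config_b - config_b.mean(axis=0)
    num = np.sum(a[:, 1] * b[:, 0] - a[:, 0] * b[:, 1])
    den = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    theta = np.arctan2(num, den)             # optimal rotation angle
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    b_matched = b @ rot.T                    # best-matched copy of configuration B
    mismatch = np.sum((a - b_matched) ** 2)  # residual "distance" between the shapes
    return b_matched, mismatch

# Example: two triangles given by their vertex coordinates.
tri_1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_2 = np.array([[2.0, 1.0], [2.9, 1.2], [1.9, 2.1]])
matched, mismatch = best_match(tri_1, tri_2)

The displacements read off after this matching are defined relative to the first configuration, not relative to absolute space, which is precisely the point of the construction.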

Figure 1. The Definition of Motion by Best Matching. Three particles, at the vertices of the grey and dashed triangles at two instants, move relative to each other. The difference between the triangles is a fact, but can one determine unique displacements of the particles? It seems not. Even if we hold the grey triangle fixed in


space, we can place the dashed triangle relative to it in any arbitrary position, as in a. There seems to be no way to define unique displacements. However, we can bring the dashed triangle into the position b, in which it most nearly covers the grey triangle. A natural minimizing procedure determines when best matching is achieved. The displacements that take one from the grey to the dashed triangle are not defined relative to space but relative to the grey triangle. The procedure is reciprocal and must be applied to the complete dynamical system under consideration.

So far, we have not considered size. This is where Shape Dynamics proper begins. Size implies the existence of a scale to measure it by. But, if our three particles are the universe, where is a scale to measure its size? Size is another Newtonian absolute. Best matching can be extended to include adjustment of the relative sizes. This is done for particle dynamics here. It leads to a further constraint. Not only the angular momentum but also something called the dilatational momentum must vanish. The dynamics of any universe governed by best matching becomes even more restrictive than Newtonian dynamics.

Best Matching in the Theory of Gravity

Best matching can be applied to the dynamics of geometry and compared with Einstein's general relativity (GR), which was created as a description of the four-dimensional geometry of spacetime. However, it can be reformulated as a dynamical theory in which three-dimensional geometry (3-geometry) evolves. This was done in the late 1950s by Dirac and Arnowitt, Deser, and Misner (ADM),


who found a particularly elegant way to do it that is now called the ADM formalism and is based on the Hamiltonian form of dynamics. In the ADM formalism, the diffeomorphism constraint, mentioned a few times by Tim Koslowski, plays a prominent role. Its presence can be explained by a sophisticated generalization of the particle best matching shown in the figure. This shows that the notion of change was radically modified when Einstein created GR (though this fact is rather well hidden in the spacetime formulation). The notion of change employed in GR means that it is background independent. In the ADM formalism as it stands, there is no constraint that corresponds to best matching with respect to size. However, in addition to the diffeomorphism constraint, or rather constraints as there are infinitely many of them, there are also infinitely many Hamiltonian constraints. They reflect the absence of an external time in Einstein's theory and the almost complete freedom to define simultaneity at spatially separated points in the universe. It has proved very difficult to take them into account in a quantum theory of gravity. Building on previous work, Tim and his collaborators Henrique Gomes and Sean Gryb have found an alternative Hamiltonian representation of dynamical geometry in which all but one of the Hamiltonian constraints can be swapped for conformal constraints. These conformal constraints arise from a best matching in which the volume of space can be adjusted with infinite flexibility. Imagine a balloon with curves drawn on it that form certain angles wherever they meet. One can imagine blowing up the balloon or letting it contract by different amounts everywhere on its surface. In this process, the angles at which the curves meet cannot change, but the distances between points can. This is called a conformal transformation and is clearly


analogous to changing the overall size of figures in Euclidean space. The conformal transformations that Tim discusses in his talk are applied to curved 3-geometries that close up on themselves like the surface of the earth does in two dimensions. The alternative, or dual, representation of gravity through the introduction of conformal best matching seems to open up new routes to quantum gravity. At the moment, the most promising looks to be the symmetry doubling idea discussed by Tim. However, it is early days. There are plenty of possible obstacles to progress in this direction, as Tim is careful to emphasize. One of the things that intrigues me most about Shape Dynamics is that, if we are to explain the key facts of cosmology by a spatially closed expanding universe, we cannot allow completely unrestricted conformal transformations in the best matching but only the volume-preserving ones (VPCTs) that Tim discusses. This is a tiny restriction but strikes me as the very last vestige of Newton's absolute space. I think this might be telling us something fundamental about the quantum mechanics of the universe. Meanwhile it is very encouraging to see technical possibilities emerging in the new conceptual framework.
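In formulas (using a common convention from the shape dynamics literature, not taken verbatim from the talk), a conformal transformation rescales the spatial metric as

$$ g_{ab} \;\longrightarrow\; \tilde g_{ab} = e^{4\phi}\, g_{ab}, $$

which preserves all angles while changing local distances, and the volume-preserving conformal transformations (VPCTs) mentioned above are those for which the total volume is unchanged,

$$ \int_\Sigma \sqrt{g}\,\left(e^{6\phi} - 1\right) d^3x = 0 . $$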


Spin foams from arbitrary surfaces


October 31, 2011 by Frank Hellman, Albert Einstein Institute, Golm, Germany.

Jacek Puchta, University of Warsaw
Title: The Feynman diagramatics for the spin foam models
PDF of the talk (3MB)
Audio [.wav 35MB], Audio [.aif 3MB].

In several previous blog posts (e.g. here) the spin foam approach to quantum gravity dynamics was introduced. To briefly summarize, this approach describes the evolution of a spin-network via a 2-dimensional surface that we can think of as representing how the network changes through time.

While this picture is intuitively compelling, at the technical level


there have always been differences of opinion on what type of 2-dimensional surfaces should occur in this evolution. This question becomes particularly critical once we start trying to sum over all different types of surfaces. The original proposal for this 2-dimensional surface approach was due to Ooguri, who allowed only a very restricted set of surfaces, namely those called "dual to triangulations of manifolds".

A triangulation is a decomposition of a manifold into simplices. The simplices in successive dimensions are obtained by adding a point and "filling in". The 0-dimensional simplex is just a single point. For the 1-dimensional simplex we add a second point and fill in the line between them. For 2-dimensions we add a third point, fill in the space between the line and the third point, and obtain a triangle. In 3-d we get a tetrahedron, and in 4-d what is called a 4-simplex.

The surface "dual to a triangulation" is obtained by putting a vertex in the middle of the highest dimensional simplex, then connecting these by an edge for every simplex one dimension lower, and to fill in surfaces for every simplex two dimensions lower. An example for the case where the highest dimensional simplex is a triangle is given in the figure, there the vertex abc is in the middle of the triangle ABC, and connected by the dashed lines indicating edges, to the neighboring vertices.


All current spin foam models were created with such triangulations in mind. In fact many of the crucial results of the spin foam approach rely explicitly on this rather technical point.

The price we pay for restricting ourselves to such surfaces is that we do not address the dynamics of the full Loop Quantum Gravity Hilbert space. The spin networks we evolve will always be 4-valent, that is, there are always four links coming into every node, whereas in the LQG Hilbert space we have spin-networks of arbitrary valence. Another issue is that we might wish to study the dynamics of the model using the simplest surfaces first to get a feeling for what to expect from the theory, and for some interesting examples, like spin foam cosmology, the triangulation-based surfaces are immediately quite complicated.

The group of Jerzy Lewandowski therefore suggested generalizing the amplitudes considered so far to fairly arbitrary surfaces, and


gave a method for constructing the spin foam models, considered before in the triangulation context only, on these arbitrary surfaces. This patches one of the holes between the LQG kinematics and the spin foam dynamics. The price is that many of the geometricity results from before no longer hold.

Furthermore it now becomes necessary to handle these general surfaces effectively. A priori a great many of them exist, and it can be very hard to imagine them. In fact the early work on spin foam cosmology overlooked a large number of surfaces that potentially contribute to the amplitude. The work Jacek Puchta presented in this talk solves this issue very elegantly by developing a simple diagrammatic language that allows us to work with these surfaces very easily without having to imagine them.

This is done by describing every node in the amplitude through a network, and then giving additional information that allows us to reconstruct a surface from these networks. Without going into the full details, consider a picture like in the next figure. The solid lines on the right hand side are the networks we consider, the dashed lines are additional data. Each node of the solid lines represents a triangle, every solid line is two triangles glued along an edge, and every dashed line is two triangles glued face to face. Following this prescription we obtain the triangulation on the left. While the triangulation generated by this prescription can be tricky to visualize in general, it is easy to work directly with the networks of dashed and solid lines. Furthermore we don't need to restrict ourselves to


networks that generate triangulations anymore but can consider much more general cases.

This language has a number of very interesting features. First of all these networks immediately give us the spin-networks we need to evaluate to obtain the spin foam amplitude of the surface reconstructed from them.

Furthermore it is very easy to read off what the boundary spin network of a particular surface is. As a strong demonstration of how this language simplifies thinking about surfaces, he showed how all surfaces relevant in the spin foam cosmology context, which had long been overlooked, are easily seen and enumerated using the new language.

The challenge ahead is to understand whether the results obtained in the simplicial setting can be translated into the more general setting at hand. For the geometricity results this looks very challenging. But in any case, the new language looks like it is going to be an indispensable tool for studying spin foams going forward, and for clarifying the link between the canonical LQG approach and the covariant spin foams.


The Immirzi parameter in spin foam quantum gravity


October 01, 2011 by Sergei Alexandrov, Université Montpellier, France.

James Ryan, Albert Einstein Institute
Title: Simplicity constraints and the role of the Immirzi parameter in quantum gravity
PDF of the talk (11MB)
Audio [.wav 19MB], Audio [.aif 2MB].

Spin foam quantization is an approach to quantum gravity. Firstly, it is a "covariant" quantization, in that it does not break space-time into space and time as "canonical" loop quantum gravity (LQG) does. Secondly, it is "discrete" in that it assumes at the outset that space-time has a granular structure, rather than the smooth structure assumed by "continuum" theories such as LQG. Finally, it is based on the "path integral" approach to quantization that Feynman introduced, in which one sums probability amplitudes over all possible trajectories of a system. In the case of gravity one assigns such amplitudes to all possible space-times.


To write the path integral in this approach one uses a reformulation of Einstein's general relativity due to Plebanski. Also, one examines this reformulation for discrete space-times. From the early days the spin foam approach was considered a very close cousin of loop quantum gravity because both approaches lead to the same qualitative picture of quantum space-time. (Remarkably, although one starts with smooth space and time in LQG, after quantization a granular structure emerges.) However, at the quantitative level, for a long time there was a striking disagreement. First of all, there were the symmetries. On the one hand, LQG involves a set of symmetries known technically as the SU(2) group, while on the other, spin foam models had symmetries associated with either the SO(4) group or the Lorentz group. The latter are symmetries that emerge in space-time, whereas the SU(2) symmetry emerges naturally in space. It is not surprising that, working in a covariant approach, the symmetries that emerge naturally are those of space-time, whereas working in an approach where space is distinguished, like the canonical approach, one gets symmetries associated with space. The second difference concerns the famous Immirzi parameter, which plays an extremely important role in LQG but was not even included in the spin foam approach. This is a parameter that appears in the classical formulation but has no observable consequences there (it amounts to a change of variables). Upon LQG quantization, however, physical predictions depend on it, in particular the value of the quantum of area and the entropy of black holes. The situation changed a few years ago with the appearance of


two new spin foam models due to Engle-Pereira-Rovelli-Livine (EPRL) and Freidel-Krasnov (FK). The new models appear to agree with LQG at the kinematical level (i.e. they have similar state spaces, although their specific dynamics may differ). Moreover, they incorporate the Immirzi parameter in a non-trivial way. The basic idea behind these models is the following: in the Plebanski formulation general relativity is represented as a topological BF theory supplemented by certain constraints ("simplicity constraints"). BF theories are well studied topological theories (their dynamics are very simple, being limited to global properties). This simplicity in particular implies that it is well known how to discretize and to quantize BF theories (using, for example, the spin foam approach). The fact that general relativity can be thought of as a BF theory with additional constraints gives rise to the idea that quantum gravity can be obtained by imposing the simplicity constraints directly at the quantum level on a BF theory. For that purpose, using the standard quantization map of BF theories, the simplicity constraints become quantum operators acting on the BF states. The insight of EPRL was that, once the Immirzi parameter is included, some of the constraints should not be imposed as operator identities, but in a weaker form. This makes it possible to find solutions of the quantum constraints which can be put into one-to-one correspondence with the kinematical states of LQG. However, such a quantization procedure does not take into account the fact that the simplicity constraints are not all the constraints of the theory. They should be supplemented by certain other ("secondary") constraints, and together they form what is technically


known as a system of second class constraints. These are very different from the usual kinds of constraints that appear in gauge theories. Whereas the latter correspond to the presence of symmetries in the theory, the former just freeze some degrees of freedom. In particular, at the quantum level they should be treated in a completely different way. To implement second class constraints, one should either solve them explicitly, or use an elaborate procedure called the Dirac bracket. Unfortunately, in the spin foam approach the secondary constraints have been completely ignored so far. At the classical level, if one takes all these constraints into account for continuum space-times, one gets a formulation which is independent of the Immirzi parameter. Such a canonical formulation can be used for a further quantization either by the loop or the spin foam method and leads to results which are still free from this dependence. This raises questions about the compatibility of the spin foam quantization with the standard Dirac quantization based on the continuum canonical analysis. In this seminar James Ryan tried to shed light on this issue by studying the canonical analysis of the Plebanski formulation for discrete space-times. Namely, in his work with Bianca Dittrich, he analyzed the constraints which must be imposed on the discrete BF theory to get a discretized geometry, and how they affect the structure of the theory. They found that the necessary discrete constraints are in a nice correspondence with the primary and secondary simplicity constraints of the continuum theory.
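For orientation, the continuum Plebanski formulation referred to above can be written schematically as (a standard form quoted from the general literature, with conventions of my choosing, rather than the exact expressions used in the talk)

$$ S[B,\omega,\lambda] = \int \Big( B_{IJ}\wedge F^{IJ}[\omega] + \lambda_{IJKL}\, B^{IJ}\wedge B^{KL} \Big), $$

where $F[\omega]$ is the curvature of the connection $\omega$ and the Lagrange multiplier $\lambda$ enforces the simplicity constraints; on the sector of solutions $B^{IJ} = \pm\,\tfrac12\,\epsilon^{IJ}{}_{KL}\, e^K\wedge e^L$ the BF action reduces to that of general relativity. In the models with an Immirzi parameter $\gamma$, it is essentially the combination $B + \tfrac{1}{\gamma}\star B$ that couples to the curvature, which is how $\gamma$ enters the EPRL and FK constructions.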


Besides, it turned out that the independent constraints are naturally split into two sets. The first set expresses the equality of two sectors of the BF theory, which effectively reduces the SO(4) gauge group to SU(2). And indeed, if one explicitly solves this set of constraints, one finds a space of states analogous to that of LQG and of the new spin foam models, with a dependence on the Immirzi parameter. However, the corresponding geometries cannot be associated with piecewise flat geometries (geometries that are obtained by gluing flat simplices, just like one glues flat triangles to form a geodesic dome). These piecewise flat geometries are the geometries usually associated with spin foam models. Instead they produce the so-called twisted geometries recently studied by Freidel and Speziale. To get the genuine discrete geometries appearing, for example, in the formulation of general relativity known as Regge calculus, one should impose an additional set of constraints given by certain gluing conditions. As Dittrich and Ryan succeeded in showing, the formulation obtained by taking into account all constraints is independent of the Immirzi parameter, as it is in the continuum classical formulation. This suggests that the quest for a consistent and physically acceptable spin foam model is far from being accomplished and that the final quantum theory might eventually be free from the Immirzi parameter.


What is hidden in an infinity?


September 12, 2011 by Daniele Oriti, Albert Einstein Institute, Golm, Germany

Matteo Smerlak, ENS Lyon
Title: Bubble divergences in state-sum models
PDF of the slides (180k)
Audio [.wav 25MB], Audio [.aif 5MB].

Physicists tend to dislike infinities. In particular, they take it very badly when the result of a calculation they are doing turns out to be not some number that they could compare with experiments, but infinite. No energy or distance, no velocity or density, nothing in the world around us has infinity as its measured value. Most times, such infinities signal that we have not been smart enough in dealing with the physical system we are considering, that we have missed some key ingredient in its description, or used the wrong mathematical language in describing it. And we do not like to


be reminded of our own lack of cleverness. At the same time, and as a confirmation of the above, much important progress in theoretical physics has come out of a successful intellectual fight with infinities. Examples abound, but here is a historic one. Consider a large 3-dimensional hollow spherical object whose inside is made of some opaque material (thus absorbing almost all the light hitting it), and assume that it is filled with light (electromagnetic radiation) maintained at constant temperature. This object is named a black body. Imagine now that the object has a small hole from which a limited amount of light can exit. If one computes the total energy (i.e. considering all possible frequencies) of the radiation exiting from the hole, at a given temperature and at any given time, using the well-established laws of classical electromagnetism and classical statistical mechanics, one finds that it is infinite. Roughly, this calculation looks as follows: you have to sum all the contributions to the total energy of the radiation emitted (at any given time), coming from all the infinitely many modes of oscillation of the radiation at the temperature T. Since there are infinitely many modes, the sum diverges. Notice that the same calculation can be performed by first imagining that there exists a maximum possible mode of oscillation, and then studying what happens when this supposed maximum is allowed to grow indefinitely. After the first step, the calculation gives a finite result, but the original divergence is obtained again after the second step. In any case, this sum gives a divergent result: infinity! However, this two-step procedure allows one to understand better how the quantity of interest diverges.
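As a concrete illustration (standard textbook material, added here for reference rather than taken from the talk): classically each mode of frequency $\nu$ carries an average energy $k_B T$, and the number of modes per unit volume below a cutoff frequency $\nu_{\max}$ grows like $\nu_{\max}^3$, so the energy density is

$$ u(T) = \int_0^{\nu_{\max}} \frac{8\pi \nu^2}{c^3}\, k_B T \, d\nu = \frac{8\pi k_B T}{3 c^3}\, \nu_{\max}^{3}, $$

finite for any finite cutoff but divergent as $\nu_{\max}\to\infty$. Planck's law replaces the factor $k_B T$ by $h\nu/(e^{h\nu/k_B T}-1)$, which makes the integral finite even without a cutoff.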


Besides being a theoretical absurdity, this is simply false on experimental grounds, since such radiating objects can be realized rather easily in a laboratory. This represented a big crisis in classical physics at the end of the 19th century. The solution came from Max Planck with the hypothesis that light is in reality constituted by discrete quanta (akin to matter particles), later named photons, with a consequently different formula for the emitted radiation from the hole (more precisely, for the individual contributions). This hypothesis, initially proposed for completely different motivations, not only solved the paradox of the infinite energy, but spurred the quantum mechanics revolution which led (after the work of Bohr, Einstein, Heisenberg, Schroedinger, and many others) to the modern understanding of light, atoms and all fundamental forces (except gravity). We see, then, that the need to understand what was really lying inside an infinity, the need to confront it, led to an important jump forward in our understanding of Nature (in this example, of light), and to a revision of our most cherished assumptions about it. The infinity was telling us just that. Interestingly, a similar theoretical phenomenon seems now to suggest that another, maybe even greater, jump forward is needed, toward a new understanding of gravity and of spacetime itself. An object that is theoretically very close to a perfect black body is a black hole. Our current theory of matter, quantum field theory, in conjunction with our current theory of gravity, General Relativity, predicts that such a black hole will emit thermal radiation at a constant temperature inversely proportional to the mass of the black hole.


This is called Hawking radiation. This result, together with the description of black holes provided by general relativity, also suggests that black holes have an entropy associated to them, measuring the number of their intrinsic degrees of freedom. Because a black hole is nothing but a particular configuration of space, this entropy is then a measure of the intrinsic degrees of freedom of (a region of) space itself! However, first of all we have no real clue what these intrinsic degrees of freedom are; second, if the picture of space provided by general relativity is correct, their number and the corresponding entropy are infinite! This fact, together with a large number of other results and conceptual puzzles, prompted a large part of the theoretical physics community to look for a better theory of space (and time), possibly based on quantum mechanics (taking on board the experience from history): a quantum theory of space-time, a quantum theory of gravity. It should not be understood that the transition from classical to quantum mechanics led us away from the problem of infinities in physics. On the contrary, our best theories of matter and of fundamental forces, quantum field theories, are full of infinities and divergent quantities. What we have learned, however, from quantum field theories, is exactly how to deal with such infinities in rather general terms, what to expect, and what to do when such infinities present themselves. In particular, we have learned another crucial lesson about nature: physical phenomena look very different at different energy and distance scales, i.e. if we look at them very closely or if they involve higher and higher energies. The methods


by which we deal with this scale dependence go under the name of the renormalization group, now a crucial ingredient of all theories of particles and materials, both microscopic and macroscopic. How this scale dependence is realized in practice depends of course on the specific physical system considered. Let us consider a simple example. Consider the dynamics of a hypothetical particle with mass m and no spin; assume that what can happen to this particle during its evolution is only one of the following two possibilities: it can either disintegrate into two new particles of the same type or disintegrate into three particles of the same type. Also, assume that the inverse processes are allowed as well (that is, two particles can disappear and give rise to a single new one, and the same can happen for three particles). So there are two possible interactions that this type of particle can undergo, two possible fundamental processes that can happen to it. To each of them we associate a parameter, called a coupling constant, that indicates how strong each possible interaction process is (compared with the other and with other possible processes due, for example, to the interaction of the particles with gravity or with light): one for the process involving three particles, and one for the one involving four particles (this is counting incoming and outgoing particles together). Now, the basic object that a quantum field theory allows us to compute is the probability (amplitude) that, if I first see a number n of particles at a certain time, at a later time I will instead see m particles, with m different from n (because some particles will have disintegrated and others will have been created). All the other quantities of physical interest can be obtained using these probabilities.
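In more standard notation (my own gloss, not part of the talk), such a spinless particle of mass m with these two interaction processes corresponds to a scalar field $\phi$ with a Lagrangian of the schematic form

$$ \mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}\,m^2\phi^2 - \frac{g_3}{3!}\,\phi^3 - \frac{g_4}{4!}\,\phi^4 , $$

where $g_3$ and $g_4$ are the two coupling constants: the cubic term generates the three-particle vertices and the quartic term the four-particle vertices of the Feynman graphs described below.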


Moreover, the theory tells me exactly how this probability should be computed. It goes roughly as follows. First, I have to consider all possible processes leading from n particles to m particles, including those involving an infinite number of elementary creation/disintegration processes. These can be represented by graphs (called Feynman graphs) in which each vertex represents a possible elementary process (see the figure for an example of such a process, made out of interactions involving three particles only, together with the associated graph).

A graph describing a sequence of 3-valent elementary interactions for a point particle, with 2 particles measured both at the initial and at the final time (to be read from left to right)

Second, each of these processes should be assigned a probability (amplitude), that is, a function of the mass of the particle considered and of the coupling constants. Third, this amplitude is in turn a function of the energies of each particle involved in any given process (each particle corresponding to a single line in the graph


representing the process), and this energy can be anything, from zero to infinity. The theory tells me what form the probability amplitude has. Now the total probability for the measurement of n particles first and m particles later is computed by summing over all processes/graphs (including those composed of infinitely many elementary processes) and all the energies of the particles involved in them, weighted by the probability amplitudes. Now, guess what? The above calculation typically gives the always feared result: infinity. Basically, everything that could go wrong actually goes wrong, as in Murphy's law. Not only does the sum over all the graphs/processes give a divergent answer, but also the intermediate sum over energies diverges. However, as we anticipated, we now know how to deal with this kind of infinity; we are not scared anymore and, actually, we have learnt what they mean, physically. The problem mainly arises when we consider higher and higher energies for the particles involved in the process. For simplicity imagine that all the particles have the same energy E, and assume this can take any value from 0 to a maximum value Emax. Just like in the black body example, the existence of the maximum implies that the sum over energies is a finite number, so everything up to here goes fine. However, when we let the maximal energy become infinite, typically the same quantity becomes infinite. We have done something wrong; let's face it: there is something we have not understood about the physics of the system (simple particles as they may be). It could be that, as in the case of the blackbody radiation, we are missing something fundamental about the nature


of these particles, and we have to change the whole probability amplitude. Maybe other types of particles have to be considered as created out of the initial ones. All this could be. However, what quantum field theory has taught us is that, before considering these more drastic possibilities, one should try to re-write the above calculation using coupling constants and a mass that themselves depend on the scale Emax, then compute the probability amplitude again with these scale-dependent constants, and check whether one can now consider the case of Emax growing up to infinity, i.e. consider arbitrary energies for the particles involved in the process. If this can be done, i.e. if one can find coupling constants dependent on the energies such that the result of sending Emax to infinity, i.e. considering larger and larger energies, is a finite, sensible probability, then there is no need for further modifications of the theory, and the physical system considered, i.e. the (system of) particles, is under control. What does all this teach us? It teaches us that the type of interactions that the system can undergo and their relative strengths depend on the scale at which we look at the system, i.e. on what energy is involved in any process the system is experiencing. For example, it could happen that when Emax becomes higher and higher, one coupling constant, as a function of Emax, becomes zero. This would mean that, at very high energies, the process of disintegration of one particle into two (or two into one) does not happen anymore, and only the one involving four particles takes place. Pictorially, only graphs of a certain shape become relevant. Or, it could happen that, at very high energies, the mass of the particles becomes zero,


i.e. the particles become lighter and lighter, eventually propagating just like photons do. The general lesson, besides technicalities and specific cases, is that for any given physical system it is crucial to understand exactly how the quantities of interest diverge, because in the details of such divergences lies important information about the true physics of the system considered. The infinities in our models should be tamed, explored in depth, and listened to. This is what Matteo Smerlak and Valentin Bonzom have done in the work presented at the seminar, for some models of quantum space that are currently at the center of attention of the quantum gravity community. These are so-called spin foam models, in which quantum space is described in terms of spin networks (graphs whose links are assigned discrete numbers, spins, representing elementary geometric data) or, equivalently, in terms of collections of triangles glued to one another along edges, whose geometry is specified by the lengths of all such edges. Spin foam models are then closely related both to loop quantum gravity, whose dynamical aspects they seek to define, and to other approaches to quantum gravity like simplicial gravity. These models, very much like models for the dynamics of ordinary quantum particles, aim to compute (among other things) the probability of measuring a given configuration of quantum space, represented again as a bunch of triangles glued together or as a spin network graph. Notice that here a configuration of quantum space means both a given shape of space (it could be a sphere, a doughnut, or any other fancier shape) and a given geometry (it could be a very big or a very small sphere, a sphere with some bumps here and there, etc). One could also consider computing the probability of a transition from a given


configuration of quantum space to a different one. More precisely, the models that Bonzom and Smerlak studied are simplified ones (with respect to those that aim at describing our 4-dimensional space-time) in which the dynamics is such that, whatever the shape and geometry of space one is considering, should one measure the curvature of that space at any given location during its evolution, one would find zero. In other words these models only consider flat space-times. This is of course a drastic simplification, but not such that the resulting models become uninteresting. On the contrary, these flat models are not only perfectly fine to describe quantum gravity in the case in which space has only two dimensions, rather than three, but are also the very basis for constructing realistic models for 3-dimensional quantum space, i.e. 4-dimensional quantum spacetime. As a consequence, these models, together with the more realistic ones, have been a focus of attention of the community of quantum gravity researchers. What is the problem being discussed, then? As you can imagine, the usual one: when one tries to compute the mentioned probability for a certain evolution of quantum space, even within these simplified models, the answer one gets is the ever-present, but by now only slightly intimidating, infinity. What does the calculation look like? It looks very similar to the calculation of the probability of a given process of evolution of particles in quantum field theory. Consider the case in which space is 2-dimensional and therefore space-time is 3-dimensional. Suppose you want to consider the probability of measuring first n triangles glued to one another to

40

form, say, a 2-dimensional sphere (the surface of a soccer ball) of a given size, and then m triangles now glued to form, say, the surface of a doughnut. Now take a collection of an arbitrary number of triangles and glue them to one another along edges to form a 3-dimensional object of your choice, just like kids stick LEGO blocks to one another to form a house or a car or some spaceship (you see, science is in many ways the development of children's curiosity by other means). It could be as simple as a soccer ball, in principle, or something extremely complicated, with holes, multiple connections, anything. There is only one condition on the 3-dimensional object you can build: its surface should be formed, in the example we are considering here, by two disconnected parts: one in the shape of a sphere made of n triangles, and one in the shape of the surface of a doughnut made of m triangles. This condition would for example prevent you from building a soccer ball, which you could do, instead, if one wanted to consider only the probability of measuring n triangles forming a sphere, and no doughnut was involved. Too bad. We'll be lazy in this example and consider a doughnut but no soccer ball. Anyway, apart from this, you can do anything. Let us pause for a second to clarify what it means for a space to have a given shape. Consider a point on the sphere and take a path on it that starts at the given point and after a while comes back to the same point, forming a loop. Now you see that there is no problem in taking this loop to become smaller and smaller, eventually shrinking to a point and disappearing. Now do the same operation on the surface of a doughnut. You will see that certain loops can again be shrunk to a point and made to disappear,
while others cannot. These are the ones that go around the hole of the doughnut. So you see that operations like these can help us determine the shape of our space. The same holds true for 3d spaces; in fact, you just need many more operations of this type. OK, now you finish building your 3-dimensional object made of as many triangles as you want. Just like the triangles on the boundary of the 3d object, those forming the sphere and the doughnut, the triangles forming the 3d object itself come with numbers associated to their edges. These numbers, as said, specify the geometry of all the triangles, and therefore of the sphere, of the doughnut and of the 3d object that has them on its boundary.
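As an aside, the loop-shrinking test just described is one way to tell a sphere from a doughnut; another, which is particularly easy to run on a computer once a surface is given as a collection of triangles, is the Euler characteristic V - E + F (vertices minus edges plus faces), which equals 2 for any triangulation of a sphere and 0 for any triangulation of a doughnut, no matter how many triangles are used. The little Python sketch below is my own illustration of this point, using standard textbook triangulations; it is not part of the work discussed in the seminar.

def euler_characteristic(vertices, edges, faces):
    # A topological invariant: it does not change if the surface is cut into
    # more (or fewer) triangles, only if the shape of the surface changes.
    return vertices - edges + faces

# (V, E, F) counts for some standard triangulations.
surfaces = {
    "sphere, triangulated as a tetrahedron boundary": (4, 6, 4),
    "sphere, triangulated as an icosahedron boundary": (12, 30, 20),
    "doughnut/torus, 7-vertex Csaszar triangulation": (7, 21, 14),
}

for name, (v, e, f) in surfaces.items():
    print(f"{name}: V - E + F = {euler_characteristic(v, e, f)}")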

A collection of glued triangles forming a sphere (left) and a doughnut (right); the interior 3d space can also be built out of glued triangles having the given shape on the boundary: for the first object, the interior is a ball; for the second it forms what is called a solid torus. Pictures from http://www.hakenberg.de/

The theory (the spin foam model you are studying) should give you a probability for the process considered. If the triangles forming the sphere represent how quantum space was at first, and the triangles forming the doughnut how it is in the end, the 3d object chosen
represents a possible quantum space-time. In the analogy with the particle process described earlier, the n triangles forming a sphere correspond to the initial n particles, the m triangles forming the doughnut correspond to the final m particles, and the triangulated 3d object is the analogue of a possible interaction process, a possible history of triangles being created/destroyed, forming different shapes and changing their size; this size is encoded in their edge lengths, which is the analogue of the energies of the particles. The spin foam model now gives you the probability for the process in the form of a sum over the probabilities for all possible assignments of lengths to the edges of the 3d object, each probability function enforcing that the 3d object is flat (it gives probability equal to zero if the 3d object is not flat). As anticipated, the above calculation gives the usual nonsensical infinity as a result. But again, we now know that we should get past the disappointment, and look more carefully at what this infinity hides. So what one does is again to imagine that there is a maximal length that the edges of triangles can have, call it Emax, define the truncated amplitude, and study carefully exactly how it behaves when Emax grows, when it is allowed to become larger and larger. In a sense, in this case, what is hidden inside this infinity is the whole complexity of a 3d space, at least of a flat one. What one finds is that hidden in this infinity, and carefully revealed by the scaling of the above amplitude with Emax, is all the information about the shape of the 3d object, i.e. of the possible 3d spacetime considered, and all the information about how this 3d spacetime has been constructed out of triangles. That's a lot of information!
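To make the strategy of "cut the sum off at Emax, then watch how fast it grows" concrete, here is a toy Python sketch of my own; the weights below are an arbitrary illustrative choice, not the actual Bonzom-Smerlak amplitude. The point is only the method: the truncated sums diverge as the cutoff is removed, but they do so at a definite rate, and it is this rate that, in the real models, carries the information about the shape and construction of the 3d object.

import math

# Toy 'amplitude': a divergent sum over a discrete label j, cut off at j_max.
# In the real spin foam models the labels are spins/edge lengths and the
# weights come from the model; (2j + 1)**2 is just an illustrative stand-in.
def truncated_amplitude(j_max):
    return sum((2 * j + 1) ** 2 for j in range(j_max + 1))

for j_max in (10, 100, 1000, 10000):
    print(j_max, truncated_amplitude(j_max))

# The divergence is very regular: comparing two cutoffs gives the growth rate.
a1, a2 = truncated_amplitude(1000), truncated_amplitude(10000)
exponent = (math.log(a2) - math.log(a1)) / math.log(10)
print("growth exponent with the cutoff:", round(exponent, 2))  # close to 3
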

Bonzom and Smerlak, in the work described at the seminar, have gone a very long way toward unraveling all this information, delving deeper and deeper into the hidden secrets of this particular infinity. Their work is developed in a series of papers, in which they offer a very elegant mathematical formulation of the problem and a new approach toward its solution, progressively sharpening their results and improving our understanding of these specific spin foam models for quantum gravity, of the way they depend on the shape and on the specific construction of each 3d spacetime, and of what shape and construction give, in some sense, the bigger infinity. Their work represents a very important contribution to an area of research that is growing fast and in which many other results, from other groups around the world, had already been obtained and are still being obtained nowadays. There is even more. The analogy with particle processes in quantum field theory can be made sharper, and one can indeed study peculiar types of field theories, called group field theories, such that the above amplitude is indeed generated by the theory and assigned to the specific process, as in spin foam models, and at the same time all possible processes are taken into account, as in standard quantum field theories for particles. This change of framework, embedding the spin foam model into a field theory language, does not change much the problem of the divergence of the sum over the edge lengths, nor its infinite result. And it does not change the information about the shape of space encoded in this infinity. However, it changes the perspective by
which we look at this infinity and at its hidden secrets. In fact, in this new context, space and space-time are truly dynamical; all possible spaces and space-times have to be considered together and on an equal footing, and compete in their contribution to the total probability for a certain transition from one configuration of quantum space to another. We cannot just choose one given shape, do the calculation and be content with it (once we have dealt with the infinity resulting from doing the calculation naively). The possible space-times we have to consider, moreover, include really weird ones, with billions of holes and strange connections from one region to another, and 3d objects that do not really look like sensible space-times at all, and so on. We have to take them all into account, in this framework. This is of course an additional technical complication. However, it is also a fantastic opportunity. In fact, it offers us the chance to ask and possibly answer a very interesting question: why is our space-time, at least at our macroscopic scale, the way it is? Why does it look so regular, so simple in its shape, actually as simple as a sphere is? Try it! We can consider an imaginary loop located anywhere in space and shrink it to a point, making it disappear without any trouble, right? If the dynamics of quantum space is governed by a model (spin foam or group field theory) like the ones described, this is not obvious at all, but something to explain. Processes that look as nice as our macroscopic space-time are but a really tiny minority among the zillions of possible space-times that enter the sum we discussed, among all the possible processes that have to be considered in the above calculations. So, why should they dominate and end up being the truly important ones, those that best approximate our macroscopic space-time? Why and how do they emerge from the
others and originate, from this quantum mess, the nice space-time we inhabit, in a classical, continuum approximation? What is the true quantum origin of space-time, in both its shape and geometry? The way the amplitudes grow with the increase of Emax is where the answer to these fascinating questions lies. The answer, once more, is hidden in the very same infinity that Bonzom, Smerlak, and their many quantum gravity colleagues around the world are so bravely taming, studying, and, step by step, understanding.


Spinfoam cosmology with the cosmological constant


August 30, 2011 by David Sloan, Institute for Theoretical Physics, Utrecht University, Netherlands. Francesca Vidotto, CNRS Marseille Title: Spinfoam cosmology with the cosmological constant PDF of the slides (3 MB) Audio [.wav 26MB], Audio [.aif 2MB].

Current observations of the universe show that it appears to be expanding. This is observed through the red-shift - a cosmological Doppler effect - of supernovae at large distances. These giant explosions provide a 'standard candle', a fixed signal whose color indicates relative motion to an observer. Distant objects therefore appear not only to be moving away from us, but to be accelerating as they do so. This acceleration cannot be accounted for in a universe filled with 'ordinary' matter such as dust or radiation. To provide acceleration there must be a form of matter
which has negative pressure. The exact nature of this matter is unknown, and hence it is referred to as 'dark energy'.
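Why negative pressure does the job can be seen from the standard acceleration equation of general-relativistic cosmology, a''/a = -(4 pi G / 3)(rho + 3p/c^2): the expansion accelerates exactly when the pressure-to-energy ratio w = p/(rho c^2) drops below -1/3. The short Python check below simply restates this textbook fact; it is an illustration added here, not part of the seminar.

# Sign of the acceleration a''/a, which is proportional to -(rho + 3p)
# (units with c = 1; the positive prefactor 4*pi*G/3 does not affect the sign).
def acceleration_sign(w, rho=1.0):
    p = w * rho                 # equation of state p = w * rho
    return -(rho + 3.0 * p)

for name, w in [("dust (w = 0)", 0.0),
                ("radiation (w = 1/3)", 1.0 / 3.0),
                ("cosmological constant (w = -1)", -1.0),
                ("observed dark energy (w = -1.08)", -1.08)]:
    verdict = "accelerates" if acceleration_sign(w) > 0 else "decelerates"
    print(f"{name}: expansion {verdict}")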

An image of the remnants of the Tycho type Ia supernova as recorded by NASA's Spitzer observatory and originally observed by Tycho Brahe.

According to the standard model of cosmology, 73% of the matter content of the universe consists of dark energy. It is the dominant component of the observable universe, with dark matter making up most of the remainder (ordinary matter that makes up stars, planets, and nebulae comprises just 4%). In cosmology the universe is assumed to be broadly homogeneous and isotropic, and therefore the types of matter
present are usually parametrized by the ratio (w) of their pressure to energy. Dark energy is unlike normal matter as it exhibits negative pressure. Indeed, in observations recently made by Riess et al. this ratio has been determined to be -1.08 ± 0.1. There are several models which attempt to explain the nature of dark energy. Among them are quintessence, which consists of a scalar field whose pressure varies with time, and void (or Swiss-cheese) models, which seek to explain the apparent expansion as an effect of large scale inhomogeneities. However, the currently favored model for dark energy is that of the cosmological constant, for which w=-1. The cosmological constant has an interesting history as a concept in general relativity (GR). Originally introduced by Einstein, who noted there was the freedom to introduce it in the equations of the theory, it was an attempt to counteract the expansion of the universe that appeared in general relativistic cosmology. It should be remembered that at that time it was thought the universe was static. The cosmological constant was quickly shown to be insufficient to lead to a stable, static universe. Worse, later observations showed the universe did expand as general relativistic cosmology seemed to suggest. However, the freedom to introduce this new parameter into the field equations of GR remained of theoretical interest, its cosmological solutions yielding (anti) de Sitter universes which can have a different topology to the flat cases. The long-term fate of the universe is generally determined by the cosmological constant - for large enough positive values the universe will expand indefinitely, accelerating as it does so. For negative values the universe will eventually recollapse, leading to a
future 'big crunch' singularity. Recently, through supernovae observations, the value of the cosmological constant has been determined to be small yet positive. In natural (Planck) units, its value is 10^(-120), a number so incredibly tiny that it appears unlikely to have occurred by chance. This 'smallness' or 'fine tuning' problem has elicited a number of tentative explanations, ranging from anthropic arguments (since much higher values would make life impossible) to virtual wormholes; however, as yet there is no well-accepted answer. The role of the cosmological constant can be understood in two separate ways - it can be considered either as a piece of the geometry or as part of the matter components of the field equations. As a geometrical entity it can be considered just as one more factor in the complicated way that geometry couples to matter, but as matter it can be associated with a 'vacuum' energy of the universe: energy associated with empty space. It is this dual nature that makes the cosmological constant an ideal test candidate for the introduction of matter into fundamental theories of gravity. The work of Bianchi, Krajewski, Rovelli and Vidotto, discussed in the ILQG seminar, concerns the addition of this term into the spin-foam cosmological models. Francesca describes how one can introduce a term which yields this effect into the transition amplitudes (a method of calculating dynamics) of spin-foam models. This new ingredient allows Francesca to cook up a new model of cosmology within the spin-foam cosmology framework. When added to the usual recipe of a dipole graph, vertex expansions, and coherent states, the results described
indeed appear to match well with the description of our universe on large scales. The inclusion of this new factor brings insight from quantum deformed groups, which have been proposed as a way of making the theory finite. This is an exciting development, as the spin-foam program is a 'bottom-up' approach to the problem of quantum gravity. Rather than beginning with GR as we know it, and making perturbations around solutions, the spin-foam program starts with networks representing the fundamental gravitational fields and calculates dynamics through a foam of graphs interpolating between two networks. As such, recovering known physics is not a sure thing ahead of time. The results discussed in Francesca's seminar provide a firmer footing for understanding the cosmological implications of the spin-foam models and take these closer to observable physics.


Quantum deformations of 4d spin foam models


August 03, 2011 by Hanno Sahlmann, Asia Pacific Center for Theoretical Physics and Physics Department, Pohang University of Science and Technology, Korea. Winston Fairbairn, Hamburg University Title: Quantum deformation of 4d spin foam models PDF of the slides (300k) Audio [.wav 36MB], Audio [.aif 3MB].

The work Winston Fairbairn talked about is very intriguing because it brings together a theory of quantum gravity, and some very interesting mathematical objects called quantum groups, in a way that may be related to the non-zero cosmological constant that is observed in nature! Let me try to explain what these things are, and how they fit together.

Quantum groups


A group is a set of things that you can multiply with each other to obtain yet another element of the group. So there is a product. Then there also needs to be a special element, the unit, that when multiplied with any element of the group, just gives back the same element. And finally there needs to be an inverse to every element, such that if one multiplies an element with its inverse, one gets the unit. For example, the integers are a group under addition, and the rotations in space are a group under composition of rotations. Groups are one of the most important mathematical ingredients in physical theories because they describe the symmetries of a physical system. A group can act in different ways on physical systems. Each such way is called a representation. Groups have been studied in mathematics for hundreds of years, and so a great deal is known about them. Imagine the excitement when it was discovered that there exists a more general (and complicated) class of objects that nevertheless have many of the same properties as groups, in particular with respect to their mathematical representations. These objects are called quantum groups. Very roughly speaking, one can get a quantum group by thinking about the set of functions on a group. Functions can be added and multiplied in a natural way. And additionally, the group product, the inversion and the unit of the group itself induce further structures on the set of functions on the group. The product of functions is commutative - fg and gf are the same thing. But one can now consider sets of functions that have all the required extra structure to make them sets of functions on a group - except for the fact that the product is now not commutative
anymore. Then the elements cannot be functions on a group anymore -- in fact they can't be functions at all. But one can still pretend that they are functions on some funny type of set: a quantum group. Particular examples of quantum groups can be found by deforming the structures one finds for ordinary groups. In these examples, there is a parameter q that measures how big the deformations are. q=1 corresponds to the structure without deformation. If q is a complex number with q^n=1 for some integer n (i.e., q is a root of unity), the quantum groups have particular properties. Another special class of deformations is obtained for q a real number. Both of these cases seem to be relevant in quantum gravity.

Quantum gravity

Finding a quantum theory of gravity is an important goal of modern physics, and it is what loop quantum gravity is all about. Since gravity is also a theory of space, time, and how they fit together in a space-time geometry, quantum gravity is believed to be a very unusual theory, one in which quantities like time and distance come in discrete bits (atoms of space-time, if you like) and are not all simultaneously measurable. One way to think about quantum theory in general is in terms of what is known as "path integrals". Such calculations answer the question of how probable it is that a given event (for example two electrons scattering off each other) will happen. To compute the path integral, one must sum up complex numbers (amplitudes), one for each way that the thing under study can happen. The probability is then given in terms of this sum. Most of the time this involves infinitely many possible ways: the electrons, for example, can scatter by exchanging one photon, or two, or three, or..., the first photon
can be emitted in infinitely many different places, and have different energies, etc. Therefore computing path integrals is very subtle, needs approximations, and can lead to infinite values. Path integrals were introduced into physics by Feynman. Not only did he suggest thinking about quantum theory in terms of these integrals, he also introduced an ingenious device useful for their approximate calculation. To each term in an approximate calculation of some particular process in quantum field theory, he associated what we now call its Feynman diagram. The nice thing about Feynman diagrams is that they not only have a technical meaning. They can also be read as one particular way in which a process can happen. This makes working with them very intuitive.
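The rule at the heart of the path integral, one complex amplitude per way the process can happen, added first and squared afterwards, can be illustrated with the simplest possible case of just two alternatives (think of the two slits in the double-slit experiment). The following lines are a generic quantum mechanics illustration added here, not a spin foam calculation.

import cmath

# Two ways for the same event to happen, each contributing a complex amplitude.
amp_1 = 0.6 * cmath.exp(1j * 0.0)   # first way
amp_2 = 0.6 * cmath.exp(1j * 2.0)   # second way, with a different phase

# Quantum rule: add the amplitudes, then square to get an (unnormalized) probability.
p_quantum = abs(amp_1 + amp_2) ** 2

# For comparison, adding the two probabilities separately misses the interference.
p_no_interference = abs(amp_1) ** 2 + abs(amp_2) ** 2

print("amplitudes added, then squared :", round(p_quantum, 3))
print("probabilities added separately :", round(p_no_interference, 3))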

(image from Wikipedia)

It turns out that loop quantum gravity can also be formulated using some sort of path integral. This is often called spin foam gravity. The spin foams in the name are actually very nice analogs of Feynman diagrams in ordinary quantum theory: They are also a technical device in an approximation of the full integral - but as for Feynman diagrams, one can read them as a space-time history of a process - only now the process is how space-time itself changes!
Associating the amplitude to a given diagram usually involves integrals. In the case of quantum field theory there is an integral over the momentum of each particle involved in the process. In the case of spin foams, there are also integrals or infinite sums, but those are over the labels of group representations! This is the magic of loop quantum gravity: Properties of quantized space-time are encoded in group representations. The groups most relevant for gravity are known technically as SL(2,C) -- a group containing all the Lorentz transformations -- and SU(2), a subgroup related to spatial rotations.

Cosmological constant

In some theories of gravity, empty space has weight, and hence influences the dynamics of the universe. This influence is governed by a quantity called the cosmological constant. Until a bit more than ten years ago, the possibility of a non-zero cosmological constant was not considered very seriously, but to everybody's surprise, astronomers then discovered strong evidence that there is a positive cosmological constant. Creating empty space creates energy! The effect of this is so large that it seems to dominate
cosmological evolution at the present epoch (and there has been theoretical evidence for something like a cosmological constant in earlier epochs, too). Quantum field theory in fact predicts that there should be energy in empty space, but the observed cosmological constant is tremendously much smaller than what would be expected. So explaining the observed value of the cosmological constant presents quite a mystery for physics.

Spin foam gravity with quantum groups

Now I can finally come to the talk. As I've said before, path integrals are complicated objects, and infinities do crop up quite frequently in their calculation. Often these infinities are due to problems with the approximations one has made, and sometimes several can be canceled against each other, leaving a finite result. To analyze those cases, it is very useful to first consider modifications of the path integral that remove all infinities, for example by restricting the ranges of integrations and sums. This kind of modification is called "introducing a regulator", and it certainly changes the physical content of the path integral. But introducing a regulator can help to analyze the situation and rearrange the calculation in such a way that in the end the regulator can be removed, leaving a finite result. Or one may be able to show that the existence of the regulator is in fact irrelevant, at least in certain regimes of the theory. Now back to gravity: For the case of Euclidean (meaning a theory of pure space rather than a theory of space-time; this is unphysical but simplifies certain calculations) quantum gravity in three dimensions, there is a nice spinfoam formulation due to Ponzano and Regge, but as can be anticipated, it gives divergent answers in certain
situations. Turaev and Viro then realized that replacing the group by its quantum group deformation at a root of unity furnishes a very nice regularization. First of all, it does what a regulator is supposed to do, namely render the amplitude finite. This happens because the quantum group in question, with q a root of unity, turns out to have only finitely many irreducible representations, so the infinite sums that were causing the problems are now replaced by finite sums. Moreover, as the original group was only deformed, and not completely broken, one expects that the regulated results stay reasonably close to the ones without regulator. In fact, something even nicer happens: It turned out (work by Mizoguchi and Tada) that the amplitudes in which the group is replaced by its deformation into a quantum group correspond to another physical theory - quantum gravity with a (positive) cosmological constant! The deformation parameter q is directly related to the value of the constant. So this regulator is not just a technical tool to make the amplitudes finite. It has real physical meaning. Winston's talk was not about three dimensional gravity, but about the four dimensional version - the real thing, if you like. He was considering what is called the EPRL vertex, a new way to associate amplitudes to spin foams, devised by Engle, Pereira, Rovelli and Livine, which has created a lot of excitement among people working on loop quantum gravity. The amplitudes obtained this way are finite in a surprising number of circumstances, but infinities are nevertheless encountered as well. Winston Fairbairn, together with Catherine Meusburger (and, independently, Muxin Han), was now able to write down the new vertex function in which the group is replaced by a deformation into a quantum group. In fact, they developed a nice graphical calculus to do so. What is more, they
were able to show that it gives finite amplitudes. Thus the introduction of the quantum group does its job as a regulator. As for the technical details, let me just say that they are fiercely complicated. To appreciate the intricacy of this work, you should know that the group SL(2,C) involved is what is known as non-compact, which makes its quantum group deformations very complicated and challenging structures (intuitively, compact sets have less chance of producing infinities than non-compact ones). Also, the EPRL vertex function relies on a subtle interplay between SU(2) and SL(2,C). One has to understand this interplay on a very abstract level to be able to translate it to quantum groups. The relevant type of deformation in this case has a real parameter q. In this case, there are still infinitely many irreducible representations but it seems that it is the quantum version of the interplay between SU(2) and SL(2,C) that brings about the finiteness of the sums. Thanks to this work, we now have a very interesting question on our hands: Is the quantum group deformation of the EPRL theory again related to gravity with a cosmological constant? Many people bet that this is the case, and the calculations to investigate this question have already begun, for example in a recent preprint by Ding and Han. Also, this begs the question of how fundamentally intertwined quantum gravity is with quantum groups. There were some interesting discussions about this already during and after the talk. At this point, the connection is still rather mysterious on a fundamental level.
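A concrete way to see the mechanism by which a root of unity tames the sums, mentioned above for the three-dimensional Turaev-Viro case, is the 'quantum dimension' of the spin-j representation, which for q = exp(i pi / r) takes the form sin((2j+1) pi / r) / sin(pi / r). It vanishes at j = (r-1)/2, and only the finitely many spins below that value contribute. The Python sketch below just evaluates this standard formula as an illustration; it is not taken from the work presented in the talk.

import math

r = 5  # deformation at the root of unity q = exp(i*pi/r)

def quantum_dimension(j, r):
    """Quantum dimension [2j+1]_q of the spin-j representation of U_q(su(2))."""
    return math.sin((2 * j + 1) * math.pi / r) / math.sin(math.pi / r)

# Step through spins 0, 1/2, 1, ... : the quantum dimension drops to zero at
# j = (r - 1)/2, so only the finitely many spins below that value are admissible.
for two_j in range(0, r + 2):
    j = two_j / 2
    print(f"j = {j:4.1f}: quantum dimension = {quantum_dimension(j, r):+.3f}")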


Observational signatures of loop quantum cosmology?


April 17, 2011 by Edward Wilson-Ewing, Penn State Ivan Agullo, March 29th 2011. Observational signatures of loop quantum cosmology? PDF of the slides, and audio in either .wav (40MB) or .aif format (4MB).

In the very early universe the temperature was so high that electrons and protons would not combine to form hydrogen atoms but rather formed a plasma that made it impossible for photons to travel significant distances as they would continuously interact with the electrons and protons. However, as the early universe expanded, it cooled and, 380 000 years after the Big Bang, the temperature became low enough for hydrogen atoms to form in a process called recombination. At this point in time, it became possible for photons to travel freely, as the electrons and protons had combined to become electrically neutral atoms. It is possible today to observe the photons from that era that are still travelling

through the universe: these photons form what is called the cosmic microwave background (CMB). By observing the CMB, we are in fact looking at a photograph of what the universe looked like only 380 000 years after the Big Bang! Needless to say, the detection of the CMB was an extremely important discovery and the study of it has taught us a great deal about the early universe. The existence of the CMB was first predicted in 1948 by George Gamow, Ralph Alpher and Robert Herman, when they estimated its temperature to be approximately 5 degrees Kelvin. Despite this prediction, the CMB was not detected until Arno Penzias and Robert Wilson made their Nobel Prize-winning discovery in 1964. Ever since then, there have been many efforts to create better radio telescopes, both Earth- and satellite-based, that would provide more precise data and therefore more information about the early universe. In the early 1990s the COBE (COsmic Background Explorer) satellite's measurements of the small anisotropies in the CMB were considered so important that two of COBE's principal investigators, George Smoot and John Mather, were awarded the 2006 Nobel Prize. The state-of-the-art data today comes from the WMAP (Wilkinson Microwave Anisotropy Probe) satellite and there is already another satellite that has been taking data since 2009. This satellite, called Planck, has a higher sensitivity and better angular resolution than WMAP, and its improved map of the CMB is expected to have been completed by the end of 2012. The CMB is an almost perfect black body and its temperature has been measured to be 2.7 degrees Kelvin. As one can see in the figure below, the black body curve of the CMB is perfect: all of the data points lie right on the best fit curve. It is possible to measure the CMB's temperature in every direction and it has been found that
it is the same in every direction (what is called isotropic) up to one part in 100 000.
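The numbers just quoted are easy to check against the standard black body formulas: Wien's displacement law puts the peak emission of a 2.7 K black body at a wavelength of about a millimetre, squarely in the microwave band (hence the name), and 'one part in 100 000' of 2.7 K corresponds to temperature fluctuations of a few tens of microkelvin. The short Python check below is just this back-of-the-envelope arithmetic, added here for illustration.

# Back-of-the-envelope check of the CMB numbers quoted in the text.
T_cmb = 2.725        # CMB temperature, kelvin
wien_b = 2.898e-3    # Wien's displacement constant, metre * kelvin

peak_wavelength = wien_b / T_cmb    # wavelength of maximum emission
delta_T = T_cmb / 1e5               # "one part in 100 000"

print(f"peak wavelength : {peak_wavelength * 1e3:.2f} mm (microwaves)")
print(f"anisotropy scale: {delta_T * 1e6:.0f} microkelvin")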

Even though the anisotropies are very small, they have been measured quite precisely by the WMAP satellite, and one can study how the variations in temperature are correlated by their angular separation in the sky. This gives the power spectrum of the CMB, where once again theory and experiment agree quite nicely:

In order to obtain a theoretical prediction about what the power spectrum should look like, it is important to understand how the universe behaved before recombination occurred. This is where inflation, an important part of the standard cosmological model, comes in. The inflationary epoch occurs very soon after the universe leaves the Planck regime (where quantum gravity effects may be important), and during this period the universe's volume increases by a factor of approximately e^(210) in the very short time of about 10^(-33) seconds (for more information about inflation, see the previous blog post). Inflation was first suggested as a mechanism that could explain why (among other things) the CMB is so isotropic: one effect of the universe's rapid expansion during inflation is that the entire visible universe today occupied a small volume before the beginning of inflation and had had time to enter into causal contact and thermalize. This pre-inflation thermalization can explain why, when we look at the universe today, the temperature of the cosmic microwave background is the same in all directions. But this is not all: using inflation, it is also possible to predict the form of the power spectrum. This was done well before it was measured sufficiently precisely by WMAP and, as one can see in the graph above, the observations match the prediction very well! Thus, even though inflation was introduced to explain why the CMB's temperature is so isotropic, it also explains the observed power spectrum. It is especially this second success that has ensured that inflation is part of the standard cosmological model today. However, there remain some issues that have not been resolved in inflation. For example, at the beginning of inflation, it is assumed that the quantum fields are in a particular state called the

Bunch-Davies vacuum. It is then possible to evolve this initial state in order to determine the state of the quantum fields at the end of inflation and hence determine what the power spectrum should look like. Even though the predicted power spectrum agrees extremely well with the observed one, it is not entirely clear why the quantum fields should be in the Bunch-Davies vacuum at the onset of inflation. In order to explain this, we must try to understand what happened before inflation started and this requires a theory of quantum gravity. Loop quantum cosmology is one particular quantum gravity theory which is applied to the study of our universe. There have been many interesting results in the field over the last few years: (i) it has been shown in many models that the initial big bang singularity is replaced by a bounce where the universe contracts to a minimal volume and then begins to expand again; (ii) it has also been shown that quantum gravity effects only become important when the space-time curvature reaches the Planck regime, therefore classical general relativity is an excellent approximation when the curvature is less than the Planck scale; (iii) the dynamics of the universe around the bounce point are such that inflation occurs naturally once the space-time curvature leaves the Planck regime. The goal of the project presented in the ILQGS talk is to use loop quantum cosmology in order to determine whether or not the quantum fields should be in the Bunch-Davies vacuum at the onset of inflation. More specifically, the idea is to choose carefully the initial conditions at the bounce point of the universe (rather than at the beginning of inflation) and then study how the state of the fields evolves as the universe expands in order to determine their state at the beginning of inflation. Now one can ask: Does this change the

state of the quantum fields at the onset of inflation? And if so, are there any observational consequences? These are the questions addressed in this presentation.


Inflation and observational constraints on loop quantum cosmology


February 21, 2011 by Ivan Agulló, Penn State Gianluca Calcagni, Inflationary observables and observational constraints on loop quantum cosmology January 18th 2011. PDF of the slides, and audio in either .wav (40MB) or .aif format (4MB).

Cosmology has achieved remarkable progress in recent years. The proliferation of meticulous observations of the Universe in the last two decades has produced a significant advance in our understanding of its large scale structure and of its different stages of evolution. The precision attained in the observations already made, and in the observations expected in the near future, is such that the Cosmos itself constitutes a promising "laboratory" to obtain information about fundamental physics. The obvious drawback is that, unlike what happens in common laboratories on earth, such as the famous Large Hadron Collider at CERN, our Universe has to be observed as it is, and we are not allowed to select and modify the parameters of the "experiment". The seminar given by Gianluca Calcagni, based on results obtained in collaboration with M.
Bojowald and S. Tsujikawa, shows an analysis of the effects that certain aspects of quantum gravity, in the context of Loop Quantum Cosmology, could produce on observations of the present Universe. One of the important pieces of the current understanding of the Universe is the mechanism of inflation. The concept of inflation generically makes reference to the enormous increase of some quantity in a short period of time. It is a concept that is surely familiar to everyone, because we all have experienced an epoch when, for instance, house prices have experienced an alarming growth in a brief period. In the case of cosmological inflation the quantity increasing with time is the size of the Universe. Inflation considers that there was a period in the early Universe in which its volume increased by a factor of around 10^78 (a 1 followed by 78 zeros!) in a period of time of about 10^-36 seconds. There is no doubt that, fortunately, this inflationary process is more violent than anything that we experience in our everyday life. The inflationary mechanism in cosmology was introduced by Alan Guth in 1981, and was formulated in its modern version by A. Linde, and by A. Albrecht and P. Steinhardt, a year later. Guth realized that a huge expansion would make the Universe look almost flat and highly homogeneous for its inhabitants, without requiring it to have been like that since its origin. This is consistent with observations of our Universe, which looks highly homogeneous on large scales; and we know, by observing the cosmic microwave background (CMB), that it was even more homogeneous in the past. The CMB constitutes a "photograph" of the Universe when it was 380,000 years old (a teenager compared with its current age of about 14,000,000,000 years) and shows that the inhomogeneities existing at that time consisted of tiny variations (of 1 part in 100,000) of its temperature.
These small fluctuations are the origin of the inhomogeneities that we observe today in the form of galaxy clusters, galaxies, stars, etc.
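The expansion factors quoted for inflation are often restated as a number of 'e-folds', the number N such that every length is stretched by e^N. Since volume grows as the cube of length, a volume factor of 10^78 corresponds to roughly 60 e-folds, and the factor of e^210 in volume quoted in the previous post corresponds to 70; both are of the order of the 60 or so e-folds usually invoked in inflationary models. The check below is simple arithmetic, added here for illustration.

import math

# Number of e-folds of linear expansion implied by a given growth in volume:
# lengths grow by e**N, so volume grows by e**(3N) and N = ln(volume factor)/3.
def efolds_from_volume_growth(log_volume_factor):
    """log_volume_factor is the natural log of the factor by which volume grew."""
    return log_volume_factor / 3.0

print("volume factor 10^78:", round(efolds_from_volume_growth(78 * math.log(10)), 1), "e-folds")
print("volume factor e^210:", round(efolds_from_volume_growth(210.0), 1), "e-folds")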

However, a few years after the proposal of inflation, several researchers (Mukhanov, Hawking, Guth, Pi, Starobinsky, Bardeen, Steinhardt and Turner) realized that inflation may provide something more valuable than a way of understanding the homogeneity and flatness of the observable Universe. Inflation provides a natural mechanism for generating the small inhomogeneities we observe in the CMB through one of the most attractive ideas in contemporary physics. A violent expansion of the Universe is able to amplify quantum vacuum fluctuations (that exist as a consequence of Heisenberg's uncertainty principle), and spontaneously produce tiny density fluctuations. In that way, General Relativity, via the exponential expansion of the Universe, and Quantum Mechanics, via the uncertainty principle, appear together (but not scrambled) in the game to generate a spectrum of primordial cosmic inhomogeneities out of the vacuum. This spectrum constitutes the "seed" that, by the effect of gravity, will later evolve into the temperature fluctuations of the CMB and then into the structures that we observe today in our Universe, from galaxy clusters to ourselves. The statistical properties of the temperature distribution of the CMB, analyzed in great detail by the WMAP satellite in the
last decade, confirm that this picture of the genesis of our Universe is compatible with observations. Although there is not enough data to confirm the existence of an inflationary phase in the early stages of the Universe, inflation is one of the best candidates in the landscape of contemporary physics to explain the origin of the cosmic structures.

The mechanism of inflation opens a window of interesting possibilities for both theoretical and experimental physicists. The increase of the size of the Universe during inflation is such that the radius of an atom would expand to reach a distance that light would take a year to travel. This means that, if there was a period of inflation in the past, by observing the details of small temperature fluctuations in the CMB, we are actually getting information about physical processes that took place at extremely small distances or, equivalently, at very large energies. In particular, inflation predicts that the CMB temperature variations are generated from quantum vacuum fluctuations when the energy density of the Universe was about 10^-11 times the Planck density (the Planck density corresponds to about 5×10^113 joules per cubic meter). The energy density that the Large Hadron Collider at CERN is going to reach is
around 10^-60 times the Planck density, that is, 49 orders of magnitude smaller! In this sense, the observation of our own Universe is a promising way to reveal information about physical theories that only become apparent at very high energies, as is the case for quantum gravity. This fact has motivated experts in quantum gravity to focus their research on the predictions of potential observable signatures of their theories in cosmology, as shown in this seminar by Gianluca Calcagni. Gianluca Calcagni shows in this seminar an analysis of the effects that certain aspects of quantum gravity, from the perspective of Loop Quantum Cosmology (LQC), could produce on the generation of primordial fluctuations during inflation. Generally, LQC introduces two types of corrections to the equations describing the evolution of cosmic inhomogeneities: the so-called holonomy corrections and inverse volume corrections. This seminar focuses on the latter type, leaving the former for future work. Gianluca shows that it is possible to obtain the equations describing cosmic inhomogeneities during the inflationary epoch including inverse volume effects in LQC consistently (in the case that these corrections are small). With these equations it is then possible to recalculate the properties of the temperature distribution of the CMB including such LQC corrections. Generically, the effects that quantum aspects of gravity introduce into the classical equations are relevant when the scale of energy density is close to the Planck scale, and quickly disappear at lower scales. One would expect, therefore, that at the energy density at which inflation occurs, which is eleven orders of magnitude below the Planck scale, the effects of quantum gravity would be suppressed by a factor of 10^-11. The authors of this work argue that, surprisingly, this is not the case, and inverse volume
corrections may be larger. In summary, the conclusions of this seminar suggest that even though the effects that quantum aspects of gravity have on the CMB are small, cosmological observations can put upper bounds on the magnitude of the corrections coming from quantum gravity, bounds that may lie closer to the theoretically expected size of these corrections than one would naively anticipate. Although there is still a lot of work to do, the observation of our Universe via the cosmic microwave background and the distribution of galaxies is shown to be a promising way to obtain information about physical processes where the relationship of Quantum Mechanics and General Relativity plays a major role. In this sense, it may be the Universe itself which gives us a helping hand toward the understanding of the fundamental laws of physics.


Loop quantization of a parameterized field theory


January 22, 2011 by Rodolfo Gambini and Jorge Pullin Madhavan Varadarajan, Loop quantum gravity dynamics: insights from parameterized field theory, 16 November 2010. PDF of the slides, and audio in either .wav (82MB) or .aif format (4MB). Based on joint work with Alok Laddha.

In many instances physical theories are formulated in such a way that the mathematical quantities used to describe the physics are not the bare minimum necessary. One is using redundant variables. This can be either for convenience, to make better contact with easily measurable things, or, as is the case in general relativity, because we do not know how to isolate the essential variables. When one has redundant variables, symmetries appear in the theory: apparently different values of the variables represent the same physics. There also are equations relating the values of the variables, called constraints. Constraints have two purposes: first, they tell us that the variables are redundant and in reality there are fewer "free" variables than one thinks. Second, the constraints
generate transformations among the variables that tell us that what appear to be two different physical configurations are in fact the same physically. Configurations that are physically equivalent through these transformations are called "gauge equivalent". When one has more than one constraint there exist consistency conditions that need to be satisfied, which say that the free variables are the same no matter in which order I generate transformations with the constraints. So for instance I could start from a given configuration and transform it using constraint number 1 and then number 2, or vice-versa. Mathematically the consistency conditions state that the difference of the action of two constraints in the two orders is either zero or is proportional to a constraint. When this occurs one talks of the constraints "closing an algebra". When one proceeds to quantize theories, the constraints have to be promoted to quantum operators. The procedure has a significant degree of ambiguity, particularly for complex field theories like general relativity. One does have some guiding tools. For instance, the quantum constraints should also "close an algebra" like their classical counterparts. It is expected that achieving such a requirement will cut down on the ambiguities of the quantization process and offer guidance when building the quantum theory. The main symmetry of the general theory of relativity is called "diffeomorphism invariance". This symmetry states that a priori all points of space-time are equivalent and can be dragged into each other. This is a very natural notion in an empty universe. Say you were lost in a ship in the middle of the ocean on a very calm day, so far away from the coast that you cannot see it, and that the sky is overcast and it is night. You could not tell where your ship is, or one point of the ocean from another; they would all look the same to you.
The same is true in any empty universe. To start to distinguish one point from another you need to introduce objects in your universe, for instance fields. Then you can identify a point by knowing the value of the field at that point. Your theory will still be diffeomorphism invariant if, when you drag a point into another, you also drag the corresponding value of the field.

Ordinary non-gravitational physics is not formulated in a diffeomorphism invariant way. Suppose your physics problem was to find your way from Florida Blvd and N. 22nd. St. to the entrance of Progress Park in Baton Rouge, Louisiana. Suppose you had "solved" your problem by buying a GPS unit. Suppose now that an earthquake takes place that does not cause much destruction but deforms the street grid as the figure shows. Your GPS is now useless as the maps it has were those prior to the earthquake. But suppose you had solved your problem by asking for directions. So you have written on a piece of paper "walk North on 22nd street, turn right on North St., turn left on N. 30th. St.", etc. Such a "solution" would still allow you to reach your
destination. The reason it is still valid is that the earthquake will have moved around your trajectory and also your reference points for the turns. The end result is therefore invariant.

Similarly ordinary quantum field theories can be reformulated in such a way that they are invariant under diffeomorphisms. Basically one uses additional physical fields to label the points of space time such that when a diffeomorphism takes place the points of space and the fields one is studying move in unison, just like your directions and your route moved in unison providing an invariant final result. When one formulates them that way they are called "parameterized field theories" because the coordinates are not fixed anymore but are parameters you can vary. This is what Madhavan Varadarajan in collaboration with Alok Laddha considered in this talk. They study a field theory in a flat 2 dimensional space-time to simplify things and formulate it in a diffeomorphism invariant fashion (in the jargon this is called 1+1 dimensional to indicate one dimension is space and one is time). This is a subject with a long history. But what is new is that they use the techniques of loop quantum gravity to treat the problem, which in turn solves some problems that were encountered in previous

attempts to treat the problem with conventional quantum field theory techniques. They find that the theory has constraints that need to be promoted to quantum operators and that they close an algebra that is very similar to the one that arises in the case of general relativity in four space-time dimensions. The advantage is that the 1+1 model is much simpler to deal with, and they can complete the process of promoting the constraints to operators and check that they are consistent (they "close an algebra"). This in turn leads to valuable insights into how one would have to proceed in the case of general relativity, in particular which versions of the constraints to use and which spaces of quantum states to use.
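To give the recurring phrase "closing an algebra" a minimal concrete illustration: for the familiar symmetry of rotations, the commutator of any two generators is again a generator, so the set closes on itself. The numpy sketch below checks this for the three rotation generators; it is a toy example added here, not the constraint algebra of the parameterized field theory itself.

import numpy as np

# Generators of rotations about the x, y and z axes (3x3 antisymmetric matrices).
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def commutator(a, b):
    return a @ b - b @ a

# "Closing an algebra": each commutator lands back inside the set of generators.
print(np.allclose(commutator(Jx, Jy), Jz))  # True
print(np.allclose(commutator(Jy, Jz), Jx))  # True
print(np.allclose(commutator(Jz, Jx), Jy))  # True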


Quantum evaporation of 2-d black holes


September 29, 2010 by John Baez Abhay Ashtekar, Quantum evaporation of 2-d black holes, 21 September 2010. PDF of the slides, and audio in either .wav (45MB) or .aif format (4MB).

Abhay Ashtekar has long been one of the leaders of loop quantum gravity. Einstein described gravity using a revolutionary theory called general relativity. In the mid-1980s, Ashtekar discovered a way to reformulate the equations of general relativity in a way that brings out their similarity to the equations describing the other forces of nature. Gravity has always been the odd man out, so this was very intriguing. Shortly thereafter, Carlo Rovelli and Lee Smolin used this new formulation to tackle the problem of quantizing gravity: that is,

combining general relativity with the insights from quantum mechanics. The result is called "loop quantum gravity" because in an early version it suggested that at tiny distance scales, the geometry of space was not smooth, but made of little knotted or intersecting loops. Later work suggested a network-like structure, and still later time was brought into the game. The whole story is still very tentative and controversial, but it's quite a fascinating business. Maybe this movie will give you a rough idea of the images that flicker through people's minds when they think about this stuff:

[embedded video]

... though personally I hear much cooler music in my head. Now, one of the goals of any theory of quantum gravity must be to resolve certain puzzles that arise in naive attempts to blend general relativity and quantum mechanics. And one of the most famous is the so-called black hole information paradox. (I don't think it's actually a "paradox", but that's what people usually call it.) The problem began when Hawking showed, by a theoretical calculation, that black holes aren't exactly black. In fact he showed how to compute the temperature of a black hole, and found that it's not zero. Anything whose temperature is above absolute zero will radiate light: visible light if it's hot enough, infrared if it's cooler,

microwaves if it's even cooler, and so on. So, black holes must 'glow' slightly. Very slightly. The black holes that astronomers have actually detected, formed by collapsing stars, would have a ridiculously low temperature: for example, about 0.00000002 degrees Kelvin for a black hole that's 3 times the mass of our Sun. So, nobody has actually seen the radiation from a black hole. But Hawking's calculations say that the smaller a black hole is, the hotter it is! Its temperature is inversely proportional to its mass. So, in principle, if we wait long enough, and keep stuff from falling into our black hole, it will 'evaporate'. In other words: it will gradually radiate away energy, and thus lose mass (since E = mc^2), and thus get hotter, and thus radiate more energy, and so on, in a vicious feedback loop. In the end, it will disappear in a big blast of gamma rays! At least that's what Hawking's calculations say. These calculations were not based on a full-fledged theory of quantum gravity, so they're probably just approximately correct. This may be the way out of the "black hole information paradox". But what's the paradox? Patience: I'm gradually leading up to it. First, you need to know that in all the usual physical processes we see, information is conserved. If you've studied physics you've probably heard that various important quantities don't change with time: they're "conserved". You've probably heard about conservation of energy, and momentum, and angular momentum
and electric charge. But conservation of information is equally fundamental, or perhaps even more so: it says that if you know everything about what's going on now, you can figure out everything about what's going on later, and vice versa, too! Actually, if you've studied physics a little but not too much, you may find my remarks puzzling. If so, don't feel bad! Conservation of information is usually not mentioned in the courses that introduce the other conservation laws. The concept of information is fundamental to thermodynamics, but it appears in disguised form: "entropy". There's a minus sign lurking around here: while information is a precise measure of how much you do know, "entropy" measures how much you don't know. And to add to the confusion, the first thing they tell you about entropy is that it's not conserved. Indeed, the Second Law of Thermodynamics says that the entropy of a closed system tends to increase! But after a few years of hard thinking and heated late-night arguments with your physics pals, it starts to make sense. Entropy as considered in thermodynamics is a measure of how much information you lack about a system when you only know certain things about it: things that are easily measured. For example, if you have a box of gas, you might measure its volume and energy. You'd still be ignorant about the positions of all the molecules inside. The amount of information you lack is the entropy of the gas. And as time passes, information tends to pass from easily measured forms to less easily measured forms, so people say entropy increases. But the information is still there in principle; it's
just hard to access. So information is conserved. There's a lot more to say here. For example: why does information tend to pass from easily measured forms to less easily measured forms, instead of the reverse? Does thermodynamics require a fundamental difference between future and past, a so-called "arrow of time"? Alas, I have to sidestep this question, because I'm supposed to be telling you about the black hole information paradox. So: back to black holes! Suppose you drop an encyclopedia into a black hole. The information in the encyclopedia seems to be gone. At the very least, it's extremely hard to access! So, people say the entropy has increased. But could the information still be there in hidden form? Hawking's original calculations suggested the answer is no. Why? Because they said that as the black hole radiates and shrinks away, the radiation it emits contains no information about the encyclopedia you threw in; or at least, no information except a few basic things like its energy, momentum, angular momentum and electric charge. So no matter how clever you are, you can't examine this radiation and use it to reconstruct the encyclopedia article on, say, Aardvarks. This information is lost to the world forever! So what's the black hole information paradox? Well, it's not exactly a "paradox". The problem is just that in every other process known to physicists, information is conserved, so it seems very
unpalatable to allow any exception to this rule. But if you try to figure out a way to save information conservation in the case of black holes, it's tough. Tough enough, in fact, to have bothered many smart physicists for decades. Indeed, Stephen Hawking and the physicist John Preskill made a famous bet about this puzzle in 1997. Hawking bet that information wasn't conserved; Preskill bet it was. In fact, they bet an encyclopedia!

In 2004 Hawking conceded the bet to Preskill, as shown above. It happened at a conference in Dublin; I was there and blogged about it. Hawking conceded because he did some new calculations suggesting that information can gradually leak out of the black hole, thanks to the radiation. In other words: if you throw an encyclopedia in a black hole, a sufficiently clever physicist can indeed reconstruct the article on Aardvarks by carefully examining the radiation from the black hole. It would be incredibly hard, since the information
would be highly scrambled. But it could be done in principle. Unfortunately, Hawking's calculation is very hand-wavy at certain crucial steps; in fact, more hand-wavy than certain calculations that had already been done with the help of string theory (or more precisely, the AdS-CFT conjecture). And neither approach makes it easy to see in detail how the information comes out in the radiation. This finally brings us to Ashtekar's talk. Despite what you might guess from my warmup, his talk was not about loop quantum gravity. Certainly everyone working on loop quantum gravity would love to see this theory resolve the black hole information paradox. I'm sure Ashtekar is aiming in that direction. But his talk was about a warmup problem, a "toy model" involving black holes in 2d spacetime instead of our real-world 4-dimensional spacetime. The advantage of 2d spacetime is that the math becomes a lot easier there. There's been a lot of work on black holes in 2d spacetime, and Ashtekar is presenting some new work on an existing model, the Callan-Giddings-Harvey-Strominger black hole. This new work is a mixture of analytical and numerical calculations done over the last 2 years by Ashtekar together with Frans Pretorius, Fethi Ramazanoglu, Victor Taveras and Madhavan Varadarajan. I will not attempt to explain this work in detail! The main point is this: all the information that goes into the black hole leaks back out in the form of radiation as the black hole evaporates.
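For readers who want the numbers behind this evaporation story: in the ordinary four-dimensional case the Hawking temperature is T = hbar c^3 / (8 pi G M k_B), inversely proportional to the mass, and for a black hole of three solar masses it comes out at about the 0.00000002 kelvin quoted earlier. The Python check below uses this textbook formula; it refers to the familiar 4d case, not to the 2d Callan-Giddings-Harvey-Strominger model discussed in the talk.

import math

# Physical constants in SI units.
hbar = 1.0546e-34    # reduced Planck constant, J*s
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature (kelvin) of a black hole of the given mass."""
    return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

print(f"3 solar masses   : {hawking_temperature(3.0 * M_sun):.1e} K")
# Temperature is inversely proportional to mass: halving the mass doubles it.
print(f"1.5 solar masses : {hawking_temperature(1.5 * M_sun):.1e} K")
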

But the talk also covers many other interesting issues. For example, the final stages of black hole evaporation display interesting properties that are independent of the details of its initial state. Physicists call this sort of phenomenon "universality". Furthermore, when the black hole finally shrinks to nothing, it sends out a pulse of gravitational radiation, but not enough to destroy the universe. It may seem very peculiar to imagine that the death spasms of a black hole could destroy the universe, but in fact some approximate "semiclassical" calculations of Hawking and Stewart suggested just that! They found that the dying black hole emitted a pulse of infinite spacetime curvature, dubbed a "thunderbolt", which made it impossible to continue spacetime beyond that point. But they suggested that a more precise calculation, taking quantum gravity fully into account, would eliminate this effect. And this seems to be the case. For more, listen to Ashtekar's talk while looking at the PDF file of his slides!
