Today, in our Mathematical Quantum Mechanics lecture, I put my foot in my mouth: I claimed that under very general circumstances there cannot be spontaneous symmetry breaking in quantum mechanics. Unfortunately, there is an easy counterexample:

Take a nucleus with charge Z and add five electrons. Assume for simplicity that there is no Coulomb interaction between the electrons, only between the electrons and the nucleus (this is not essential; you can instead take the large-Z limit as explained in this paper by Friesecke). The only way the electrons see each other is via the Pauli exclusion principle. The Hamiltonian for this system has an obvious SO(3) rotational symmetry. The ground state, however, is what chemists would call 1s² 2s² 2p¹. That is, there is one electron in a p-orbital, and in fact this state is six-fold degenerate (including spin). In that six-dimensional eigenspace of the Hamiltonian there is in fact no rotationally invariant state at all (it decomposes into total angular momenta j=1/2 and j=3/2, neither of which contains a singlet). Thus, the SO(3) symmetry here is spontaneously broken.
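To see the symmetry breaking concretely, here is a small numerical sketch (my own illustration, not from any reference; it assumes only the standard angular-momentum matrices for l = 1): rotating one of the degenerate p-states by 90° about the y-axis produces an orthogonal state, so no individual p-orbital is rotationally invariant.

```python
import numpy as np
from scipy.linalg import expm

# Angular momentum matrix L_y for l = 1 in the basis |m=-1>, |m=0>, |m=+1>
# (hbar = 1, standard ladder-operator construction, L_y = (L+ - L-)/2i).
Lp = np.zeros((3, 3))
Lp[1, 0] = Lp[2, 1] = np.sqrt(2.0)   # L+ raises m by one unit
Ly = (Lp - Lp.T) / 2j

# A p_z-like state (m = 0) from the degenerate ground space.
psi = np.array([0.0, 1.0, 0.0], dtype=complex)

# Rotate by 90 degrees about the y-axis: R = exp(-i * beta * L_y).
psi_rot = expm(-1j * (np.pi / 2) * Ly) @ psi

# The overlap <psi|R|psi> is the Wigner function d^1_00(pi/2) = cos(pi/2) = 0:
print(abs(np.vdot(psi, psi_rot)))   # ≈ 0.0 — the state is not invariant
```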

This is in stark contrast to the folk theorem that for spontaneous symmetry breaking you need at least 2+1 non-compact dimensions. This is discussed, for example, by Witten in lecture 1 of his IAS lectures, and is even stated in Wikipedia.

Witten argues using the Stone–von Neumann theorem on the uniqueness of the representation of the Weyl algebra (the argument is too short for me) and explicitly only discusses a particle on the real line in a potential (the famous double-well potential, where the statement is true). In 1+1 dimensions, there is the argument due to Coleman that in the case of symmetry breaking you would have a Goldstone boson. But free massless bosons do not exist in 1+1d, since the 2-point function would be a logarithm, which is in conflict with positivity.
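To spell out Coleman's point: in two Euclidean dimensions the massless free propagator is logarithmic (the normalisation of the infrared scale $\mu$ below is a convention),

```latex
\langle \phi(x)\,\phi(0) \rangle
  \;=\; \int \frac{d^2k}{(2\pi)^2}\,\frac{e^{ik\cdot x}}{k^2}
  \;=\; -\frac{1}{4\pi}\,\log\!\left(\mu^2 x^2\right) + \text{const},
```

which is unbounded below at large separation and hence incompatible with reflection positivity; there is no sensible massless free scalar, and so no Goldstone boson, in 1+1 dimensions.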

I talked to a number of people, and they all agreed that they thought "in low dimensions (QM being 0+1d QFT), quantum fluctuations are so strong that they destroy any symmetry breaking". Unfortunately, I could not get hold of Prof. Wagner (of Mermin–Wagner fame), but maybe you, my dear reader, have some insight into what the true theorem is?

## Wednesday, December 10, 2008

## Monday, December 08, 2008

### Spectral Action Part II

In part I, I explained how to go back and forth between spaces with metric information and (C*-)algebras. It is now time to add some physics: let us explore how to formulate an action principle that leads to equations of motion when extremised.

Let me stress, however, that although it may look different, this is entirely classical physics; there is no quantum physics to be found anywhere here, even though the possibly non-commutative algebras might remind you of quantum mechanics. That, however, is only a way of describing strange spaces and has nothing to do with the quantisation of a physical theory. We will even compute divergences of one-loop Feynman diagrams, but this is still only a technical trick to write the classical Einstein action (the integral of the Ricci scalar over the manifold) and has nothing to do with quantum gravity! Our final result will be the classical equations of motion of gravity coupled to a gauge theory and a scalar field (the Higgs)!

The trick goes as follows: last time, we saw that in order to encode metric information we had to introduce a differentiation operator, so we could formulate the requirement that a function have a gradient of length less than or equal to 1. One could have taken the gradient directly, but there was a slight advantage to taking the Dirac operator instead, since it maps spinors to spinors, whereas the gradient maps scalars to vector fields.

Another advantage of the Dirac operator is that its square gives the Laplacian plus a multiple of the Ricci scalar (which we want to integrate to obtain the Einstein–Hilbert action). In formulas, the Lichnerowicz formula reads $D^2 = \nabla^*\nabla + \frac{R}{4}$ (I had vaguely remembered the numerical factor as 12, but its precise value is not essential here). Even better, when we are in d dimensions, taking the d-th power gives us the volume element. Thus, putting these two observations together and using the Clifford algebra, we find that powers of $D$ produce the volume element plus the Ricci scalar times the volume element, among other terms. The second term is obviously what we need to integrate to obtain the EH action.

But how do we get the integration? The important observation here is that we can pretend this operator is the kinetic operator for some field. Then we can compute the one-loop divergence for that field. On the one hand, we know that the one-loop effective action is the functional trace of the log of this operator. On the other hand, we can compute the divergence of this expression either diagrammatically or, with slightly more advanced technology, in terms of the heat kernel.

I will explain the heat-kernel formalism at some other time. The result of that treatment, however, is a series of "heat-kernel coefficients" $a_{2k}$, which are scalar expressions in the curvature of mass dimension $2k$. That is, $a_0$ is 1, $a_2$ is basically the Ricci scalar, $a_4$ is a linear combination of scalar contractions of the curvature squared, and so on. All of these can be interpreted as the coefficients of a power series in a formal parameter $s$, i.e. $K(s) = \mathrm{tr}\, e^{-sD^2} \sim (4\pi s)^{-d/2} \sum_k a_{2k}\, s^k$.

The important result now is that the effective action is (up to sign and a factor of 2) the integral of $K(s)/s$ over $s$ from 0 to infinity and over all space-time points. Because of the negative powers of $s$ in the expansion above, this integral diverges at $s=0$. It turns out this divergence is nothing but the UV divergence of the one-loop diagrams. Now you can apply your favourite regularisation procedure (dimensional regularisation, Pauli–Villars, you name it). Here, for simplicity, we just use a cut-off and start our integration at $s = 1/\Lambda^2$ instead of 0. Connes and Chamseddine do something very similar: instead of a sharp cut-off, they use a function in the integral that decays exponentially as $s$ approaches 0 (NB: $s$ has mass dimension −2, so $1/\Lambda^2$ acts as a UV cut-off).
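A quick numerical check of this divergence structure (my own sketch, not from the paper; the powers $s^{-3}$, $s^{-2}$, $s^{-1}$ are the $d=4$ terms of the expansion above after multiplying by $1/s$):

```python
import numpy as np

def cutoff_integral(power, Lam):
    # ∫ s^power ds from 1/Λ² to 1: trapezoid rule on a fine log-spaced grid,
    # mimicking the proper-time integral with the UV cut-off s >= 1/Λ².
    s = np.logspace(-2.0 * np.log10(Lam), 0.0, 400001)
    f = s ** power
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))

Lam = 10.0
# a0 term, s^(-3): (Λ⁴ - 1)/2, i.e. a Λ⁴ divergence (cosmological constant)
print(cutoff_integral(-3, Lam) * 2 / Lam ** 4)
# a2 term, s^(-2): Λ² - 1, i.e. a Λ² divergence (Einstein-Hilbert term)
print(cutoff_integral(-2, Lam) / Lam ** 2)
# a4 term, s^(-1): log Λ² = 2 log Λ, a logarithmic divergence (curvature²)
print(cutoff_integral(-1, Lam) / (2.0 * np.log(Lam)))
```

All three ratios come out close to 1, confirming the $\Lambda^4$, $\Lambda^2$ and $\log\Lambda$ scalings of the three divergent terms.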

For concreteness, take $d=4$. Then the $a_0$ term leads to $\Lambda^4$ times 1 integrated over the space (i.e. a cosmological constant). The term from $a_2$ gives $\Lambda^2$ times the Einstein–Hilbert term, and $a_4$ comes with $\log\Lambda$ times an integral of curvature squared. The remaining terms are finite when the cut-off is removed. Thus, we find the Einstein action (including a huge cosmological constant) plus a curvature-squared correction as the divergence of the one-loop effective action.

But the effective action can also be written as the functional trace of the log of the operator. For this, we don't need any field of which the operator is the kinetic operator. Imagine we happen to know all the eigenvalues of $D^2$. Then we have just found that the Einstein–Hilbert action can be written as the divergent part of the sum of the logs of all those eigenvalues. And this is the spectral action principle: $S = \mathrm{Tr}\, f(D/\Lambda)$ for a suitable cut-off function $f$.
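The identity behind this step, that the functional trace of the log equals the sum of the logs of the eigenvalues, is easy to check for a finite-dimensional stand-in for $D^2$ (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
M = A @ A.T + 50.0 * np.eye(50)    # a positive-definite stand-in for D^2

# "Sum of the logs of the eigenvalues" versus the trace of the log:
tr_log = np.sum(np.log(np.linalg.eigvalsh(M)))
sign, logdet = np.linalg.slogdet(M)

print(np.isclose(tr_log, logdet))  # True: Tr log M = log det M
```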

As it happens, Connes and Chamseddine really do know the eigenvalues of the Dirac operator on spheres. So they can really do this sum (which turns out to be expressible in terms of zeta functions). The spheres are of course compact, and thus the eigenvalues are discrete. In our field-theory argument above, however, we implicitly used the usual continuous momentum-space arguments for the effective action. In the limit of large momentum (which is what is relevant for the divergence), corresponding to short distances, this should not really matter: one can pretend that momentum space is actually continuous. With a cut-off, however, this is not precisely true, and the discreteness comes in at sub-leading orders. The difference between the continuous computation and the discrete one is of course tiny for a large cut-off. And it is exactly this difference that lets the two authors find agreement to "astronomical precision" (p. 15).
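One can see this kind of precision in a toy model (my own sketch; a Gaussian cut-off function and equally spaced modes stand in for the actual Dirac spectrum on a sphere): by Poisson summation, the discrete sum differs from the continuum integral by a constant plus corrections of order $e^{-\pi^2\Lambda^2}$, far below machine precision already for moderate $\Lambda$.

```python
import numpy as np

# Toy model: discrete "momenta" n = 1, 2, ... weighted by a Gaussian
# cut-off function, versus the continuum momentum integral.
Lam = 5.0
n = np.arange(1, 200)
discrete = np.sum(np.exp(-(n / Lam) ** 2))   # sum over the discrete spectrum
continuum = Lam * np.sqrt(np.pi) / 2         # ∫_0^∞ exp(-(x/Λ)²) dx

# Poisson summation: the two differ by -1/2 plus corrections of order
# exp(-π² Λ²) ≈ 10^(-107) here — "astronomical precision" indeed.
print(discrete - continuum)                  # ≈ -0.5 up to machine precision
```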

OK, up to now we have reformulated the gravitational action in terms of the spectrum of the Dirac operator. But what about gauge interactions? Every physicist should know how to proceed: do gravity in higher dimensions and perform a Kaluza–Klein reduction. If the compact space has a symmetry group G, then besides some scalars you will find Yang–Mills theory with gauge group G.

In the non-commutative setting, one can just as well take a non-commutative space for the compact directions. Here, Connes and Chamseddine argue for a minimal example (given some conditions of unclear origin; one might be suspicious that those are tuned to give the correct result). It is minimal in the sense of irreducibility. However, the total space being the product (by definition reducible) of this compact space with a classical commutative 4d space-time (again reducible) makes the naturalness of this requirement a bit questionable.

For the concrete specification of the compact space, some Clifford magic is employed (including Bott periodicity), but this is standard material. You end up with a non-commutative description of two points for the two spinor chiralities. The symmetry then determines the gauge group. Here I am not completely sure, but it seems to me that they employ the usual representation-theory arithmetic from GUTs to sort all standard-model particles into nice representations.

The non-commutative formulation allows the KK gauge boson A_mu to also have a leg in the compact direction, between the two points. From the 4d perspective this is of course a scalar, and this will be the Higgs. The two points are a finite distance apart (see the Landi notes for details), which gives a mass term inversely proportional to the distance (opposite to a superficially similar D-brane construction, as noted by Michael Douglas some years ago).

That's it. We have an algebraic formulation of the classical action for the standard model. Let's recap what went in: the NCG version of a space with metric information in terms of a Dirac operator; some heat-kernel technology to write the gravitational action in terms of eigenvalues; GUT-type representation theory; and KK theory. What kind of 'surprises' were found? GUT-type relations are rediscovered, and the discrete spectrum of the Dirac operator on spheres is well approximated by continuous momentum space for momenta large compared to the inverse radius. And there is a final observation, detailed in appendix A of the paper: there are some cancellations of unclear origin which make one of the heat-kernel coefficients vanish. This, however, is a statement about a heat-kernel coefficient and does not a priori have any connection with the non-commutative approach. Furthermore, the physical implications are left in the dark.


## Friday, December 05, 2008

### KK description of Black Holes?

Still no second part of the spectral action post. Instead, a little puzzle that came up over coffee yesterday: what is the Kaluza–Klein description of a black hole?

To be more explicit: take pure gravity on R^4xK for compact K, and imagine that K is large (some parsecs in diameter, say). Then you could imagine having something that looks like a black hole in this total space-time. What is its four-dimensional description in KK theory?

By KK theory I mean the 4d theory with an infinite number of fields; I want to include the whole KK tower. This theory should be equivalent to the higher-dimensional one, since the two are related by a (generalised) Fourier transform on K. One might worry that a black hole is so singular that this Fourier transform has problems, does not converge, or some such. But if that is your worry, take a black hole that is not eternal but is formed, say, by the collision of gravitational waves. In the past, those waves came in from infinity, and if you go back sufficiently far in time, all fields are weak. This weak-field configuration should have no problem being described in KK language, and then the evolution can be done from the 4d perspective. What would the 4d observer see when the black hole forms in higher dimensions?

The question I would be most interested in is whether there is always a black hole in terms of the 4d gravity, or whether the 4d gravity can remain weak and all the action can be in the other fields.

One scenario I could imagine is as follows: the 4d theory has, besides the metric, some gauge fields and some dilatons. If the black hole is well localised in K, then many higher Fourier modes on K will participate. From the 4d perspective, the KK momentum is the charge under the gauge fields, and its unit depends on the dilaton. So could it be that there is a gauge-theory black hole, i.e. a charged configuration confined to a small region of space-time where the coupling is strong, with all the causality implications of black holes in gravity?


## Tuesday, December 02, 2008

### Spectral Actions Imprecisely

A post from Lubos triggered me to write a post on non-commutative geometry models for gravity plus the standard model, as promoted for example by Chamseddine and Connes in a paper out today.

Before I start, I would like to point out that I have not studied this paper in any detail but have only read over it quickly. Therefore, there are probably a number of misunderstandings on my side, and you should read this as a report of my thoughts when scanning over the paper rather than as a fair representation of the work of Chamseddine and Connes.

Most of what I know about Connes' version of non-commutative geometry (rather than the *-product stuff which has only a small overlap with this) I know from the excellent lecture notes by Landi. If you want to know more about non-commutative geometry beyond the *-product this is a must read (except maybe for the parts on POSETs which are a hobby horse of the author and which can be safely ignored).

But enough of these preliminary remarks. Let's try to understand the spectral action principle!

Every child in kindergarten knows that if you have a compact space you get a commutative C*-algebra for free: you just take the continuous functions and add, multiply and complex-conjugate them point-wise. As the norm you can take the supremum (= maximum) norm (here the compactness helps). This is what is presented in the introductory section of every talk on non-commutative geometry, but unfortunately it is completely trivial.

The non-trivial part (due to Gelfand and Naimark) is that it also works the other way round: given a unital commutative C*-algebra, one can construct a compact space such that this C*-algebra is the algebra of continuous functions on it. Furthermore, if one obtained the algebra from the functions on a space, the new space is homeomorphic to (i.e. the same as) the original one.

How can this work? Of course, anybody with some knowledge of algebraic geometry (a similar endeavour, except that there one deals with polynomials rather than continuous functions) knows how: first, we have to recover the space as a set of points. Let's assume that we started from a space and we know what the points are. Then for each point x we get a map from the functions on the space to the numbers: we simply map f to f(x). A short reflection reveals that this is in fact a representation of the algebra which is one-dimensional and thus irreducible. It turns out that all irreducible representations of the algebra are of this form. Thus, we can identify the points of the space with the irreducible representations of the algebra.
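As a toy illustration of "points = one-dimensional representations" (my own sketch; functions on a 3-point space are modelled as numpy vectors with point-wise multiplication): evaluation at a point is multiplicative, while a generic linear functional is not.

```python
import numpy as np

# Functions on a 3-point space, modelled as vectors with point-wise product.
f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])

# Evaluation at a point x, f -> f[x], is linear, unital and multiplicative:
# a one-dimensional representation ("character") of the algebra.
for x in range(3):
    assert np.isclose((f * g)[x], f[x] * g[x])

# A generic linear functional, e.g. the average, is NOT multiplicative,
# so it does not correspond to any point of the space.
avg = lambda h: h.mean()
print(np.isclose(avg(f * g), avg(f) * avg(g)))   # False: 32/3 vs 2*5
```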

I could have told an equivalent story in terms of maximal ideals, which arise as the kernels of the above maps, i.e. the ideals of functions that vanish at x.

Next, we have to turn this set into a topological space. One way to do this is to come up with a collection of all open or all closed sets. In this case, however, it is simpler to define the topology in terms of a closure map, that is, a map that sends a set of points to the closure of this set. Such a map has to obey a number of obvious properties (for example, applying the closure a second time doesn't change anything, and a set is always contained in its closure). In order to find this map, we use the fact that if a continuous function vanishes on a set of points, then by continuity it also vanishes on the limit points of that set, that is, on its closure. Therefore, I can define the closure of a set A of points as the common vanishing set of all continuous functions that vanish on A. This definition then has an obvious reformulation in terms of irreducible representations instead of points. Think about it as homework!

Now that we have a topological space, we want to endow it with a metric structure. But instead of giving a symmetric second-rank tensor, we specify a measure for the distance between two points x and y, as this is closer to our formalism so far. How can we do this in terms of functions?

The trick is now to introduce a derivative. This allows us to restrict attention to functions which are differentiable and whose gradient is nowhere greater than 1, i.e. the supremum norm of the gradient is bounded by one (this is where the metric enters implicitly, since we use it to compute the length of the gradient). Amongst all such functions f we maximise l = |f(x) − f(y)|. Now take the shortest path between x and y (a geodesic). Since the derivative of f along this path is bounded by one, l cannot be bigger than the distance between x and y. And taking the supremum over all admissible f, we find that l becomes exactly the distance.
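This supremum characterisation can be tried out on a discrete toy space (my own sketch; the gradient bound is replaced by the constraint |f(i) − f(j)| ≤ length(i,j) on each edge, and the resulting linear program is solved with scipy):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-point "space": a path graph 0 — 1 — 2, each edge of length 1.
# dist(x, y) = sup |f(x) - f(y)| over all f with |f(i) - f(j)| <= length(i, j).
edges = [(0, 1, 1.0), (1, 2, 1.0)]

A_ub, b_ub = [], []
for i, j, d in edges:
    row = np.zeros(3)
    row[i], row[j] = 1.0, -1.0
    A_ub += [row, -row]          # f_i - f_j <= d  and  f_j - f_i <= d
    b_ub += [d, d]

# Maximise f(0) - f(2); pin the irrelevant additive constant by f(2) = 0.
res = linprog(c=[-1.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0.0, 0.0)])
print(-res.fun)   # 2.0 — the sup recovers the geodesic (shortest-path) distance
```

The optimum is attained by the "distance function" f(i) = dist(i, 2), exactly as in the geodesic argument above.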

Again, I leave it as homework to reformulate this construction in the algebraic setting, in terms of irreducible representations and the distance between them. There is only a slight technical complication: we started with the algebra of scalar functions on the manifold, but the gradient maps these to vector fields, which are sections of a different bundle. This makes life a bit more complicated. Connes' solution is to forget about scalar functions and take spinors (sections of a spinor bundle, to be precise) instead. If those exist, all the previous constructions work equally well. But now we can use the Dirac operator, which maps spinors to spinors (with another slight complication for Weyl spinors in even dimensions).

In the algebraic setting, the Dirac operator D is just some abstract linear (unbounded) operator which fulfils a number of properties listed on p. 2 of the Chamseddine–Connes paper, and once it is given (by some supernatural being) in addition to the algebra, you can actually reconstruct a Riemannian manifold from an abstract commutative C*-algebra and D.

Next, we have to write down actions. Unfortunately, I now have to run to get to a seminar on the status of the LHC. This will be continued, so stay tuned!

Before I start I would like to point out that I have not studied this paper in any detail but only read over it quickly. Therefore, there are probably a number of misunderstandings on my side and you should read this as a report of my thoughts when scanning over that paper rather than a fair representation of the work of Chamseddine and Connes.

Most of what I know about Connes' version of non-commutative geometry (rather than the *-product stuff which has only a small overlap with this) I know from the excellent lecture notes by Landi. If you want to know more about non-commutative geometry beyond the *-product this is a must read (except maybe for the parts on POSETs which are a hobby horse of the author and which can be safely ignored).

But enough of these preliminary remarks. Let's try to understand the spectral action principle!

Every child in kindergarden knows that if you have a compact space you get a commutative C*-algebra for free: You just have to take the continuous functions and add and multiply complex conjugate them point-wise. As norm you can take the supremum/maximum norm (here the compactness helps). This is what is presented in every introduction section of a talk on non-commutative geometry, but onfortunately, this is completely trivial.

The non-trivial part (due to Gelfand, Naimark and Segal) is that it works also the other way round: Given a (unital) commutative C*-algebra, one can construct a compact space such that this C*-algebra is the algebra of the functions on it. Furthermore, if one has got the algebra from the functions on a space, the new space is homeomorphic to (i.e. the same as) the original one.

How can this work? Of course, anybody with some knowledge in algebraic geometry (a similar endeavour but there one deals with polynomials rather than continuous functions) knows how: First, we have to find the space as a set of points. Let's assume that we started from a space and we know what the points are. Then for each point x we get a map from the functions on the manifold to the numbers, we simply map f to f(x). A short reflection reveals that this is in fact a representation of the algebra which is one-dimensional and thus irreducible. It turns out that all irreducible representations of the algebra are of this form. Thus, we can identify the points of the space with the irreducible representations of the algebra.

I could have told an equivalent story in therms of maximal ideals which arise as kernels of the above maps, i.e. the ideals of functions that vanish at x.

Next, we have to turn this set into a topological space. One way to do this is to come up with a collection of all open or all closed sets. In this case, however, it is simpler to define the topology in terms of a closure map, that is a map that maps a set of points to the closure of this set. Such a map has to obey a number of obvious properties (for example if I apply the closure a second time it doesn't do anything or a set is always contained in its closure). In order to find this map we have to make use of the fact that if a continuous functions vanishes on a set of points then by continuity it vanishes as well on the limit points of that set, that is on its closure. Therefore, I can define the closure of a set A of points as the vanishing points of all continious functions that vanish on A. This definition then has an obvious reformulation in terms of irreducible representations instead of points. Think about it, as a homework!

Now that we have a topological space, we want to endow it with a metric structure. But instead of giving a second rank symmetric tensor, we specify a measure for the distance between two points x and y as this is closer to our formalism so far. How can we do this in terms of functions?

The trick is now to introduce a derivative. This allows us to restrict our attention to functions which are differentiable and whose gradient is nowhere greater then 1, i.e. the supremum norm of the gradient is bounded by one (this is where the metric enters implicitly since we use it to compute the length of the gradient). Amongst all such functions f we maximize l=|f(x)-f(y)|. Take now the shortest path between x and y (a geodesic). Since the derivative of f along this paths is bounded by one l cannot be bigger than the distance between x and y. And taking the supremum over all possible f we find that l becomes the distance.

Again, I leave it as homework to reformulate this construction in the algebraic setting in terms of irreducible representations and the distance between them. There is only a slight technical complication: We started with the algebra of scalar functions on the manifold, but the gradient maps those to vector fields, which are a different set of sections. This makes life a bit more complicated. Connes' solution here is to forget about scalar functions and take spinors (precisely, sections of a spinor bundle) instead. If those exist, all the previous constructions work equally well. But now we can use the Dirac operator, which maps spinors to spinors (with another slight complication for Weyl spinors in even dimensions).

In the algebraic setting, the Dirac operator D is just some abstract linear (unbounded) operator which fulfills a number of properties listed on p. 2 of the Chamseddine-Connes paper. Once D is given by some supernatural being in addition to the algebra, you can actually reconstruct a Riemannian manifold from an abstract commutative C*-algebra and D.
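For reference, the distance formula this construction boils down to in Connes' framework, with the gradient bound of the previous paragraphs traded for a bound on the commutator with D, reads

\[
d(x,y) \;=\; \sup\bigl\{\, |f(x)-f(y)| \;:\; f \in \mathcal{A},\ \|[D,f]\| \le 1 \,\bigr\},
\]

where A is the (commutative) algebra; for functions on a spin manifold, \(\|[D,f]\|\) is precisely the supremum norm of the gradient of f, so this reproduces the construction above.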

Next, we have to write down actions. Unfortunately, I now have to run to get to a seminar on the status of the LHC. This will be continued, so stay tuned!

## Sunday, November 30, 2008

### What is (the) time?

After seeing Sean revise his view on Templeton-funded events and submit an essay to the FQXi essay contest, it seems that this is now officially PC.

So, nothing stands in my way to submit my own.

## Friday, November 14, 2008

### Picking winners

I just came across a post at fontblog describing how they picked a winner from 36 contributors: They take a die and throw it once. The number shown is the number of further throws of the die that are added up to yield the winning number. Obviously, any number 1..36 can be picked, but the distribution is not uniform: Contributor 1 is picked if the first throw yields a one and the second one as well: probability 1/36. Contributor 2 can be picked by two sequences, 1 2 and 2 1 1, giving 1/6 x 1/6 + 1/6 x 1/6^2, and so on. Contributor 36 is only picked when seven sixes are thrown in a row, i.e. with probability 1/6^7.

Of course I could not resist writing a perl program to compute the probability distribution.

Somewhat surprisingly, contributor 29 was eventually picked although he only had a probability of 0.28%, a tenth of the average 1/36.
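As a quick sanity check of these numbers, here is a short Python version of the same computation (my own sketch, independent of the perl program):

```python
from fractions import Fraction

def winner_distribution():
    """Exact win probabilities for the dice scheme: throw one die; its
    value t says how many further throws are summed to pick the winner."""
    total = {}
    for t in range(1, 7):
        # counts[s] = number of sequences of t throws with sum s
        counts = {0: 1}
        for _ in range(t):
            nxt = {}
            for s, c in counts.items():
                for w in range(1, 7):
                    nxt[s + w] = nxt.get(s + w, 0) + c
            counts = nxt
        # the first throw equals t with probability 1/6; the t further
        # throws realise sum s in counts[s] out of 6**t ways
        for s, c in counts.items():
            total[s] = total.get(s, Fraction(0)) + Fraction(c, 6 ** (t + 1))
    return total

dist = winner_distribution()
# dist[1] == 1/36, dist[36] == 1/6**7, and dist[29] comes out near 0.28%
```

The probabilities sum to one, and contributor 29's chance is indeed about a tenth of the uniform 1/36.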

And here is the program:

```perl
#!/usr/bin/perl
# Probability distribution of the scheme: the first throw t of the die
# determines how many further throws are summed to give the winning number.

%h = (0 => 1);
for $t (1..6) {
    %h = nocheinwurf(%h);    # one more throw ("noch ein Wurf")
    print "$t:\n";
    foreach $s (sort { $a <=> $b } keys %h) {
        print "$s:" . $h{$s} / 6**$t . " ";
        # first throw shows $t (prob. 1/6), then $t throws summing to $s
        $total[$s] += $h{$s} / 6**($t + 1);
    }
    print "\n\n";
}
print "total:\n";
foreach $s (1..36) {
    print "$s $total[$s]\n";
}

sub nocheinwurf {
    my %bisher = @_;
    my %dann;
    for $w (1..6) {
        foreach $s (keys %bisher) {
            $dann{$s + $w} += $bisher{$s};
        }
    }
    return %dann;
}
```

## Monday, October 27, 2008

### Acer Aspire One and Fonic UMTS

Sorry for these uninspired titles of computer-related posts. I picked them for search engine optimization: I had not been able to find information on these topics easily on Google and at least want others to have a simpler life.

So this post is about how I managed to hook up my shiny new (actually, there are already the first few spots on the keyboard...) netbook to the incredible Fonic UMTS flat rate. They offer a USB stick (a Huawei E160 internally) and a SIM card for 90 Euros. There is no further monthly cost and you only pay 2.50 Euro per calendar day you use it. That includes the first GB at UMTS speed per day; if you need more, it drops down to a GPRS connection.

The fact that you can read this post is proof that I got it to work as I type this in the departure lounge of Schönefeld airport waiting for my flight to Munich.

Basically, there are two pieces of software that are required. The first is usb_modeswitch, which turns the stick from a USB storage device into a serial device. I wanted to compile it myself, which led me to learn (I didn't expect that) that the GNU Linux of the Aspire One comes without a C compiler, and I had to manually install gcc. Furthermore, this had to be selected from the list of packages, since just selecting the "development environment" of the grouped package selection led to a version conflict that the stupid package manager (which I was complaining about before) was unable to resolve. But once gcc is there, compilation is a matter of seconds. I also had to move the config file to /etc and change a few semicolons (which are the comment markers) to activate the sections that refer to the E160.

So now, to make a connection I turn on the Aspire One and insert the Fonic stick. Then, as root, I run

usb_modeswitch -c /etc/usb_modeswitch.conf -W

The actual connection is made with umtsmon. I have to wait a few seconds after running usb_modeswitch, as otherwise the stick will not be detected. In the config, the APN has to be set to pinternet.interkom.de and the checkbox noauth has to be ticked, while "replace default route" has to be unchecked, since the version of pppd that comes with the Aspire One does not understand this option (which apparently was introduced by SuSE and adopted by Debian). Then you click connect and it should work!

Update: I forgot to mention that because you had to disable the "replace default route" option, the default route (if existent) will not be replaced. Thus you either have to remove it by hand (using 'route') or, better, just make sure you don't already have a network connection running, e.g. WiFi, when trying to connect (and why would you want to connect via UMTS if you already had a better network connection anyway?).

## Sunday, October 26, 2008

### Why gold?

I have seen this being discussed in several newspaper articles: In view of the financial crisis, people withdraw their money from the bank and buy gold. They do this to such an extent that Germany's main internet gold coin dealer has stopped accepting orders. Amazing. What do they think they are doing?

Ok, you don't trust your local bank XY anymore and, depending where you are, you either have so much money that you are beyond the limit or you even don't trust the banking guarantee system anymore. Therefore, you don't want to leave your money in your account. Fine, you are a pessimist.

But why on earth are you buying gold? Obviously you are so afraid that you are willing to renounce the interest you could be getting for your money. But then why don't you keep your money in coins and bills and put it in a safe place (safe, mattress, cookie jar)?

But that's not good enough for you. You don't even trust your (or any other) currency anymore and want to avoid the risk of inflation. That is you don't trust that somebody is willing to give you enough real stuff for your bills. That is, you mistrust your complete local economy.

You really want to make sure. Therefore you buy gold. Because that keeps its value. That you know.

But why on earth do you believe that? Why do you think that the value of gold is in any way less symbolic, or less based on common agreement, than the agreement of many people that they will give you food if you provide them with enough pieces of paper on which somebody printed the portrait of some former president in green color?

I have bad news for you: You cannot eat the gold; at least it has zero nutritional value (see the wiki article linked above). You can use it for electric contacts that will not corrode. And yes, you can make jewellery from it (as is done with a third of the gold that is mined). But again, the value of the jewellery is just that "everybody knows gold is precious". And this is only the case as long as everybody believes it. There is not much you can actually do with gold that you cannot, for example, do with copper or palladium (which, by the way, is an excellent catalyst for many reactions involving hydrogen). The value of gold is only high because everybody believes that. But that's the same for dollar bills or any other currency (leaving out the Icelandic krona for the moment).

And it is not beyond historical example that people stopped believing that gold has high value: As pointed out by my history teacher in high school, this was the main strategic mistake of the Spanish crown: not realising that you can destroy the market if you suddenly have a lot of it. The Spanish kept importing huge amounts of gold that they had found in the Americas without noticing early enough that the others lost their interest in gold as soon as the Spanish suddenly had so much. At the same time, others found that it is much better to invest in the real economy.

It took some more years to understand that the value of a currency is not so much based on all the gold that the central bank holds but much more on the economy that backs it.

And of course the telephone sanitisers who stranded on the earth in our past, as told in the Hitchhiker's Guide to the Galaxy, also had the great idea to use leaves as bills, which led them to burn down all their woods.

## Saturday, October 18, 2008

### Aharonov Bohm phase from self-adjointness

This week the course in "Mathematical Quantum Mechanics" that I am co-teaching with Laszlo Erdös this term started. Since I had to rush a bit at the end of yesterday's lecture, I composed some notes on how to extend the momentum operator in a self-adjoint way for the particle on the interval, and how one can see an Aharonov-Bohm phase as the ambiguity of this procedure.

## Thursday, October 16, 2008

### Unbounded operators are not defined on all of H

I am looking for an elementary proof of the fact that an unbounded operator cannot have the whole Hilbert space as its domain of definition. In the textbooks I had a look at, this follows from the closed graph theorem, which in turn is proved using somewhat heavy functional analysis machinery. What I am looking for is something that is accessible to physicists who have just learned about unbounded operators and that could be turned into a homework problem. If you know such a reference or could give me a hint (in the comments or to helling@atdotde.de) I would greatly appreciate it!

## Wednesday, September 17, 2008

### Anti crackpot blog and falsifyability

There is a new blog dealing with crackpots in HEP. It probably grew out of the recent hype about the LHC being a doomsday machine. I left a comment on the first post that I would like to pull out of the comment thread, so I reproduce it here:

I would like to add that I think the criteria for good science are unfortunately slightly more complicated than just asking for "falsifiability".

This notion comes from the (pre-war) neo-positivist school of thought, with Karl Popper being one of the major figures. It was tremendous progress after the realisation that, strictly speaking, you will never be able to prove empirical statements like "all ravens are black" (the classic example) by looking only at a finite subsample of all ravens.

This is how far the education of a typical physicist in philosophy of science goes.

Unfortunately, falsifiability is not a good criterion either, at least from the standpoint of logic: The thing is that non-trivial scientific statements are much more complex than a simple "all ravens are black". When you say such a thing (or write it in a paper), there are many qualifiers that go with it, at least implicitly. You have to know what exactly "raven" means, what exactly you call "black" and not "dark grey", and you have to say how you measure the color, which might come with a theory of its own that makes sure your measuring apparatus actually measures color (as, for example, perceived by the eye).

Speaking in terms of formal logic, all these presuppositions are connected to the "all ravens are black" by a long long chain of "and"s.

Now imagine you observe a white raven (after having seen a million black ravens). At the very least you would be tempted to investigate further whether that bird is really a raven or was painted white. The first reaction is typically not to throw away the "black ravens" theory if it had a lot of support before. Often the reaction is rather to fiddle with the presuppositions of the theory (for example by slightly changing the definition of what you call a 'raven').

The "all ravens are black" part is somewhat protected after receiving some positive evidence for some time and you rather add bits and pieces to the silent background theory before giving up the big statements.

Thus we have to agree that it's not that easy to falsify even simple statements like "all ravens are black". Furthermore, this is not what happens in science (historically speaking)! You could probably still find an ether theory, where the ether is dragged along by the earth in its orbit around the sun and has many more very unusual properties, that is not ruled out empirically. Still, nobody in her right mind believes in it, since relativity is so much more successful!

All this is much better explained in Thomas Kuhn's "The Structure of Scientific Revolutions", and I urge everybody with the slightest interest in these matters to read this classic. He argues much more for the consensus of the scientific community as the relevant criterion (which might be a self-referential criterion for science conspiracy theorists).

I believe that naively following this wrong criterion of "falsifiability" has gotten beyond-the-standard-model theory into a lot of trouble in the public perception.

## Monday, September 15, 2008

### Timing of arXiv.org submissions

I learned an interesting fact from Paul Ginsparg's talk at Bee's Science in the 21st Century conference: around the middle of his presentation he showed a histogram of the times of day at which people submit their preprints to the astro-ph archive. This histogram showed an extremely pronounced peak right after the deadline. After. I had suspected that preprints would pile up before the deadline, because people still want to make it into tomorrow's listing for fear of being scooped. But this is not the case. People want to be at the top of the listing!

My immediate reaction was that the number of nutters amongst astrophysicists is higher than expected. But the next piece of news was at least as shocking: It seems to work! The papers submitted just after the deadline accumulate 90 citations on average, compared to 45 for generic astro-ph papers.

There seem to be two possible explanations of this effect (which is also there for other archives, but not quite as pronounced): It could just work, the readers' attention span being so short (you might want to blame Sesame Street with its 1'30" spots) that they get distracted before they reach the end of the listing of new papers (this is the only place where submission time matters, according to Ginsparg). Alternatively, people who care to submit right after the deadline also care a lot about their citation count and use all means to get it up.

Some further investigation seems to show that the second effect is much stronger than the first. Somebody from the audience suggested studying the correlation of submission time with self-citations and small citation circles.

What I would want to look at is whether those are also the people who have a large percentage of their citations from revised versions of preprints (i.e. are those the people who write these emails begging for citations --- I have to admit I was one of them today)?

## Friday, September 12, 2008

### New Toy: Acer Aspire One 150L

Since yesterday 11pm I have had my shiny new toy: an Acer Aspire One 150L netbook computer. An Intel Atom CPU, 1GB RAM, 1024x600 pixels and less than 1kg of weight should make it the ideal travel companion. The "L" in the name means that this is the first computer I ever bought that came with Linux preinstalled. What distinguishes it from its competitors (like the Asus Eee PC) is that it has a proper hard drive with a comfortable 120GB, and not just a tiny solid state disk. The thing comes for 349 Euros from amazon.de (I have seen it for US$350 at Best Buy in San Francisco).

Of course, such a brand new device comes with a number of things to tweak, and not all solutions are easily found by just googling. Therefore, I will keep updating this post to record what I have done so far:

- The thing comes out of the box and boots in a few seconds into some desktop with application icons all over the place. What is missing is a terminal window! But you can use the file system browser to run /usr/bin/terminal, which is a good start.
- During set-up you have to come up with a password. It turns out this is then set both as the root password and as the password for the preconfigured user "User". I have not yet dared to change the root password or to rename user "User" to "robert", since the built-in applications might assume that I am User.
- Once you have a terminal you should run (as root) xfce-setting-show, as this allows you to turn on the pop-up menu when right-clicking on the desktop.
- In this pop-up menu you can find a package manager. It becomes obvious that this Linux is based on Fedora and the conflict resolution just plain sucks. But it's better than nothing, and you can eventually install openssh to be able to log in to uni. I hope that I will have Debian running on this machine eventually, but currently the corresponding web sites still look a bit scary.
- They have preinstalled a vpnc client. But they forgot to include the tunneling kernel module tun.ko, which gives nasty error messages about /dev/net/tun. Luckily, this module is available here. Just download it to `/lib/modules/2.6.23.9lw/kernel/drivers/net/` and reboot (or insmod it).

## Tuesday, July 29, 2008

### Cloud computing

This morning I read an article, "Die angekündigte Revolution" ("The Announced Revolution"), in this week's Zeit. Its author claims that in the future we will not have computational power in our homes (or with us) but will rather use centralised computing centres accessible over the net. He equates this with the revolutions that came about when a centralised electricity supply was established (instead of factories having their own generators in the basement) and centralised water supply made people independent of a well in the backyard.

I am not so sure.

First of all we've already had that: In the past, there were mainframes connected to many terminals. The pain that came with this set-up was only relieved when computational power appeared on everybody's desks thanks to PCs. So why go back to those old days?

In addition, what I am observing is that the computational power I have local access to has been growing exponentially for as long as I can remember, and I see no end to that trend. Currently, my mobile phone has more memory than my PC had a few years ago, and a processor that can do amazing things. The point is not that I need that, but that it's so cheap I don't mind having it.

True enough, I only very rarely really need CPU cycles. But when I need them (computing something in Mathematica, or visiting a web page with some broken Java or JavaScript that makes my browser busy), I usually want them right now. It's not as if I were planning the need and could just as well outsource it to some centralised server.

It might be a bit different with storage. Having some centralised storage that can be accessed from everywhere (and where other people worry about backup so I don't have to) could be useful: not only for backup and mobile access to all kinds of documents, but also for things like configuration files (bookmarks, for example) and media files. All that assuming that data privacy has been taken care of. This already partly exists, at least for specific applications, but nothing unified as of today (as far as I am aware).

But I cannot see people giving up local computational power. Recently, the part of PCs where performance has been growing most strongly is the video card (by now a massive multiprocessor rendering engine). That development was of course driven by the video game market, and I don't see how it could be moved to a centralised computer.

As of today, I do not know anybody who uses Google Docs. It is more a proof of concept than an application for everyday use. If I really wanted to collaborate on documents I would rather use Subversion or CVS. Again, that has centralised storage, but computation is still local.

Let me finish with two rants about related problems that I recently had: First, I use liferea as my RSS aggregator. That is a nice little program with an intuitive user interface that allows me to quickly catch up with the feeds I am interested in. Unfortunately, it keeps its state (which posts it has already downloaded and which posts I have looked at) in a stupid format: it's actually a whole directory of XML and HTML files! So to continue reading blogs on my laptop from where I left off on my PC requires scp'ing that whole directory. Not to mention there is no way to somehow 'merge' two states...

The other thing is email. You might think this is trivial, but to me it seems it is not. My problem is, first, that I am getting a lot of it and want my computer to do some of the processing for me. Thus I have the mail delivery program sort mail from mailing lists into appropriate folders (including a spam folder). Then, on a typical day, I want to read it with an advanced reader (which in my case is alpine, the successor of pine), the killer feature being to automatically save incoming mail in a folder matching my nickname for the author or the author's name, and to save outgoing mail to a folder according to the recipient. Not to mention I have about half a gig of old mail distributed over 470 folders, more than one can easily deal with in one of the GUI clients like Thunderbird.

That is all nice and well. Once I am at my PC I just run alpine. Or if I am at home or travelling and connecting for a somewhat decent machine (i.e. one that has an ssh client or at least allows me to install one) I ssh to my PC and read mail there (actually, I ssh to the firewall of the physics department from there ssh to one of the computers of the theory group and from there eventually to my PC as it is well hidden from the outside world due to some other people's idea of computer security).

What if that other computer cannot do ssh but there is only a web browser? My current (and for my upcoming four week holiday in the south west of the USA) solution is to go to a page that has an ssh client as a java applet and then to step one above. But that is a but like the mathematician in the joke that sees the dust bin in his hotel room burning and takes the buring bin bin to the physicist's hotel room thereby reducing the problem to an already solved one (the physicist had extinguished a fire in the previous joke).

Why is it so hard these days to set up decent webmail? At least for the duration of a holiday? My problem is that there are three classes of computers: Type one are connected to the internet but I do not have sufficient privileges to install software. Type two I have the privileges but they don't give me an IP that is routed to the internet (even more: that accepts incoming connections). Type three: A computer to which I have root access and which as a routed IP but where the software installation is so out of date I cannot install software with one of the common tools (i.e. apt-get) without risiking to install/upgrade other packages that require at least a reboot. I should mention that that computer is physically located some hundred kilometers away from me and I am the only person who could reboot it. A major update is likely to effectively make me lose that computer.

These things used to be so much easier in the past: Since the days of my undergraduate studies I always had some of my own linux boxes hooked up to some university (or DESY in that case) network. On that I could have done the job. But with recent obsession with (percieved) security, you only get DHCP addresses (with which one can deal using dyndns etc) but also which are behind firewalls that do not allow for incoming connections. Stupid, stupid, stupid!!!

I am really thinking about renting one of those (virtual) servers at one of the hosters which you can now do for little money to solve all these problems. But that should not really be neccessary!

I am not so sure.

First of all we've already had that: In the past, there were mainframes connected to many terminals. The pain that came with this set-up was only relieved when computational power appeared on everybody's desks thanks to PCs. So why go back to those old days?

In addition, what I am observing is that the computational power I have local access to has been growing exponentially for as long as I can remember. And I see no end to that trend. Currently, my mobile phone has more memory than my PC had a few years ago, and it has a processor that can do amazing things. The point is not that I need that, but that it is so cheap I don't mind having it.

True enough, I only very rarely really need CPU cycles. But when I need them (computing something in Mathematica, or visiting a web page with some broken Java or JavaScript that keeps my browser busy), I usually want them right now. It's not something I plan ahead and could just as well outsource to some centralised server.

It might be a bit different with storage. Having some centralised storage that can be accessed from everywhere (and where other people worry about backups so I don't have to) could be useful, not only for backup and mobile access to all kinds of documents, but also for things like configuration files (bookmarks, say) and media files, all assuming that data privacy has been taken care of. Something like that already partly exists (at least for specific applications), but nothing unified, as far as I am aware.

But I cannot see people giving up local computational power. Recently, the part of the PC where performance has been growing most strongly is the video card (by now a massive multiprocessor rendering engine). That development was of course driven by the video game market, and I don't see how it would be moved to a centralised computer.

As of today, I do not know anybody who uses Google Docs. It is more a proof of concept than an application for everyday use. If I really wanted to collaborate on documents, I would rather use Subversion or CVS. Again, that has centralised storage, but the computation is still local.

Let me finish with two rants about related problems that I recently had: First, I use Liferea as my RSS aggregator. It is a nice little program with an intuitive user interface that allows me to quickly catch up with the feeds I am interested in. Unfortunately, it keeps its state (which posts it has already downloaded and which posts I have looked at) in a stupid format: it is actually a whole directory of XML and HTML files! So to continue reading blogs on my laptop from where I left off on my PC requires scp'ing that whole directory. Not to mention that there is no way to somehow 'merge' two states...

The other thing is email. You might think this is trivial, but to me it seems it is not. My problem is, first, that I am getting a lot of it and want my computer to do some of the processing for me. Thus I have the mail delivery program sort mail from mailing lists into appropriate folders (including a spam folder). Then, on a typical day, I want to read it with an advanced reader (which in my case is alpine, the successor of pine), the killer feature being that it automatically saves incoming mail to a folder matching my nickname for the author (or the author's name) and outgoing mail to a folder according to the recipient. Not to mention that I have about half a gigabyte of old mail distributed over 470 folders, more than one can easily deal with in one of the GUI clients like Thunderbird.

That is all nice and well. Once I am at my PC, I just run alpine. Or, if I am at home or travelling and connected from a somewhat decent machine (i.e. one that has an ssh client or at least allows me to install one), I ssh to my PC and read mail there (actually, I ssh to the firewall of the physics department, from there ssh to one of the computers of the theory group, and from there eventually to my PC, as it is well hidden from the outside world due to some other people's idea of computer security).
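With a sufficiently recent OpenSSH, a chain of hops like this can at least be automated in `~/.ssh/config` using the `ProxyJump` option (an assumption: the option does not exist in older versions, where `ProxyCommand` achieves the same; all host names below are made-up placeholders):

```
# ~/.ssh/config -- hypothetical host names for the three-hop chain
Host firewall
    HostName firewall.physics.example.edu

Host theory
    HostName theory.internal.example.edu
    ProxyJump firewall

Host mypc
    HostName mypc.theory.internal
    ProxyJump theory
```

After that, a single `ssh mypc` performs all the hops transparently, and scp through the chain works the same way.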

But what if that other computer cannot do ssh and there is only a web browser? My current solution (also for my upcoming four-week holiday in the south west of the USA) is to go to a page that has an ssh client as a Java applet, and then proceed as above. But that is a bit like the mathematician in the joke who sees the dust bin in his hotel room burning and carries the burning bin to the physicist's hotel room, thereby reducing the problem to an already solved one (the physicist had extinguished a fire in the previous joke).

Why is it so hard these days to set up decent webmail, at least for the duration of a holiday? My problem is that there are three classes of computers: On type one I am connected to the internet but do not have sufficient privileges to install software. On type two I have the privileges, but they don't give me an IP that is routed to the internet (let alone one that accepts incoming connections). Type three: a computer to which I have root access and which has a routed IP, but where the software installation is so out of date that I cannot install software with one of the common tools (e.g. apt-get) without risking an install/upgrade of other packages that requires at least a reboot. I should mention that that computer is physically located some hundred kilometres away from me and I am the only person who could reboot it. A major update is likely to effectively make me lose that computer.

These things used to be so much easier in the past: since the days of my undergraduate studies, I have always had some Linux box of my own hooked up to a university (or, in that case, DESY) network. On that I could have done the job. But with the recent obsession with (perceived) security, you only get DHCP addresses (which one can deal with using dyndns etc.), and moreover they are behind firewalls that do not allow incoming connections. Stupid, stupid, stupid!!!

I am really thinking about renting one of those (virtual) servers, which you can now do for little money, to solve all these problems. But that should not really be necessary!

### Observing low scale strings without landscape problems

Tom Taylor is currently visiting Munich, and a couple of days ago he posted a paper with Dieter Lüst and Stephan Stieberger which contains (besides many detailed tables) a simple observation: Assume that for some reason the string scale is so much lower than the observed 4d Planck scale that it can be reached by the LHC (a possible but admittedly unlikely scenario), and that in addition the string coupling is sufficiently small. Then, they argue, the 2 → 2 gluon amplitude is dominated by the first few Regge poles.

The important consequence of this observation is that the amplitudes are (up to the string scale, the only parameter) independent of the details of the compactification and of the way susy is broken: this amplitude is the same all over the landscape, in all 10^500 vacua!

Observationally this would mean the following: at some energy there would be a resonance in gg -> gg scattering (or, even better, several). The angular distribution of the scattering products is characteristic of the spins of the corresponding Regge poles (i.e. 0 for the lowest, 1 for the next, etc.) and, most importantly, the decay width can be computed from the energy of the resonance (which itself measures the free parameter, the string scale).

Of course, those resonances could still be attributed to some other particles, but the spin and decay width would be very characteristic of strings. As I said, all this is with the proviso that the string scale is so low that it can be reached by the LHC (or whichever accelerator is looking for these resonances) and that the coupling is small (which is not much of a constraint, since the QCD coupling is related to the string coupling and is already very small at those scales).
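To make the resonance structure concrete, here is the schematic shape of such an amplitude, namely the classic Veneziano amplitude with a linear Regge trajectory (only an illustration of the pole structure; the actual four-gluon amplitude of the paper differs in its kinematic details):

```latex
A(s,t) \;=\; \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))},
\qquad \alpha(x) \;=\; \alpha(0) + \alpha' x, \qquad \alpha' = 1/M_s^2 .
```

The Gamma function produces poles whenever $\alpha(s) = n$ for $n = 0, 1, 2, \dots$, i.e. a tower of resonances whose masses are all set by the single parameter $M_s$. Near the $n$-th pole the residue is a polynomial of degree $n$ in $t$, a superposition of angular momenta $J \le n$, which is what shows up in the angular distributions mentioned above.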

## Tuesday, July 08, 2008

### Formulas down

The computer that serves formulas for this blog (via mimeTeX) is down. Since it's located under my old desk in Bremen I cannot just reboot it from here. Please be patient (or let me know another host for mimeTeX and a simple way to migrate all the old posts...).

*Update:* Having said this, mathphys.iu-bremen.de is up again thanks to Yingiang You!

## Wednesday, July 02, 2008

### Chaos: A Note To Philosophers

For some reasons (not too hard to guess) I was recently exposed to a number of texts (both oral and written) on the relation between science (or physics) and religion. In those, a recurring misconception is a misunderstanding of the meaning of "chaos":

In the humanities, it seems, an allowed mode of argument (often used to make generalisations or to find connections between different subjects) is to consider the etymology of the term used to describe a phenomenon. In the case of "chaos", Wikipedia is of help here. But at least in maths (and often in physics), terms are treated more like arbitrary labels and yield no further information. Well, sometimes they do, because the people who coined the terms wanted them to suggest certain connections, but in case of doubt, they don't really mean anything.

A position which I consider not much less naive than a literal interpretation of a religious text when it comes to questions of science (the world was created in six days, and that happened 6000 years ago) is to allow a deity to intervene (only) via the fundamental randomness of the quantum world. For one thing, this is quite restricting, and most of the time this randomness just averages out for macroscopic bodies like us, making it irrelevant.

But for people with a liking for this line of argument, popular texts about chaos theory come to the rescue: there, you can read that butterflies cause hurricanes and that this fact fundamentally restricts predictability even on a macroscopic scale: room for non-deterministic interference!

Well, let me tell you, this argument is (not really surprisingly) wrong! As far as the maths goes, the nice property of the subject is that it is possible to formalise vague notions ("unpredictable") and see how far they really carry. What is meant here is that the late-time behaviour is a discontinuous function of the initial conditions at t=0. That is, if you can prepare the initial conditions only up to a possible error of epsilon, you cannot predict the outcome (up to an error delta that might be given to you by somebody else) even by making epsilon as close to 0 as you want.

The crucial thing here is of course what is meant by "late-time behaviour": for any late but finite time t (say, in ten thousand years), the dependence on the initial conditions is still continuous; for any given delta you can find an epsilon such that you can predict the outcome within the margin given by delta. Of course, this epsilon will be a function of t; that is, if you want to predict the farther future, you have to know the current state better. But this epsilon(t) will always (as long as the dynamics is not singular) be strictly greater than 0, allowing for some uncertainty in the initial conditions. It is only in the limit of infinite t that it might become 0, so that any error in the observation/preparation of the current state, no matter how small, leads to an outcome significantly different from the exact prediction.
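This epsilon/delta statement can be made concrete with the simplest chaotic system, the doubling map θ → 2θ mod 1 (a sketch, not tied to any particular physical system; note that in floating point the map consumes one bit of the initial condition per step, so only a few dozen steps are meaningful):

```python
def doubling(theta, steps):
    """Iterate the doubling map theta -> 2*theta mod 1.

    Each step doubles any error in the initial condition, the hallmark of
    chaotic 'sensitive dependence on initial conditions'."""
    for _ in range(steps):
        theta = (2.0 * theta) % 1.0
    return theta

t, delta = 20, 1e-3   # finite prediction horizon and demanded accuracy
theta0 = 0.3

# An initial error of 1e-4 is amplified by 2^20 ~ 10^6: the prediction fails...
coarse = abs(doubling(theta0 + 1e-4, t) - doubling(theta0, t))
# ...but an initial error of 1e-12 only grows to about 1e-6: the prediction
# succeeds. At any finite t the outcome depends continuously on theta0.
fine = abs(doubling(theta0 + 1e-12, t) - doubling(theta0, t))

print(coarse > delta, fine < delta)  # -> True True
```

Only in the limit t → ∞ does the required tolerance epsilon(t) ~ delta/2^t shrink to zero; at every finite horizon there is a nonzero epsilon, which is exactly the point of the paragraph above.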

## Tuesday, June 24, 2008

### Ping

I have been silent here for far too long, mainly because there were a number of other things that had higher priority on my list. So, just as some sort of sign of life here is a video of a cool gadget that I have been toying around with recently:

The Wiimote can be found for less than 30 euros on eBay, and there are probably many fun things to do with the IR sensor and the accelerometer. Maybe combined with an Arduino.

The whiteboard functionality just cries for an application combined with a USB digital TV stick and gromit. This should be just in time for a Kloppomat for the semi-finals starting tomorrow (which will be watched chez the Christandls).

Currently, I am trying to do damage control on my failure to update my laptop from Etch to Lenny (this post is written while I wait for the backup of home directories to be finished).

There are many other items in my pipeline to be written about. Let me list them so I have some additional motivation in the near future:

- In April I attended a conference on different approaches to quantum gravity, which was a lot better than it sounds. I would like to comment at least on the talks on the exact renormalisation group and a non-Gaussian fixed point of gravity and
- the talks by the loop people (including Ashtekar's talk, which he also presented last week as an ASC colloquium here at LMU). Just as a teaser: in the extended discussion sessions at the conference, Thiemann claimed that by coupling to his version of gravity he can quantise any theory. He was then asked about anomalous theories and still claimed that it wouldn't matter. Ashtekar, however, seemed to sense that this was not the best answer and chimed in by mentioning that maybe the anomalous theories later turn out not to have flat low-energy limits. A nice move, following the classic "blame it on the part of the theory nobody has any idea about" strategy.
- I have a little project going trying to join some archaeological data with Google Earth/Maps. What seemed like a trivial coordinate transformation in spherical coordinates turned out to be much more complicated due to the precision required (one has to take into account not only that the earth is an ellipsoid but also that different people use different reference ellipsoids, etc.).

So watch this space!
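To illustrate the coordinate problem in the last item of the list: converting geodetic coordinates to Earth-centred Cartesian coordinates is textbook material, but the answer depends on which reference ellipsoid the latitude/longitude refer to. A minimal sketch (the ellipsoid parameters are the standard published values; a real datum conversion would additionally need the Helmert shift parameters between the two datums):

```python
from math import sin, cos, sqrt, radians

def geodetic_to_ecef(lat_deg, lon_deg, h, a, inv_f):
    """Convert geodetic coordinates on a given reference ellipsoid to
    Earth-centred Cartesian (ECEF) coordinates.
    a = semi-major axis in metres, inv_f = inverse flattening."""
    f = 1.0 / inv_f
    e2 = f * (2.0 - f)                      # squared eccentricity
    lat, lon = radians(lat_deg), radians(lon_deg)
    N = a / sqrt(1.0 - e2 * sin(lat) ** 2)  # prime vertical radius of curvature
    return ((N + h) * cos(lat) * cos(lon),
            (N + h) * cos(lat) * sin(lon),
            (N * (1.0 - e2) + h) * sin(lat))

# The same latitude/longitude interpreted on two different reference
# ellipsoids (WGS84, used by GPS and Google Earth, vs. the historical
# Bessel 1841 ellipsoid underlying many old European maps):
p_wgs84 = geodetic_to_ecef(48.15, 11.58, 0.0, 6378137.0, 298.257223563)
p_bessel = geodetic_to_ecef(48.15, 11.58, 0.0, 6377397.155, 299.1528128)
offset = sqrt(sum((p - q) ** 2 for p, q in zip(p_wgs84, p_bessel)))
print(round(offset))  # a shift of several hundred metres
```

A naive transformation that ignores the choice of ellipsoid is therefore off by far more than the precision one needs for placing archaeological sites on a map.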

## Thursday, April 10, 2008

### Standing on the shoulders of giants

As mentioned before, since getting an iPod for Christmas I am a huge fan of podcasts. I find it is like listening to the radio, except that you decide what they talk about. Currently, my favourite feeds are Chaos Radio Express (in German) and the BBC's In Our Time programme.

Recently, I was listening to an episode about Newton's Principia that discussed the scientific trends at the time Newton published his seminal book. In Cambridge, I had learned that Newton's remark that he was standing on the shoulders of giants was not meant as modestly as it might sound. In fact, its meaning was rather sarcastic: it was referring to Robert Hooke, whose bad posture made it obvious that Newton was in fact saying that he had not learned anything from him.

To me, that had always sounded rather reasonable, given that the law that carries Hooke's name does not sound particularly deep from a modern perspective: it states that to leading order the elastic deformation is linear in the applied force, basically a statement that no surprise happens and the first order does not vanish. Formulated this way, it is quite a minor contribution compared to Newton's axioms and his inverse-square law for gravity.

From the BBC programme, however, I learned that the situation is not that simple: in fact, Hooke had already found an inverse-square law for gravity experimentally and suggested to Newton in a letter that it might be responsible for the elliptic motions of the planets. Hooke himself did not (could not?) prove this and was asking Newton for his opinion.

Later in their lives, the two men, both of difficult character, had an ongoing dispute about scientific priority, and this is where the famous quotation comes from. The Wikipedia page contains more information about it and puts it in the context of (wave) optics rather than gravity.

## Friday, April 04, 2008

### Einstein book and Einstein thoughts

Getting presents is not always easy, especially if you believe the giver has put some thought into picking the present but failed due to lack of knowledge in the relevant area: sometime in high school, for my birthday, a close friend gave me a cardboard circle of fifths so I could look up how many sharps there are in A major or G minor. That seemed like a good idea, since I like to play music a lot. Except that those are the kinds of things you are supposed to know and reproduce from memory (if not from the spinal cord) if you want to get into jazz improvisation. Thus, I could only produce some "uhm, thank you, how nice....".

The same thing happens with popular science physics books. I have not read any in many years, since the density of information that is new to me is usually extremely low. Maybe I browse a bit in a book shop to see which topics are covered and read a page or two to see how a controversial topic is treated. I think this is the same for anyone working in the field. Therefore, in the discussions of the controversial physics books of recent years, the authors' response to criticism from string theorists was often "you have not read the book", and indeed this was true most of the time. But still, people who had browsed through the book as above usually knew what was going on even without reading it cover to cover.

That is the background to my reaction when my parents gave me a book for Christmas which they had bought on their US trip in autumn: "My Einstein", a collection of essays edited by John Brockman. I assumed this would be just another Einstein book, and one that was even a bit late for the Einstein year 2005. So the book sat on my shelf for a couple of months. But a few weeks ago I started reading and was surprised: this is the most interesting book with a physics theme I have read in years! I can strongly recommend it!

The idea of the book is to ask 24 experts in fields related to Einstein's work or life what "Einstein" means to them. And the good thing is that the result is not 24 introductions to special relativity, but 24 aspects of the physicist, the man, the pop star, the philosopher, the politician, seen from 2006, more than fifty years after his death.

The authors include John Archibald Wheeler (the only one who actually interacted with Einstein), Lenny Susskind, Anton Zeilinger, Lee Smolin, George Smoot, Frank Tipler, George Dyson (the son of Freeman Dyson, who was babysat by Einstein's secretary), Maria Spiropulu, Lawrence Krauss and Paul Steinhardt. All of them find interesting and very diverse aspects of the Einstein topic, and reading the book, a number of physics questions came to my mind.

One essay points out that what was peculiar about Einstein's way of thinking was that it was based on thought experiments and thus much more driven by elegance than by observation in the lab. This is illustrated by the fact that the reasoning that led to special relativity was based on an analysis of Maxwell's equations (which are of course symmetric under Lorentz transformations, a fact that was known to Lorentz) rather than on an analysis of the Michelson-Morley experiment.

One should note, however, that this argument is not logically tight: of course it is much more aesthetic to deduce from the fact that the speed of light comes out of Maxwell's equations that it should be universal and the same in all directions. However, this does not follow directly, as one can see by considering the acoustic analogue: from kinetic gas theory one can deduce sound waves, and the speed of sound can be expressed in terms of the molar weight of the gas etc. One finds a wave equation, and that equation is invariant under boosts in which the speed of light is replaced by the speed of sound. Under those "acoustic Lorentz transformations", the speed of sound is the same in all frames. However, this is not a symmetry of the rest of nature, and thus there is a preferred frame: the frame in which the air is at rest.
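For concreteness, here is the acoustic analogue in formulas (a standard textbook fact): small pressure perturbations $\phi$ in a gas obey the wave equation, and in one spatial dimension that equation is invariant under boosts in which the speed of sound $c_s$ plays the role of the speed of light:

```latex
\partial_t^2 \phi - c_s^2\,\partial_x^2 \phi = 0, \qquad
t' = \gamma\Bigl(t - \frac{v\,x}{c_s^2}\Bigr), \quad
x' = \gamma\,(x - v\,t), \quad
\gamma = \frac{1}{\sqrt{1 - v^2/c_s^2}} .
```

The sound waves themselves cannot distinguish these frames, but the Newtonian dynamics of the gas molecules is not invariant under these transformations, so the rest frame of the air remains preferred.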

It could have been that the world is invariant under Galilei transformations and Maxwell's equations hold only in a preferred frame (the rest frame of the ether, say). This possibility cannot be ruled out by pure thought (like a Gedankenexperiment); one has to see which possibility nature has chosen. And that is done, for example, in a Michelson-Morley experiment.

But still: Buy that book and you will enjoy it!

Same thing happens with popular science physics books. I have not read any in many years since the density of information new to me is usually extremely low. Maybe I browse a bit in a book shop to see which topics are covered and read a page or two to see how a controversial topic is covered. I think this is the same for any worker in the field. Therefore, in the discussions of the controversial physics books of recent years, the authors' response to criticism of string theorists was often "you have not read the book" and indeed, this was true most of the time. But still, people haveing browsed through the book as above usually knew what was going on even without reading the book cover to cover.

That is the background to my reaction when my parents gave me a book for christmas which they had bought on their US trip in autumn: "My Einstein", a collection of essays edited by John Brockman. I assumed this would be just another Einstein book and one that was even a bit late for the Einstein year 2005. So the book sat on my shelf for a couple of months. But a few weeks ago I stated reading and was surprised: This was the most interesting book with a physics theme I have read in years! I can strongly recommend it!

The idea of the book is to ask 24 experts in fields related to Einstein's work or life to say what "Einstein" means to them. And the positive thing is that this is not 24 introductions to special relativity but 24 aspects of the physicist, the man, the pop star, the philosopher, the politician in 2006, more than fifty years after his death.

The authors include John Archibald Wheeler (the only one which actually interacted with Einstein), Lenny Susskind, Anton Zeilinger, Lee Smolin, George Smoot, Frank Tipler, George Dyson (the son of Freeman Dyson who was baby sit by Einstein's secretary), Maria Spirolulu, Lawrence Krauss and Paul Steinhardt. All of them find interesting and very diverse aspects of the Einstein topic and reading the book a number of physics questions came to my mind.

One essay was pointing out that what was peculiar about Einstein's way of thinking was that it was based on thought experiments and thus much more driven by elegance than by observation in the lab. This was illustrated by the fact that the reasoning that lead to special relativity was based on an analysis of Maxwell's equations (which are of course symmetric under Lorentz transformations, a fact which was known to Lorentz) rather than on an analysis of the Michelson Morley experiment.

One should note however that this argument is not logically tight: Of course it much more aesthetic to deduce from the fact that the speed of light comes out of Maxwell's equations that it should be universal and that if should be the same in all directions. However this does not follow directly as one can see by considering the acoustic analogue: From an analysis of kinetic gas theory one can deduce sound waves and the speed of sound can be expressed in terms of the molar weight of the gas etc. One finds a wave equation and that equation is invariant under boosts where the speed of light is replaced by the speed of sound. Under those acoustic Lorentz transformations, the speed of sound is the same in all frames. However, this is not a symmetry of the rest of nature and thus there is a preferred frame, the frame in which the air is at rest.

It could have been that the world is invariant under Galilei transformations and Maxwell's equations hold only in a preferred frame (the rest frame of the ether, say). This possibility cannot be ruled out by pure thought (like a Gedankenexperiment); one has to see which possibility nature has chosen. And this is done in a Michelson-Morley experiment, for example.

But still: Buy that book and you will enjoy it!

## Thursday, March 06, 2008

### You have to look hard to see quantum gravity

Of course, we still don't know what the true theory of quantum gravity looks like. But many people have explained over and over again that there are already extremely tight constraints on what this theory can look like: in the small, that theory had better look like the standard model for energies up to about 100GeV, and it has to look like General Relativity, tested at very high precision, for length scales from a few micrometers up to the size of the solar system (screwing up the laws of gravity at that scale by just a tiny amount will change the orbits of the moon --- known with millimeter precision --- and the planets in the long run, and will very likely render the solar system unstable on time scales of the age of the solar system). At larger scales, up to the Hubble scale, we still have good evidence for GR, at least if you accept dark matter (and energy). These are the boundary conditions a reasonable candidate for quantum gravity has to live with.

Considerations like this suggest that most likely you will need observations close to the Planck scale to see anything new but not so radically new that you would have seen it already.

I know only of two possible exceptions: Large extra dimensions (which lower the Planck scale significantly) allowing for black hole production at colliders (which I think of as possible but quite unnatural and thus unlikely) and cosmological gravitational waves (aka tensor modes). At least string theory has not spoken the final verdict on those but there are indications that those are way beyond detection limits in most string inspired models (if you accept those as being well enough understood to yield reliable predictions).

Let me mention two ideas which I consider extremely hard to realise such that they yield observable predictions and are not yet ruled out. The first example is a theory where you screw with some symmetry or principle by adding terms to your Lagrangian which have a dimensionful coefficient corresponding to high energies. Naively you would think that this should influence your theory only at those high energies, and these effects are hidden from low energy observers like us unless we look really hard.

One popular example is theories that mess with the relativistic dispersion relation and, for example, introduce an energy dependent speed of light. Proponents of such theories suggest one should look at ultra high energy gamma rays which have traveled a significant fraction of the universe. Those often come in bursts of very short duration. If one assumes those gammas were all emitted at the same time but then observes that the ones of higher energies within one burst arrive here systematically earlier or later, this would suggest that the speed at which they travel depends on the energy.
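To get a feeling for the orders of magnitude involved, here is a rough Python sketch assuming the simplest linear modification, a delay of order (E/E_Planck)(D/c); the photon energy, distance and O(1) coefficient are illustrative assumptions of mine, not numbers from any particular proposal:

```python
# Order-of-magnitude estimate of the arrival-time delay for a linearly
# modified dispersion relation: dt ~ xi * (E / E_Planck) * (D / c).
# All numbers below are illustrative assumptions, not measured values.
E_photon = 10.0     # photon energy in GeV
E_planck = 1.22e19  # Planck energy in GeV
D = 3.1e25          # distance travelled in m (about 1 Gpc)
c = 3.0e8           # speed of light in m/s
xi = 1.0            # dimensionless coefficient, assumed to be O(1)

dt = xi * (E_photon / E_planck) * (D / c)
print(f"time delay ~ {dt:.2f} s")
```

The delay comes out as a sizeable fraction of a second, which is why short-duration bursts seen over cosmological distances are the natural place to look: the Planck suppression is compensated by the enormous travel time.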

The problem with screwing with the dispersion relation is that you are very likely breaking Lorentz invariance. The proponents of such theories then say that this breaking is only by a tiny amount, visible only at large energies. Such arguments might actually work in classical theories. But in a quantum theory, particles running in loops transmit such breaking effects to all scales.

Another way to put this: in the renormalisation procedure, you should allow for all counter terms allowed by your symmetries. For example, in phi^4 theory there are exactly three renormalisable, Lorentz invariant counter terms: phi^2, phi^4 and phi Box phi. Those correspond to mass, coupling constant and wave function renormalisation. But if your theory is not Lorentz invariant at some scale, you have no right to exclude terms like (d_x phi)^2, where d_x is the derivative in the x-direction (the first-order term phi d_x phi would just be a total derivative). Once those terms are there, they have no reason to have tiny coefficients after renormalisation group flow. But we know with extremely high precision that those coefficients are extremely tiny, if non-zero at all, in the real world (in the standard model, say).
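In formulas, and schematically (a sketch with my own normalisations; signs and combinatorial factors suppressed):

```latex
% Renormalisable, Lorentz invariant counter terms of phi^4 theory:
\mathcal{L}_{\mathrm{ct}}
  = \frac{\delta_Z}{2}\,\phi\,\Box\,\phi
  - \frac{\delta_m}{2}\,\phi^2
  - \frac{\delta_\lambda}{4!}\,\phi^4
% wave function, mass and coupling renormalisation, respectively.

% Without Lorentz invariance nothing forbids, e.g., a term singling
% out the x-direction (phi d_x phi alone would be a total derivative):
\mathcal{L}_{\mathrm{LV}} = \frac{\delta_x}{2}\,(\partial_x \phi)^2
```

Nothing in the renormalisation group flow keeps \delta_x small once the symmetry that forbade it is gone; that is the whole point of the argument.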

So if your pet theory breaks Lorentz invariance, you had better explain why (after taking into account loop effects) we don't see this breaking already today. So far, I have not seen any proposed theory that achieves this.

There is an argument of a similar spirit in the case of non-commutative geometry. If you start out with [x,y] = i theta, then theta has dimension length^2 (and in more than 2D breaks Lorentz invariance, but that's another story). If you make it small enough (the Planck length squared, say), it's suggestive to think of it as a small perturbation which will go unnoticed at much larger scales (just like the quantum nature of the world is not visible if your typical action is much larger than h-bar). Again, this might be true in the classical theory. But once you consider loop effects, you have UV/IR mixing (the translation from large energy scales to low energy scales) and your tiny effect is seen at all scales. For example, in our paper we worked this out in a simple example and demonstrated that the 1/r^2 Coulomb type force law is modified: the force dies out exponentially over distance scales of the order of sqrt(theta), the length scale you were going to identify with the Planck scale in the first place, and, whoops, your macroscopic force is gone...
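To illustrate how dramatic such a modification is, here is a small Python sketch; the exponential damping form below is a caricature of an exponentially dying force, not the actual result of the paper, and all numbers are arbitrary illustrative choices:

```python
from math import exp

# Caricature of a Coulomb-type force exponentially damped over the
# noncommutativity length scale sqrt(theta). Illustrative only.
sqrt_theta = 1.0  # work in units where sqrt(theta) = 1

def coulomb(r):
    """Ordinary 1/r^2 force (coupling set to 1)."""
    return 1.0 / r**2

def damped(r, ell=sqrt_theta):
    """The same force with an exponential cutoff over the scale ell."""
    return exp(-r / ell) / r**2

# Already at r = 10 * sqrt(theta) the force is suppressed by e^{-10}:
r = 10.0
suppression = damped(r) / coulomb(r)
print(f"suppression at r = 10 sqrt(theta): {suppression:.1e}")
```

If sqrt(theta) is the Planck length, any macroscopic distance is a fantastically large multiple of it, so the surviving force is, for all practical purposes, zero.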

A different example is variable fundamental constants. At first, those look like an extremely attractive feature to a string theorist: we know that constants like the fine structure constant are determined, for example, by details of your compactification. Those in turn are parametrised by moduli, and thus it's very natural to think of fundamental constants as scalar field dependent. Concretely, for the fine structure constant, in your Lagrangian you replace F^2 by sF^2 for some scalar s which you add and which is stabilised by some potential at the value that corresponds to alpha=1/137.
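Schematically, the replacement looks like this (a sketch; the kinetic term, the form of the potential and the relation alpha ~ 1/s are my own illustrative conventions):

```latex
% Promote the gauge coupling to a scalar: replace F^2 by s F^2,
\mathcal{L} = -\frac{s}{4}\,F_{\mu\nu}F^{\mu\nu}
  - \frac{1}{2}\,\partial_\mu s\,\partial^\mu s - V(s)
% with V(s) stabilising s at a value s_0 such that the effective
% coupling, alpha(s) \propto 1/s, equals 1/137 there. Fluctuations
% of s around s_0 then look like a varying fine structure constant.
```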

The problem is once again that either the fixing is so tight that you will not see s changing, or s is effectively sourced by electric fields and shows up in violations of the equivalence principle as it transmits a fifth force. You might once more be able to avoid this problem by extremely fine tuning the parameters of this model just to avoid detection (you could make the coupling much weaker than the coupling to gravity). But this fine tuning is once more ruined by renormalisation: after including the first few loop orders, s will no longer couple only to F^2 but to all standard model fields, and will have an even harder time playing hide and seek and remaining unobserved up to today (and yes, tests of the equivalence principle have very high precision).

You might say that non-commutative geometry and varying constants are not really quantum gravity theories. But the idea should be clear: we already know a lot of things about physics, and it's very hard to introduce radically new things without screwing up things we already understand. I'm not saying it's impossible. It's just that you have to be really clever, and a naive approach is unlikely to get anywhere. So the New Einstein really has to be a new Einstein.

But in case you feel strong today, consider this job ad.


## Thursday, February 28, 2008

### Zeit Sudoku Bookmarklet

As you know, I am a Sudoku addict. For a nice ten minute break I often download the daily sudoku from Die Zeit. Until recently, from this bookmarked page, I had to change the level from "leicht" to "schwer" and then press the button for the PDF version to print out.

But now they have introduced yet another page before that (where you have to choose between the Flash version, the puzzle from the printed (only weekly) newspaper, or the old-style version that allows the download). You could no longer bookmark the next page, as its URL already contained some sort of session ID.

Of course, they want me to go through all these pages to make sure I do not miss any of the ads they want to present to me. But I think that this new version requires a few mouse clicks too many, and so I decided to have a look at the page's source code.

It turns out that the URL for the PDF no longer contains a session ID but instead contains today's date, which is slightly more than you can do with a static bookmark. But that got me thinking that one might be able to solve this problem with a bookmarklet: a bookmark that makes use of the fact that you can have JavaScript in a URL, and thus in a bookmark.

The last time I looked into JavaScript was roughly ten years ago, and at that time it seemed like a very stupid idea to have some crippled language where you have to transmit all of the source code to the browser, which then slowly interprets it, when you can do much more powerful things on the server side with CGI scripts.

Since then, a lot of time has passed and I have heard many interesting things (let me mention only AJAX) suggesting I should maybe reconsider my old dismissal of JavaScript. I had a look through a number of reference sheets and here it is: my first own JavaScript snippet, a sudoku download bookmarklet. Clicking on this link (or bookmarking it and retrieving the bookmark) brings you directly to the PDF version of the latest "schweres" sudoku! Here is the source:


```javascript
javascript:var d=new Date();
// month is zero-based, so add 1; zero-pad month and day to two digits
var m=d.getMonth()+1;
if(m<10){m="0"+m};
var t=d.getDate();
if(t<10){t="0"+t};
// the PDF URL contains today's date as YYYY-MM-DD
open('http://sudoku.zeit.de/sudoku/kunden/die_zeit/pdf/sudoku_' +
  d.getFullYear()+'-'+m+'-'+t+'_schwer.pdf')
```

## Thursday, February 07, 2008

### Geometric Hamilton Jacobi

Today, over lunch, together with Christian Römmelsberger, we tried to understand Hamilton-Jacobi theory from a more geometric point of view.

The way this is usually presented (at the very end of a course on classical mechanics) is in terms of generating functions for canonical transformations such that in the new coordinates the Hamiltonian vanishes. Here I will rewrite this in the language of symplectic geometry.

As always, let us start with a 2N dimensional symplectic space $M$ with symplectic form $\omega$. In addition, pick $N$ functions $Q^i$ such that the submanifolds $\{Q^i=\text{const}\}$ are Lagrangian, that is, each tangent space $T_x\{Q=\text{const}\}$ is a Lagrangian subspace of $T_xM$ (meaning that the symplectic form of any two tangent vectors of the submanifold vanishes). If this holds, the $Q^i$ can be regarded as position coordinates.

Starting from these Lagrangian submanifolds, we can locally find a family of 1-forms $\theta_Q$ in the normal bundle such that $\omega = d\theta_Q$. You should think $\theta_Q = P_i\,dQ^i$ for appropriate momentum coordinates $P_i$ on the Lagrangian leaves of constant $Q$. But here, the $P_i$ are just coefficient functions that make $\theta_Q$ a potential for $\omega$.

Now we repeat this for another set of position coordinates $q^i$ which we assume to be "sufficiently independent" of the $Q^i$, meaning that $dq^1\wedge\cdots\wedge dq^N\wedge dQ^1\wedge\cdots\wedge dQ^N \ne 0$. This implies that locally $(q^i,Q^i)$ are coordinates on $M$. With the $q^i$ comes another 1-form $\theta_q$, and since $d\theta_q = \omega = d\theta_Q$, the two are locally related by a "gauge transformation": we have $\theta_q = \theta_Q + dF$ for a function $F$.

Let's look at $dF$ a little bit closer. A general normal 1-form would look like $\theta_q = p_i\,dq^i + c_i\,dQ^i$. But since we started from Lagrangian leaves, there is no $dQ^i$ in $\theta_q$ and thus $\theta_q = p_i\,dq^i$. Expressing $dF = \theta_q - \theta_Q$ in coordinates yields $dF = p_i\,dq^i - P_i\,dQ^i$.

Comparing coefficients we find $p_i = \partial F/\partial q^i$ and $P_i = -\partial F/\partial Q^i$. You will recognize the usual expressions for the momenta in terms of a "generating function" $F(q,Q)$.

What we have done was to take two Lagrangian foliations, given in terms of the $q^i$ and $Q^i$, and compute a function $F$ from them. The trick is now to turn this procedure around: given only the $q^i$ and a function $F$ of these and some $N$ other variables $Q^i$, one can compute the $Q^i$ as functions on $M$: take a point $x\in M$ and define $Q^i(x)$ by inverting the relations $p_i(x) = \partial F/\partial q^i(q(x),Q)$. For this, remember that $p_i$ was defined implicitly above: it is the coefficient of $dq^i$ in $\theta_q$.

Up to here, we have only played symplectic games, independent of any dynamics. Now specify the dynamics in terms of a Hamilton function $h$. Then the Hamilton-Jacobi equations are nothing but the requirement to find a generating function $F$ such that the $Q^i$ are constants of motion.

Even better, by making everything (that is, $h$, $Q$ and $F$) explicitly time dependent, the requirement that the action 1-form is invariant, $\theta_q - h\,dt = \theta_Q - H\,dt + dF$, gives us a transformed Hamiltonian $H = h + \partial F/\partial t$, and we can require this to vanish: $h(q, \partial F/\partial q, t) + \partial F/\partial t = 0$.

If we think of the Hamiltonian $h$ as given in terms of the coordinates $(q^i,p_i)$, this is now a PDE for $F$ which has to hold for all $Q^i$. That is, writing $F$ as a function of $q$, $Q$ and $t$, it has to hold for all (fixed) $Q$ as a PDE in the $q^i$ and $t$.
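As a concrete check of that last statement, here is a small Python sketch for the free particle with $h = p^2/2m$ (my own example, not from the lunch discussion): the generating function $F(q,Q,t) = Qq - Q^2 t/2m$ solves $\partial F/\partial t + h(q, \partial F/\partial q) = 0$ for every fixed $Q$, which we verify with finite differences.

```python
# Free particle, h(q, p) = p**2 / (2m). The generating function
#   F(q, Q, t) = Q*q - Q**2 * t / (2*m)
# solves dF/dt + h(q, dF/dq) = 0 for every fixed value of Q; the new
# coordinate Q (here: the conserved momentum) is a constant of motion.
m = 1.0

def F(q, Q, t):
    return Q * q - Q**2 * t / (2 * m)

def h(q, p):
    return p**2 / (2 * m)

def hj_residual(q, Q, t, eps=1e-6):
    """Finite-difference evaluation of dF/dt + h(q, dF/dq)."""
    dF_dt = (F(q, Q, t + eps) - F(q, Q, t - eps)) / (2 * eps)
    dF_dq = (F(q + eps, Q, t) - F(q - eps, Q, t)) / (2 * eps)
    return dF_dt + h(q, dF_dq)

# The residual vanishes for arbitrary (fixed) Q, as claimed:
for Q in (0.5, 1.0, 2.0):
    assert abs(hj_residual(q=0.3, Q=Q, t=1.7)) < 1e-6
print("Hamilton-Jacobi equation satisfied for all tested Q")
```

Note that the equation is checked pointwise in $(q,t)$ separately for each value of the parameter $Q$, which is exactly the "PDE for all fixed $Q$" structure described above.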

