Thursday, December 21, 2006

Approaching the holiday season

It's there: I started my Christmas holiday. I decided to start it early this year, after spending three weeks managing the technicalities of merging the contributions of 80 professors into a 200 page, 100MB pdf document containing this year's research report of the School of Engineering and Science of IUB. At least we were using TeX and subversion, otherwise the nightmare would have been complete. The submission deadline at the dean's office was on Monday, so after reporting on research I could do some actual research myself or..... go on holiday.

I picked option one for three quite productive days and then moved to option two. Just five minutes before I intended to leave, I received an email from a journal editor requesting a referee report on some paper, due by January 3rd. Just in case I get too bored under the tree...

Now, I am at my parents' place and turned on the computer to check what has happened on the net overnight, and there is a message from Clifford tagging me with a book meme. My instructions:
1. Grab the book closest to you.
2. Open to page 123, go down to the fifth sentence.
3. Post the text of next 3 sentences on your blog.
4. Name of the book and the author.
5. Tag three people.


So, let's do it. 1. done 2. done. 3.:
500g of chestnuts are briefly roasted, then freed of their outer and inner skins and boiled in water until soft; I drain them and pass them through a fine sieve. Then I beat 150g of butter with 150g of sugar, 2 packets of vanilla sugar, a pinch of salt and 3 whole eggs until quite foamy, and add the thick chestnut purée.


4. It's "Backe backe Kuchen mit Erna Horn" by Erna Horn, a recipe book for bakery. The quotation above is from recipe number 242 "Kastanientorte" a maroon cake. 5. is more difficult (given the exponential growth of these kinds of chain letter things). I tried to get out of the string theory circles by handing this over to Georg, Anna, and Amelie.

Tuesday, December 19, 2006

Effectiveness of Symmetry

For some strange reason, my copy of the November issue of "Physics Today" arrived in my mailbox only today, a few days after the December issue. It contains an opinion piece "Reasonably effective: I. Deconstructing a miracle" by Frank Wilczek (available online only to subscribers of Physics Today, unfortunately).

He discusses the famous Wigner quote about the unreasonable effectiveness of Mathematics in the Natural Sciences and comes to the conclusion that it can be traced back to symmetries and locality. Then he writes
Since any answer to a "why" question can be challenged with a further "why," any reasoned argument must terminate in premises for which no further reason can be offered.

Let's nevertheless take this argument a little further, just for the fun of it. Why is the world symmetric and local? From string theory we are used to the fact that at least continuous symmetries are always local, and global symmetries arise only as asymptotic versions of local symmetries. But on the other hand, local gauge symmetries are not really symmetries but just redundancies of a convenient notation. It is possible to rewrite systems with gauge symmetries purely in terms of invariant objects (like e.g. Wilson loops), although that formulation is not particularly simple. It's just that in terms of more fields (like longitudinal polarisations of gauge bosons) the formalism simplifies (e.g. becomes linear), and one has to use gauge transformations to get rid of the unphysical polarisations. Thus saying the world has lots of symmetries really means that the best formalisation of the world that we know of has many redundancies.

Or turned the other way round: it's not really the symmetries which are properties of the theories. For example, people used to point out that GR is diffeomorphism invariant: it keeps its form in any coordinate system. Thus the infinite dimensional group of diffeos makes up the symmetries of GR. But this argument is wrong. Misner, Thorne and Wheeler spend the entire chapter 12 of the Telephone Book demonstrating that you can formulate Newtonian gravity in a diffeomorphism invariant way. This is just a fancy way of expressing what every first year student knows: you are not forced to use Cartesian coordinates to discuss Newtonian gravity, you can use spherical coordinates as well. Thus this theory is also coordinate invariant.

The real difference between Newton's gravity and GR is that in Newton's version there is a covariantly constant one form dt which is a background in the sense that it is not determined by an equation of motion; it is just there, externally given. Therefore what is often called "many symmetries" really means the absence of such background structure.

But still, why is the world symmetric? One possible answer is that amongst spin 1 fields only gauge potentials have renormalisable interactions. Thus, it might well be that at some high, fundamental scale there are many more fields and gauge symmetries are not preferred at all. However, upon RG-flow these other fields decouple completely. Thus, that only gauge interactions remain really just comes from the fact that our energy scales are much lower than the Planck scale.

You could even try to put an anthropic twist on this: if observers require a large number of degrees of freedom, strongly coupled and tuned to critical values (think: neurons in your brain), it is not unreasonable to believe that you have to be quite far from the fundamental scale where all hell of quantum gravity breaks loose. If that were the case, you could argue that observers always see gauge interactions and chiral fermions first, as only these survive the running down to the scales necessary for observing subjects. Of course, there is still the hierarchy problem of why there is a Higgs. And this argument does not tell us why we observe non-abelian gauge theories; U(1)'s would do as well. For this we might have to invoke moduli trapping or similar mechanisms.

Tuesday, December 05, 2006

Farewell hep-th/yymmxyz

In case you did not notice: the arXiv is going to change its naming scheme. This has become necessary as the current scheme only allows for 999 papers per month in each category, and the mathematicians had already reached 989 in November. The new numbering will for example read
arXiv:0701.0001

and there is the possibility to explicitly refer to specific versions as in
arXiv:0701.0001v1

Note that the section (like hep-th) is no longer part of the identifier but it can be added as in
arXiv:0701.0001v1 [q-bio.CB] 1 Jan 2007

I would have liked a more conservative change (like just adding an eighth digit), as now I will have to change a number of regular expressions in programs that are supposed to spot references to the arXiv, like my seminar announcement web suite and my .bib file updater (which uses Spires to find out if a paper has appeared in print).
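For the record, a sketch of what the patterns now have to cover (the exact expressions are made up for illustration and certainly not complete):

use strict;
use warnings;

# old scheme: hep-th/0612345, math.AG/0612345, q-bio.CB/0612345, ...
my $old_id = qr{[a-z-]+(?:\.[A-Z][A-Z])?/\d{7}};

# new scheme: arXiv:0701.0001, with an optional version tag as in arXiv:0701.0001v1
my $new_id = qr{arXiv:\d{4}\.\d{4}(?:v\d+)?};

while (<>) {
    print "$1\n" while /($old_id|$new_id)/g;   # report every identifier found
}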

Monday, November 27, 2006

Coherent states

If you are not working on quantum optics, you might be in a similar situation as I was a couple of days ago regarding coherent states: you had encountered them in a homework exercise on the harmonic oscillator where you had to prove that they are eigenstates of the annihilation operator and have minimal uncertainty. And you know that they are important to quantum optics. At least this was my state of knowledge until very recently. Since then, I have read this review (with which I do not agree in all parts) and have given the subject some thought, and I would like to share my current understanding.

Let's suppose we have two hermitean operators A and B and want to find states such that the uncertainty is minimal. To this end, let's briefly go through the derivation of the uncertainty relation: you take any state $\psi$ and from it form the new state $\phi_A = (A - \langle A\rangle)\psi$, and similarly $\phi_B = (B - \langle B\rangle)\psi$, where $\langle A\rangle$ is the expectation value of A in $\psi$. For these two new states, you use the Cauchy-Schwarz inequality (which basically says $|\langle\phi_A|\phi_B\rangle| \le \|\phi_A\|\,\|\phi_B\|$), expand and find

$\Delta A\,\Delta B \ge |\langle\phi_A|\phi_B\rangle|.$

Finally, we use that the absolute value of a number is greater than or equal to its imaginary part and realise that, as A and B are hermitean, the imaginary part of $\langle\phi_A|\phi_B\rangle$ is $\frac{1}{2i}\langle[A,B]\rangle$, to arrive at

$\Delta A\,\Delta B \ge \frac{1}{2}\,|\langle[A,B]\rangle|,$

which is the usual uncertainty relation.

If you want to rest for a minute, think about the following puzzle: consider a particle on a circle (or on the unit interval with periodic boundary conditions). Take the wave function $\psi(x) = e^{2\pi i x}$ and compute that for this state $\Delta p = 0$ while $\Delta x$ is finite, so $\Delta x\,\Delta p = 0$. This seems to clash with what we just derived. Where is the flaw? (Hint: see a previous post about quantum mechanics)

Back to the main argument. We want to find a state which saturates the inequality. To have that, we have to saturate it in the two places where we used inequalities: the Cauchy-Schwarz step and the 'absolute value at least imaginary part' step. Cauchy-Schwarz is saturated (the scalar product is maximal for vectors of fixed length) if the two vectors are proportional to each other, that is if there is a complex number $\lambda$ such that $\phi_A = \lambda\phi_B$, in our case $(A - \langle A\rangle)\psi = \lambda(B - \langle B\rangle)\psi$. We can rearrange that to $(A - \lambda B)\psi = (\langle A\rangle - \lambda\langle B\rangle)\psi$, that is, $\psi$ has to be an eigenvector of the operator $A - \lambda B$. Furthermore, for the absolute value of a number to be equal to its imaginary part, the number has to be purely imaginary. In our case, this means $\langle\phi_A|\phi_B\rangle$ has to be purely imaginary, which can only be the case if $\lambda$ is purely imaginary.

So we found that the uncertainty of the operators A and B in a state is minimal if the state is an eigenvector of an operator $A + i\mu B$ with real $\mu$.

So much for the general theory. Now we can specialise to the usual case A=x and B=p and conclude that states of minimal uncertainty are eigenstates of $x + i\mu p$ for some real $\mu$. Note that so far we have not talked about the harmonic oscillator at all. We have just picked two operators and asked for states in which they have minimum uncertainty. This was a question at the level of Hilbert space operators and we did not specify any sort of dynamics.

Thus, coherent states are not about the harmonic oscillator at all. It just happens that they are eigenstates of annihilation operators for some harmonic oscillator. Above, any real $\mu$ does the job, and $\mu$ translates directly into the frequency of the oscillator: what people call "squeezed states" are just coherent states for a different $\mu$ that can in a similar way be related to the annihilation operators at different frequencies.
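To make the dictionary explicit, a small check (in units $\hbar = m = 1$, with the textbook normalisation): the annihilation operator of an oscillator with frequency $\omega$ is

$a_\omega = \sqrt{\tfrac{\omega}{2}}\left(x + \tfrac{i}{\omega}\,p\right),$

so an eigenstate of $x + i\mu p$ is precisely an eigenstate of $a_\omega$ for $\omega = 1/\mu$. Normalisable eigenstates exist only for $\mu > 0$, i.e. for annihilation rather than creation operators.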

This so far is my current understanding. In the above mentioned review there is another generalisation, which does involve dynamics, that I do not yet fully understand. It somehow splits a Hamiltonian into sums of products of 'elementary' operators and then considers the Lie algebra generated by these elementary operators under commutators. Then you exponentiate this algebra to a group and consider the orbit of the ground state of that Hamiltonian under the action of this group. The part I do not yet understand is how physical this is and how the different choices on the way (the set of elementary operators, for example) influence the result.

Wednesday, November 15, 2006

Science and arXiv

I was made part of the team that assembles our school's research report, involving contributions from all faculty members including references to their publications. My job is mainly merging all contributions into a single LaTeX document and managing the references. We decided to do them in BibTeX, so we asked all faculty to provide a list of their papers in BibTeX format. The idea was of course that only a tiny part of this data would have to be typed, as most literature databases (such as Spires for our field) provide data in this format, and other programs like Endnote can export BibTeX as well. Thus with a bit of cut&paste the job would be easy.

I had not expected this amount of computer illiteracy amongst science professors. OK, I knew that most biologists do not use TeX for their papers. But I must admit that I was impressed to receive an MS Word document containing BibTeX entries, with, for example, all the title="..." fields set in italics. That is not to speak of the many complaints I received about people having to retype their references. Plus, the concept of separating content and layout is completely alien to many.

But what I really wanted to talk about is this: I learned during one of these discussions with an experimental surface physicist (who, by the way, keeps his references typed in a Word document) why he does not submit his preprints to the arXiv: he told me Science does not accept papers which are in electronic archives other than their own! I find this completely ridiculous and could not believe it, since at least my only Science paper (btw my first paper at all and still the one with the most citations, although not in high energy) had a preprint on the arXiv. But it seems he is right, at least as far as the current policy is concerned.

Please, please, somebody tell me this interpretation is not true and the greed of the AAAS is not in the way of good scientific practices.

Friday, November 10, 2006

Paper cranes

Yesterday, I served as jury member for the exciting physics competition which is part of the WellenWelten ('wave worlds') physics exhibition in Bremen.

Children (possibly in teams) aged 10 to 19 could choose from six construction tasks (announced two months earlier) and present their results in the Congress Centre.

I had to judge paper cranes. The task was to use only paper, glue, sand and twine to build a crane. It was allowed to touch the table only within an area of A4 size and had to be able to hold a 400g weight 40cm above the table and 25cm in front of its base. Furthermore, the crane had to be stable both with and without the weight. Within these constraints, the task was to build the crane as light as possible.

We had to judge the cranes not only on their stability and weight but also on design, presentation and construction. The level of the 15 submissions was very high and it was not easy to determine the winners. In the end we settled for this crane



which was constructed by two girls from 9th grade (13 years). The three runners up are



.

You find all the pictures here (two pages). Red t-shirts indicate participants and their teachers and dark blue is the jury.

Wednesday, November 08, 2006

Seminars while you drive

If I drive longer distances and get bored of listening to the radio, I love audio books. Too bad they are usually quite expensive. But I discovered an alternative: listening to seminars.

A good place to start is the KITP, for example their Blackboard Lunches.

The audio formats they offer are RealAudio and iPod. As I do not own one of those white gadgets, I have to use the other option. My car radio can play mp3s (from its USB port or from CDs). So I have to convert the .rm files to mp3. Here is how you can do that (so you don't have to spend as much time on it as I did):

In case it is not done already, install mplayer.

When you call it like
mplayer -quiet -vo null -vc dummy -af volume=0,resample=44100:0:1   -ao pcm:waveheader http://online.itp.ucsb.edu/download/bblunch/dine.rm

it will create a file audiodump.wav which in turn you can convert to an mp3 with bladeenc.
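If you want to convert several talks in one go, a small Perl wrapper around these two commands does the trick. This is only a sketch: the second URL is made up, and the bladeenc invocation may need adjusting to your installation.

#!/usr/bin/perl
use strict;
use warnings;

my @talks = qw(
    http://online.itp.ucsb.edu/download/bblunch/dine.rm
    http://online.itp.ucsb.edu/download/bblunch/another_talk.rm
);

for my $url (@talks) {
    (my $name = $url) =~ s{.*/}{};     # basename, e.g. dine.rm
    $name =~ s/\.rm$/.mp3/;
    system('mplayer', '-quiet', '-vo', 'null', '-vc', 'dummy',
           '-af', 'volume=0,resample=44100:0:1',
           '-ao', 'pcm:waveheader', $url) == 0 or die "mplayer failed";
    system('bladeenc', 'audiodump.wav', $name) == 0 or die "bladeenc failed";
    unlink 'audiodump.wav';
}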

After you've done that for a couple of talks, use
mp3burn *mp3
to write the seminars to a CD and you can hit the road, Jack!

Update: Of course you don't use mp3burn, as that would produce an ordinary audio CD with at most 72 minutes playing time. You want a data disk with the mp3s on it, so you really want to use a program like xcdroast.

Tuesday, November 07, 2006

Two Sudoku Problems

Here are two Sudoku meta-problems I have been thinking about for a while.

The first is about a normal form for a Sudoku grid. The rules of Sudoku have a huge symmetry group of order (3!)^8 x 2 x 9! = 1.22E12. It is generated by permutations of the rows within a group of three rows, permutations of these groups, the same for columns, transposition of the grid, and permutations of the numbers 1,...,9 (which are just labels for nine different things). So for each grid, there are 1.22E12 grids which are basically the same. Is there an easy way to determine whether two puzzles are related by the symmetry and, even better, is there a normal form (a distinguished element in each orbit)?

There is a trivial solution to this problem: you just write out all 1.22E12 puzzles you obtain by acting with the symmetry group and then sort them according to some lexicographic ordering; the first one is the normal form.
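To make the group action concrete, here is a sketch of how one random element of the symmetry group acts on a grid (assuming the grid is stored as an array of nine array refs with 0 for empty squares; this is illustration code, not my solver):

use strict;
use warnings;
use List::Util qw(shuffle);

sub random_symmetry {
    my @g = map { [@$_] } @_;                   # deep copy of the grid
    # permute the three bands and the rows within each band
    my @rows = map { my $b = $_; map { 3 * $b + $_ } shuffle(0 .. 2) } shuffle(0 .. 2);
    @g = @g[@rows];
    # the same for columns
    my @cols = map { my $b = $_; map { 3 * $b + $_ } shuffle(0 .. 2) } shuffle(0 .. 2);
    @$_ = @{$_}[@cols] for @g;
    # transpose with probability 1/2
    if (rand() < 0.5) {
        my @t;
        for my $r (0 .. 8) { $t[$_][$r] = $g[$r][$_] for 0 .. 8 }
        @g = @t;
    }
    # relabel the symbols 1..9
    my %relabel;
    @relabel{1 .. 9} = shuffle(1 .. 9);
    for my $row (@g) { $_ = $_ ? $relabel{$_} : 0 for @$row }
    return @g;
}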

What I am asking for is a more direct construction, one which sounds more like: use the 9! permutations of the symbols to make the first row read 123456789. Then use row permutations to make the square below the 1 as small as possible. Then use row permutations to make the next squares under that square as small as possible... Unfortunately, starting to permute columns screws up what was achieved with this ordering prescription. So how would a better algorithm read?

The other problem is how to rate the difficulty of a puzzle. Question one really applies after the puzzle is solved; this question is about puzzles still to be done. Newspapers which publish these puzzles often give ratings such as "simple", "intermediate", "hard". But I found that these ratings not only differ significantly between papers (what Zeit Online considers hard is much, much easier than what The Guardian calls a hard sudoku) but are also not consistent amongst themselves.

Earlier, I have talked about the Perl program I wrote to solve Sudokus. It recursively figures out for all squares which numbers are still allowed, then takes the square with the fewest allowed numbers and tries to put these numbers there. If there is an empty square with no allowed numbers remaining, it backtracks.

Thus the search can be represented by a tree where each node represents a square to be filled and there are as many branches from that node as there are numbers which are not yet ruled out. What I am looking for is a numerical rating for a puzzle which is a predictor of how hard I find the puzzle, one that for example correlates with the time it takes me to solve it. Even if I use a different strategy when doing these puzzles by hand, I would expect that this information could be obtained from the tree. Do you have any good idea for such a function from trees to, say, the reals? Obviously the trees all have a depth given by the number of empty squares in the puzzle, and each node can have at most nine branches but typically has far fewer (even for "hard" puzzles most of the nodes have only one branch).
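For concreteness, the core of such a solver fits into a few lines of Perl. This is a minimal sketch rather than my actual program, and its node count need not exactly reproduce the numbers quoted below, as that depends on what exactly one counts as a node:

#!/usr/bin/perl
use strict;
use warnings;

my @grid;                                  # 9x9, 0 marks an empty square
while (<>) {
    chomp;
    next unless /\S/;
    push @grid, [ map { /[1-9]/ ? $_ : 0 } split // ];
}

my $nodes = 0;

sub candidates {                           # numbers still allowed at ($r,$c)
    my ($r, $c) = @_;
    my %seen;
    $seen{ $grid[$r][$_] }++ for 0 .. 8;   # row
    $seen{ $grid[$_][$c] }++ for 0 .. 8;   # column
    my ($br, $bc) = (3 * int($r / 3), 3 * int($c / 3));
    for my $i (0 .. 2) {                   # 3x3 box
        $seen{ $grid[$br + $i][$bc + $_] }++ for 0 .. 2;
    }
    return grep { !$seen{$_} } 1 .. 9;
}

sub solve {
    my ($best, @cand);                     # empty square with fewest options
    for my $r (0 .. 8) {
        for my $c (0 .. 8) {
            next if $grid[$r][$c];
            my @here = candidates($r, $c);
            return 0 unless @here;         # dead end: backtrack
            ($best, @cand) = ([$r, $c], @here)
                if !defined $best or @here < @cand;
        }
    }
    return 1 unless defined $best;         # no empty square left: solved
    my ($r, $c) = @$best;
    for my $n (@cand) {
        $nodes++;                          # one node per tried number
        $grid[$r][$c] = $n;
        return 1 if solve();
        $grid[$r][$c] = 0;
    }
    return 0;
}

print solve() ? "solved" : "no solution", " after $nodes nodes\n";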

An easy guess is of course the number of nodes or the number of leaves, but I found those at least not to be proportional to my manual solution time. To give you an idea: today's hard puzzle from Die Zeit
..573.864
.4...8...
.83.....1
71....2..
3..6921.7
4...7..9.
....4.97.
..6..5..8
.......1.

has 52 nodes (four times the program encounters a situation with two possibilities, all others are unique or dead ends; manually it took me exactly 6:30) while
.98......
....7....
....15...
1........
...2....9
...9.6.82
.......3.
5.1......
...4...2.

has 2313 nodes and took me well over an hour some months ago.



Of course, if early on you have several possibilities and learn only much later which ones do not work, this is much worse than having many possibilities which are ruled out immediately.

UPDATE: In case anybody is interested, I put up the decision tree for the difficult puzzle.

Wednesday, October 11, 2006

Cheap quantum cryptography

Quantum information theory is a fascinating subject. By applying the simple computational rules of quantum mechanics it is often possible to process information much better than with purely classical devices. A famous example is Shor's algorithm for the factorisation of integers, relevant for breaking many popular public key encryption schemes such as RSA. There the speed-up is superpolynomial; for unstructured search, Grover's algorithm gains "only" a square root, but even that can be substantial. Formal computer scientists amuse themselves by investigating how the many complexity classes change if you have access to some quantum computations; the Complexity Zoo gives a nice overview.

A beautiful introduction to the subject are the lecture notes by John Preskill. More formal aspects (which quantum machines can be constructed, how the impossible quantum copier differs from possible devices, all in the language of operator algebras) are treated in the notes by Reinhard Werner.

However, all this, much like string theory, is mostly theory and does not yet have real world applications. As far as real experiments go, IBM was able to factor 15 with a quantum computer in 2001.

Another potentially practical area of application is cryptography: it is possible to construct quantum channels that are immune to eavesdropping: if somebody in the middle listens in, the information does not reach the intended recipient anymore. This has been demonstrated in a real experiment by Anton Zeilinger's group: they managed to transmit the key of a classical encryption scheme via entangled photons (with a bit rate of 400-800 bit/s and 3% error).

This still involves a considerable experimental set-up, and remember: you only transmit the keys, as the quantum channel is by far too slow to transmit the real data (you probably read much faster than 400 bit/s). But there are cheap alternatives (cheap not in the theoretical but in the practical sense) which are just as impossible to crack: one-time pads. Assume I have 1GB of data which I want to transmit to you securely. All I need is 1GB of random numbers which I share with you beforehand; then I xor the data with the random numbers and transmit the result (which is just noise for anybody in between), and you xor the encrypted message once more to recover the original data.
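In Perl, the scheme is essentially a one-liner around the bytewise string xor operator (a minimal sketch; the file names are made up, and decryption is the identical operation with the same pad):

#!/usr/bin/perl
use strict;
use warnings;

local $/;                                        # slurp whole files
open my $mfh, '<:raw', 'message.bin' or die $!;
open my $pfh, '<:raw', 'pad.bin'     or die $!;
my $message = <$mfh>;
my $pad     = <$pfh>;
die "pad too short\n" if length($pad) < length($message);

my $cipher = $message ^ substr($pad, 0, length $message);
binmode STDOUT;
print $cipher;                                   # xor once more to decrypt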

This is not elegant: since we have to share the random numbers beforehand, we have to share as many bits as we want to transmit. But for many practical applications this is easily possible (the headquarters of some company that wants to transmit construction plans to a factory, a spy who wants to phone home, you name it): cheap small harddrives have more room than all the information you would like to transmit secretly in all your life. When you build the factory you just bring the harddrive there in a sealed box, and practically all future communication is secure; the spy carries a sealed usb-stick and has more shared randomness than all the secret pictures he is going to take and all the reports he has to send. This is not elegant but dead cheap and efficient.

And if you like you can even produce the randomness using quantum physics to make it physically safe: For example you could sample the timing of the ticks of a Geiger counter and have pure quantum randomness.

Here is a small homework: take t1, t2,...,tN to be the times of N clicks. By themselves, they are not uniform random numbers; rather, the differences between successive clicks follow an exponential distribution. Assume that the ti are specified with B bits each, so the possible time resolution is dt, and the decay constant of the probe is lambda. How many bits of pure randomness can you extract and how do you do it?

Tuesday, September 26, 2006

Admitting my ignorance

I don't know what your attitude towards rigorous functional analysis is, but mine could be summarised as: "I know it exists. There are subtleties, but they don't bite as long as you are not asking for it. So, for everyday quantum mechanics, it's enough to remember that operators are not just matrices and there might be convergence issues (otherwise taking the trace would show that [x,p]=i cannot work)."

I knew that most of the time we are dealing with unbounded operators, which are thus not continuous, and that mathematicians might worry about their domains of definition (which can only be a proper subset of the Hilbert space), but if you do the natural things (and implicitly work on the proper dense subset of the Hilbert space), you will be OK. Furthermore, the spectrum of an operator can be a bit tricky: everybody knows that the 'eigenfunctions' of, for example, the momentum operator are plane waves, which are not square integrable, and similarly eigenfunctions of x are 0 as elements of L^2. But every child knows that the proper definition of the spectrum of A is those z for which (A-z) is not invertible, and any physics argument involving eigenfunctions can be made precise using wave packets which are not exactly eigenfunctions; if one wanted to, one could control the error, and after a long and messy argument prove what the physicist had known right from the beginning.

But, as I have learned, sometimes the subtleties are also physically relevant. The first time I realised this was in my oral diploma exam: I was asked to discuss a particle in a piecewise constant potential (and compute reflection and transmission coefficients etc.), and why I picked particular boundary conditions for my wave function at the jumps of the potential. Luckily, instead of parroting what I had read in some textbook ('the probability current has to be continuous so no probability gets lost'), I had one of my very few bright moments and realised (I promise I came up with this myself, I had not heard or read it before) that this comes from requiring the Hamiltonian (esp. the kinetic term) to be self-adjoint: if you check this property, you have to integrate by parts, and the boundary terms vanish exactly if you assume the appropriate continuity conditions for the wave function.

More recently I learned when the distinction between continuous and point spectrum is physically important: long ago, in some advanced quantum mechanics class, we were shown some strange, seemingly unmotivated calculation with a random potential which after some time showed that the eigenfunctions of the Hamiltonian have exponential decay. And "thus, even with arbitrarily small randomness the conductor turns into an insulator." I had never quite understood how this calculation was supposed to imply this conclusion. Only a few months ago I understood, in a seminar by Hajo Leschke, that what was really meant was "with probability 1 the spectrum is pure point spectrum and thus there are no scattering states". For more information check out a PhysRept by Fröhlich and Spencer or, if you are particularly brave, the discussion of the RAGE theorem in Reed-Simon vol. III.

But this is not what I came to tell you about. I came to tell you that yesterday over lunch I was reading quant-ph/0609163 by H. Nikolic about myths and facts in quantum mechanics. I could comment on many sections, but one particular argument struck me. It goes back to Pauli and shows that if your Hamiltonian is bounded from below, there is no time operator.

Here I will give you a slightly modified version: consider quantum mechanics on the half line $x \ge 0$. In the zeroth approximation you would take $L^2([0,\infty))$ as your Hilbert space. Obviously, in this space x is a positive operator. From the above reasoning it follows that you probably want to ask your wave functions to vanish at 0, as otherwise p is not symmetric: integrating by parts,

$\langle\phi|p\psi\rangle - \langle p\phi|\psi\rangle = i\,\overline{\phi(0)}\,\psi(0).$


Now take some wave function $\psi$ such that the expectation value of x is finite, say $\langle x\rangle_\psi = a > 0$. Now, you can convince yourself that by applying the translation operator $e^{2ia\,p}$ (which shifts wave functions by $2a$ towards the origin) you produce a new state for which x has the negative expectation value $-a$!

How did that happen, wasn't x supposed to be a positive operator? The solution can be found in chapter 2.5 of Thirring's textbook vol. 3 (no link from Amazon), as pointed out to me by Wolfgang Spitzer. It really comes from the functional analysis fine print: p is only symmetric but not self-adjoint. The domain of definition of the adjoint $p^\dagger$ is strictly larger, as it does not require the vanishing condition at 0! And there is no self-adjoint extension of p at all. Therefore you cannot form the translation operator: it is not defined.

Another way to see this is to realise that for $e^{2ia\,p}$ you need all powers of p, and those are only symmetric (vanishing boundary terms) if actually all derivatives of the wave function vanish at 0. And as the translation operator works nicely only on analytic functions (after all, it's just the Taylor series), that requirement does not leave us with too many functions.

Therefore you really have to worry about the finer points of functional analysis not to translate wave packets to where they should not be!

Friday, September 15, 2006

While you wait: web 2.0

There is some physics in the pipeline but unfortunately not yet in a state to be discussed at large. So while you wait (hopefully), let me point out an article in this week's Zeit (in German, of course) which I find a nice summary of the implications of web 2.0. And this wouldn't be web 2.0 if I did not offer you some music videos from YouTube to listen to while you read (mainly Michael Brecker, Mike Stern and Keith Jarrett).

Thursday, August 31, 2006

Ahrenshoop update

As mentioned earlier, I would like to write about a few of the talks here at the Ahrenshoop conference. Let's see how far I get before the afternoon sessions start.

The first talk I would like to mention is Matthias Gaberdiel's about closed string moduli influencing open string moduli. As an example, consider strings on a circle. Generically, you can have D0 and D1 branes. D0 branes sit at a point on the circle and correspond to Dirichlet boundary conditions for open strings, while D1 branes wrap the circle and correspond to Neumann conditions. However, if the circle has exactly the self-dual radius (fixed under T-duality), the generic U(1) symmetry is enhanced to SU(2) (at level 1, to be specific), and thus there is a full SU(2) worth of D-branes. A similar thing happens if the radius is rational in string units, R=M/N R_sd say. Then besides the generic branes there are SU(2)/Z_M x Z_N branes.

Thus the spectrum of branes depends critically on R. But R itself is a closed string modulus! You can change it by exciting closed string fields, and the obvious question is what happens to the additional branes if you tune the radius away from the special values. Matthias and friends worked out the details and found that, because of a bulk-boundary two point function, in the presence of the special D-brane the operator changing the radius is no longer marginal. Thus changing the radius kicks off an RG flow which they can in fact integrate, and they show that the special brane decays into either a D0 or a D1 brane depending on whether the radius is increased or decreased. They back up all this prose with calculations which are quite neat, and they treat more general cases. So, go and read their paper!

The next talk I would like to report on was by Niklas Beisert about the spin chain/integrability business. I must admit, in the past I was not following these developments closely and was quite confused. People wrote papers and gave talks reporting that they had done more and more loops for larger and larger subgroups and compared that to many different stringy calculations. But I was lost and had no real idea about where the real progress was happening.

Now Niklas seems to have cleared up a lot of the supergroup theory and the dust has settled considerably. He presented the situation as follows: both on the gauge theory and on the string side, the dilatation operator seems to be integrable in the sense that the S-matrix factorises into products of two particle S-matrices. As both sides have N=4 susy, the superalgebra SU(2,2|4) is a symmetry and it seems to restrict this two particle S-matrix considerably: the dispersion relation with the square root and the sin is completely fixed by the symmetry, and the S-matrix is determined up to a scalar function (diagonal in flavour space). Thus, everything except this function is kinematics, and the function contains all the dynamics.

The gauge and the string side of things are different expansions of this function (one from the weak and one from the strong coupling side). On the gauge side, the function vanishes to all perturbative orders that have been worked out, while on the string side the function vanishes at low orders but is non-zero from coupling^3 on. This explains why up to two loops the matching worked (it just tested the kinematics) and why there are discrepancies from four loops on (where the function starts to matter). I should add that this is not lethal to AdS/CFT, since you should not expect a function expanded around two different regimes to look the same.

Chairwoman just clapped hands, have to go.

Short update:
Internet connectivity is a bit tricky here, since all connections go through two ISDN lines and some people use them for Skype, effectively stopping connectivity for everybody else. But let me just add to the report on Niklas' talk that the strong conclusions he can draw from the group theory can be traced back to the unusual fact that for this supergroup the tensor product of two fundamental representations is itself irreducible, thus there is no branching. I should also have mentioned that Niklas and friends have a guess for the exact form of that dressing function involving Gamma functions and Betti numbers.

Second update: The paper by Gaberdiel and friends is out.

Wednesday, August 30, 2006

Re: Re:

I am currently attending the 38th Ahrenshoop Symposium, a conference with quite a history: in cold war times it was one of the few opportunities for GDR physicists to invite western colleagues and discuss high energy physics with them. There have already been a number of very interesting talks, but those might be covered in a later post.

Right now (while I should better be listening to Yaron Oz telling us the latest about pure spinors) I feel a certain need to say one or two words about hep-th/0608210, which comments on Giuseppe's and my paper of two years ago.

Thomas accuses us of drawing wrong physical conclusions from a correct mathematical calculation. He refers to our discussion of the harmonic oscillator in the polymer Hilbert space. It is not about the fact that there only the ground state is stationary, or that time evolution is not continuous, or that formally that state is a state of infinite temperature. All these things still hold true.

All these might look a bit formal. So, how can you determine whether a different version of an oscillator is physically different from the usual one? You might say "I simply check the spectrum". But that does not work, as the alternative does not have the operator you would like to compute the spectrum of. But I hear you cry "that shows that it's screwed, I can observe that spectrum, for example as the optical absorption spectrum of molecular vibrations". Unfortunately, that's not true if you just have the oscillator: you would have to couple it to the radiation field, and thus the full system is interacting and much more complicated. Thus we didn't take that route in our paper.

Our alternative was to define a family of operators H_e such that you would formally obtain the Hamilton operator as H_0 if that limit existed (as of course it does not in the polymer case), and to show that it has unusual properties as e goes to zero (for all e the expectation value is 0, but the variance goes like 1/e^2 for almost all states).

So, what does Thomas now say about this? He proposes to restrict attention to a finite dimensional subspace of the Hilbert space (the 'nonrelativistic' states), say of dimension n. In this subspace, there are only n^2 independent observables (a finite number!), given by the n x n hermitean matrices. Then you compute the expectation values of these n^2 observables in the original Fock space. Finally you employ a theorem that tells you that in any Hilbert space you can find a density matrix that, for a finite list of observables, gives you expectation values not further off than a given delta.

In other words, if I tell you which finite number of observations I am going to make and which values I expect, then you can cook up a state in any Hilbert space that gives these values to any precision.

Note however that the state is chosen after I tell you which observations I am going to make. If I make only one unplanned observation, you will get different answers, or you have to readjust the state.

Thus, Thomas argues that if I tell him beforehand what I plan to observe, he can prepare a state in any Hilbert space such that it looks like my favourite Hilbert space.

OK, we could proceed along those lines. Mankind will only ever make a finite number of observations (including, for example, various clicking patterns in particle detectors and the temperature in your office), thus all we need is a finite list of numbers. Thus, in the end, any theory of everything just boils down to this list of numbers. All the rest (Lagrangians, branes etc.) is just mumbo jumbo!

As always, make up your own mind!

I would really like to hear other ideas for mathematical representations of observations that show that we know what a harmonic oscillator looks like!

Before I forget: all this does not touch the main part of the paper: in exactly the critical dimension you don't have to rely on these weakly discontinuous representations of the operator algebra, because exactly there there are continuous representations in terms of the usual Fock space, even if that breaks half of the diffeos spontaneously so that those have to be represented in a non-trivial way. We just suggest that for 'good' theories this should be possible and then try to work out the physical consequences you have to face otherwise.

Thursday, August 17, 2006

Scaling of price of margarine

Often people think that physicists have to remember a lot of formulas: one for how to compute the resistance if you know the current and the voltage, and another one for how to compute the voltage from the resistance and the current. If they are slightly more educated they realise that you only have to memorise R=U/I and algebra does the rest.

But actually, even that is not true. The way to think about Ohm's law is really to realise that for an ohmic resistor the current is proportional to the voltage. And if you want, you can call the constant of proportionality resistance (or conductance if you think of it the opposite way). This is the important part of Ohm's law, just like the important part of Newton's law of gravity is its 1/r^2 dependence (at least in 3D; a bit later you realise that this is just an expression of the analogue of Gauss' law), or that in string units the radius of the M-theory circle is proportional to g_s (keeping alpha' fixed), as the mass of a D0 is proportional to 1/g_s. Knowing how things scale is enough in most cases, rather than knowledge of a formula.

So, let's apply this to an everyday situation. I am slightly worried about my weight, so I buy Lätta margarine in the supermarket. It comes in two package sizes, 250g and 500g. Let's take the prices from here: you pay 0.85 Euro for 250g and 1.35 Euro for 500g. Obviously, I buy the bigger package, as I pay less than twice the money for twice the margarine.

But wait, can we compute how this price comes about? Let's assume the price consists of a price for the package and a price for the actual margarine. Of course, the price of the margarine is proportional to the amount M of margarine. The price of the package is likely to be proportional to the surface area of the margarine, so it scales like M^(2/3). Thus the total price is something like

P = M value + M^(2/3) package
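Explicitly, measuring M in kilograms, the two package sizes give a linear system for the two unknowns:

0.85 = 0.25 value + 0.25^(2/3) package
1.35 = 0.50 value + 0.50^(2/3) package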

Plugging in the two prices for the two sizes, we can solve for "value" and "package". We find that the price of the margarine itself is -18.7 cents per kg. That's right, it has a negative price, just like, for example, nuclear waste. This opens up great possibilities: for example, we can work out that 1.76 metric tons of margarine together with its package costs exactly nothing. Or, if I accept taking ten tons, Unilever will pay me 821.33 Euros! I see another get rich quick scheme coming up.

Tuesday, August 15, 2006

Finite Group of Order Two

In case you have not yet seen this:



by The Klein Four via Alien Ted.

Tuesday, August 01, 2006

FAZ

The Frankfurter Allgemeine Zeitung has an article on Peter Woit's book but starts out with a portrait of Lubos. Not too bad and entertaining to read (in German).

Wednesday, July 26, 2006

Where have all the trackbacks gone?

I just tried to send trackback pings for the previous posts. First I realised that there are no longer trackback links at the arXiv (because of this discussion?) and then golem.ph.utexas.edu explained to me (via Haloscan)
Problem: Server said 'You are not allowed to send TrackBack pings.'

Too bad.

Mastering anomalies?

After a long and not really fruitful discussion over at Jacques', I had a look at Thomas Thiemann's (with K. Giesel) latest opus magnum (which comes with two further parts). I did not get very far into the introduction, as already on page four he mentions an interesting trick: the Master Constraint.

Before I say what it is, let me introduce a bit of the background. We are in the context of theories with gauge invariances, which are, as everybody knows, redundancies in the degrees of freedom. One might think that in quantising the theory one should directly work with the gauge invariant observables, but this is often not done, since the description with gauge invariances often has a much simpler structure, e.g. the space of connections is affine, as discussed elsewhere.

The price one has to pay is the gauge invariance one drags around and which one has to mod out after the quantisation. This can turn out to be impossible, and for chiral theories it generically is. Of course, I have just described the fact that the theory is anomalous.

Let me discuss this in a concrete example, the bosonic string, in which the Virasoro algebra plays the role of the gauge invariance. In this example, the modes of everything are labelled by integers rather than by continuous variables (as, for example, for the axial anomaly), so there are fewer pitfalls from integrals etc. to avoid. Plus, we have discussed this case in detail in our paper, so I don't have to repeat myself too much (for this discussion ignore all the parts on polymer states and the LQG way of doing things, just focus on the mathematical description of what one usually does: Fock space, Gupta-Bleuler etc.).

The task in quantisation is to turn the classical algebra of observables (functions on phase space) into a quantum algebra with representations on Hilbert spaces. The problem is that the classical algebra is a Poisson algebra with two multiplicative structures, the usual (pointwise) product of functions and the Poisson bracket. Both are supposed to map into a single product in the quantum algebra, such that the Poisson bracket becomes the commutator for that product. Already in quantum mechanics of a single degree of freedom you know that this does not work exactly, but only "up to higher order h-bar terms".

What does this mean in practice? The usual procedure is to take a subset of observables (typically coordinates of the phase space, or x and p and 1, or the field and its canonical momentum) which have simple Poisson brackets and which generate the classical algebra under the pointwise product. Then one 'promotes' them to operators such that the 'Poisson bracket goes to commutator' rule holds exactly. For all the other observables, one fixes a way of writing them in terms of the simple ones (aka one fixes an operator ordering prescription) and uses this and the product in the operator algebra to define their quantum versions.

Now, what about the gauge symmetry? In the classical theory, Noether's theorem tells us that all symmetries are inner, that is, for each symmetry transformation there is a function on phase space which generates it via Poisson brackets. Furthermore, the group relations for the transformations map to Poisson brackets of the generators.

In the quantum theory, you now have to deal with the gauge symmetry. There are two slightly different ways of saying what goes on. The first is quite abstract and is the one we used in the LQG string paper: you take your quantum algebra as above and have your symmetry act on it by automorphisms $\alpha_S$. Now, in a representation $\pi$ on a Hilbert space, you demand that these automorphisms are implemented by unitary operators U(S) (for a gauge transformation S). There is no direct way to obtain these; educated guessing is probably best. The property you demand is that when A is in the algebra you have $\pi(\alpha_S(A)) = U(S)\,\pi(A)\,U(S)^{-1}$.

This implies that the U(S) nearly implement the group law: $U(S_1)U(S_2) = e^{i\phi(S_1,S_2)}\,U(S_1S_2)$, where $e^{i\phi(S_1,S_2)}$ is a phase. Of course, the above consistency condition for U(S) does not change if you change U(S) by a phase. The question is whether you can find a consistent assignment of phases for all the U(S) such that all the $\phi(S_1,S_2)$ go away. If this is impossible, you have an anomaly.
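In more mathematical terms (my gloss, not the paper's): the phases $\phi$ form a 2-cocycle on the gauge group, and a redefinition $U(S) \to e^{i\alpha(S)}U(S)$ shifts it by a coboundary,

$\phi(S_1,S_2) \mapsto \phi(S_1,S_2) + \alpha(S_1) + \alpha(S_2) - \alpha(S_1S_2),$

so the anomaly is the statement that the class of $\phi$ in the second group cohomology is non-trivial.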

In the other approach you use your knowledge of the classical symmetry generators and quantise them like all the other functions on phase space. Often, as in the case of the bosonic string, they are quadratic in the basic fields which you quantised directly. This implies that the ordering ambiguity is just a complex number (imaginary for anti-hermitean generators). Again, the difficult step is to find an assignment of these numbers such that the group law holds in terms of commutators.

If you don't succeed, you could subtract the left hand side from the right hand side of your expression for the commutator and impose the physical state condition that this anomaly (a non-zero complex number) annihilates physical states. This condition of course immediately empties your physical Hilbert space and you are left with nothing.

So the upshot of all this is: In this canonical quantisation approach, the way the anomaly manifests itself is in the inability to get the quantum symmetry algebra to work.

Now we come to the Master Constraint Trick: assume we write all our symmetry generators as C_i for i in some index set (let's not worry for a second that this will be infinite in the examples, so that one should worry about the existence of the sums). Then form the Master Constraint $M = \sum_i a_i\,C_i^\dagger C_i$ for some positive numbers a_i.

As you can see, M annihilates a state exactly if all the C_i annihilate the state, since $\langle\psi|M\psi\rangle = \sum_i a_i\,\|C_i\psi\|^2$. Thus this one constraint contains all the other constraints! Even better, as we only have one constraint, the constraint algebra is trivial, and for obvious reasons it also holds in the quantised version.

One is of course not yet done, as again the kernel of M could be empty and the spectrum of this positive operator could be bounded away from zero. But Giesel and Thiemann instruct us what to do:
This can be cured by subtracting from the Master Constraint the minimum of the spectrum, provided of course that it is finite and vanishes as $\hbar \to 0$, so that the modified constraint still has the same classical limit as the original one. One then defines the physical Hilbert space as the (generalised) kernel of the Master Constraint,...


Great, now we finally know how to get rid of these stupid anomalies!

Tuesday, July 04, 2006

Everything solved!

While I was away on a wonderful vacation in southern France, I nearly missed gr-qc/0606121. In the first paragraph we are reminded
The issue of the dynamics is perhaps the central problem in canonical quantization approaches to totally constrained theories like quantum general relativity. There are three salient aspects of the problem that have prevented from advancing in the quantization. The first one is how to construct a space of physical states for the theory that are annihilated by the quantum constraints and that is endowed with a proper Hilbert space structure. The second issue is related to the introduction of a correspondence principle with the classical theory, in particular to check the constraint algebra at a quantum level. The third problem is how to address the "problem of time", that is, to introduce a satisfactory picture for the dynamics of the theory in terms of observable quantities.

Then come three pages of semi-technical stuff (finite number of degrees of freedom models, Legendre transformations) and eventually
Summarizing, the method of uniform discretizations allows to tackle satisfactorily the three central problems of the dynamics of quantum general relativity and provides new avenues for studying numerically classical relativity as well.

Well done, guys! Now we can stop worrying about quantum gravity and spend all our energy cheering on Klinsi's boys!

Sorry, I didn't have anything more intelligent to say.

Monday, May 29, 2006

Higher order stuff

It has been over a month since my last post. And as I will be on vacation in a few days, which will probably take me offline for a while, I thought I should send some sort of ping before disappearing to Provence.

We had been busy finishing our paper on WMAP multipole vectors, and soon after I got busy with Wolfgang, my office mate, thinking about entanglement entropy. The latter project is not yet at a stage where it can be discussed in detail, but I think it is potentially quite interesting.

Instead, today I would like to mention a book I have been reading recently: "Higher Order Perl" by Mark Jason Dominus. This is the most interesting computer book I have read in years. It has the potential to change my thinking about programming as much as learning Perl did the first time.

If you have formal training in computer science and talk Lisp every day, this will not be too interesting to you. But if you, like me, learned programming street style, there are a few things to note.

When I have to explain why I like Perl so much, I could say it's because you can write nice, short, effective programs and it's very easy to communicate to the compiler what you want. Plus you have regular expressions and don't have to worry about memory management and garbage collection. But usually, I explain how much I liked the idea (of course, like everything else, not unique to Perl) that if you have a collection of several things, integers are natural labels in very few cases, thus an array is rarely what you really want. It's a bit like coordinates. One option is to call things by their name, which gives you a hash ($lastname{Robert} is much more natural than $lastname[1]).

The other case is that you don't care that some element is the 4711th, as long as you get all of them, either at once or one after the other (iterating over them like in a foreach $element (@list) construct). This gives you lists. If you have a list, you can take the first element and the rest, and you can add elements at the beginning and the end. No need to worry how many elements there are, as long as there are more than zero.

The "higher order" in the title of the book refers to the possiblility to have functions that return functions (or rather references to functions. So what?, I hear you think. Well, for the mathematically inclined: This allows you to go to the dual space of your data! Instead of manipulating the data, you can now manipulate the ways to access the data. And you should know that the dual space can in general be of very different size. Think of your examples in functional analysis or about distributions: There you first restrict yourself to very nice functions, the "test functions" and then look at their dual space which gives you distributions, quite powerful objects that are more general than functions.

Now back to the programming examples. Think again of a list. I mentioned that the only things you need to do with a list are to take it as a whole or to access the next element. But this is really enough! It is enough to pretend you have the list if you know a way to always get hold of the next element.

For example, with this you can come up with the universal doubling function. It takes a list and returns a list that has each element doubled. But in fact, all you do is answer the question for the next element by going to the original list, taking the next element from that, doubling it and returning it. Or you can interlace two lists or concatenate them by using the obvious strategies to return the next element of the resulting list.

This way, you can of course handle infinite lists that don't fit into your memory, like the list of all integers. This is where the power of the dual space notion really comes in. All you return is a function that computes the next element. From this you can obtain the list of even numbers by applying the doubling function, or the list of primes by discarding integers that are not prime.
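Here is how such a lazy stream might look in Perl (a minimal sketch in the spirit of the book, not code from it; all function names are my own):

#!/usr/bin/perl
use strict;
use warnings;

# a stream is a pair [ head, tail ], where the tail may be a code ref
sub node { my ($head, $tail) = @_; return [ $head, $tail ] }
sub head { $_[0][0] }
sub tail {                                        # force and memoise the tail
    my $s = $_[0];
    $s->[1] = $s->[1]->() if ref $s->[1] eq 'CODE';
    return $s->[1];
}

sub integers_from {                               # the infinite list of integers
    my $n = shift;
    return node($n, sub { integers_from($n + 1) });
}

sub stream_map {                                  # e.g. the universal doubling function
    my ($f, $s) = @_;
    return node($f->(head($s)), sub { stream_map($f, tail($s)) });
}

sub stream_filter {                               # keep elements passing a test
    my ($test, $s) = @_;
    $s = tail($s) until $test->(head($s));
    return node(head($s), sub { stream_filter($test, tail($s)) });
}

sub take {                                        # materialise the first $n elements
    my ($n, $s) = @_;
    my @out;
    while ($n-- > 0) { push @out, head($s); $s = tail($s) }
    return @out;
}

sub is_prime {
    my $n = shift;
    return 0 if $n < 2;
    for (2 .. int(sqrt $n)) { return 0 unless $n % $_ }
    return 1;
}

print join(' ', take(5, stream_map(sub { 2 * $_[0] }, integers_from(1)))), "\n";  # 2 4 6 8 10
print join(' ', take(5, stream_filter(\&is_prime, integers_from(2)))), "\n";      # 2 3 5 7 11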

Isn't that neat? Go get this book and read it!

Wednesday, April 19, 2006

Fast strings

When I explain strings to non-stringy physicists, I often start out by stating that a string is like a rubber band: it has a potential energy which is proportional to its length (a 'relativistic Hooke's law'). Then, all you have to do is covariantise this statement and you arrive at the Nambu-Goto action.

You can, of course, read this backwards: a string with potential energy E has a length proportional to E. And you can often read that this explains why hard string scattering behaves much better than hard scattering of particles: at high energies, the strings in fact expand and thus the interaction delocalises.

This, however, is only semi-true: what you have in hard scattering are strings which have been accelerated so that they have a large centre of mass energy. But the centre of mass energy is decoupled from the internal energy of the oscillators, and thus a boosted short string will still be short although it has a large (kinetic) energy. On the other hand, you can have a long string with zero kinetic energy which just happens to be very heavy. So, in general, heavy strings are long, not fast ones.

So far, this is just kinematics, but can we see this in practice? What happens to a string that runs through a linear accelerator? The set-up would look as follows: you start with a string in a low mass state which is charged under some U(1) (maybe in a KK type theory; D-branes are welcome as well). Now it feels an electric field (of some cavity, say). This electric field is a condensate of low energy (given by the normal frequency of the cavity) photons. So, you have to compute the scattering of the string with lots and lots of low energy strings in the vector field state.

Question: even if the individual photons have energy much less than 1/sqrt(alpha'), does this scattering excite any of the higher oscillator modes (which make the string grow)? A Feynman diagram would look somewhat like


e----x-----x-----x-----x- ... -x-----h
     |     |     |     |       |
     A     A     A     A       A

where e is the charged low energy state, A is the gauge field and h is a heavy state. Has this been done before?

Monday, April 10, 2006

Eurostrings

I just came back from one week of Eurostrings at DAMTP, Cambridge, which was a combination of a network meeting of the EU String Network and a celebration of Michael Green's 60th birthday. So, before anything else:

Happy Birthday, Michael!

This was a particularly nice event, and quite differently from other European meetings there was also a large number of people from the Americas attending, which I assume was due to a) the wish to celebrate Michael and b) the fact that this year's Strings '06 conference in Beijing is not too attractive for a number of people, for various reasons.

Once more, I was surprised by how many people actually read this blog and came up to me during the conference mentioning some of my past entries.

As there was no wireless network operational during the conference, and unlike at Loops '05, I did not feel a strong urge to report. Victor Rivelles already has, and Peter Woit has as well.

There were no really big surprises; just look at the titles of the talks and you get a pretty good idea of what was going on, and there are online proceedings for those who want more details. Looking through my notes reminds me of a few talks that are worth mentioning nevertheless: there were talks by Damour, West and Kleinschmidt about the relations between M-theory and hyperbolic Kac-Moody algebras. By now, it is becoming clear how this works dynamically (at least at low levels). The KMA structure even fixes numbers like the coefficient of the CS term in 11d sugra which is usually determined by supersymmetry. I would really like to see worked out in detail how this comes about; it would not be the first time there is a relation between exceptional Lie algebras and susy.

A number of people talked about the relations between spin chains, N=4 SYM and strings, and another theme discussed by several speakers was the relation between black holes and topological strings (known under the name OSV). In particular, Strominger gave a nice derivation on the blackboard of the mysterious square formula Z_BH = |Z_top|^2.

Seiberg gave two talks, both quite interesting. The first was on a paradox that arises if you apply T-duality in the Euclidean time direction to relate high and low temperature physics, and how this is related to the Hagedorn transition; the second was on his finding that N=1 theories, even if they have a non-vanishing Witten index, often have a non-susy meta-stable state at the origin of scalar field space. This is potentially very interesting phenomenologically, as it provides a mechanism for dynamical susy breaking, but unfortunately I understand too little of N=1 gauge theories to give you more information. But you can read it all in the paper.

Finally, I would like to point out a little triviality in elementary quantum mechanics which seems not to be generally appreciated. Imagine that some degrees of freedom are not accessible to you, maybe because they are behind a curtain or even the horizon of a black hole. Formally, you write your Hilbert space as a tensor product $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$. The whole system is in a state described by a density matrix $\rho$, which could well be a pure state $\rho = |\psi\rangle\langle\psi|$. As you see only part of the degrees of freedom, you observe only the partial trace $\rho_1 = \mathrm{tr}_{\mathcal{H}_2}\,\rho$, where the trace is over $\mathcal{H}_2$.

The time evolution of $\rho$ is given by the von Neumann equation $\dot\rho = -i[H,\rho]$, and this implies that the entropy $S = -\mathrm{tr}\,\rho\log\rho$ does not change with time. In particular, a pure state (with entropy 0) cannot evolve into a mixed state and vice versa.

This however is not true for the reduced state $\rho_1$. It evolves unitarily, as $\rho_1(t) = U_1(t)\,\rho_1\,U_1(t)^\dagger$, only if the total Hamiltonian splits as $H = H_1\otimes 1 + 1\otimes H_2$, that is, if the two tensor factors of the Hilbert space do not interact.
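The one-line check (spelling out the standard argument): if $H = H_1\otimes 1 + 1\otimes H_2$, then $U(t) = U_1(t)\otimes U_2(t)$ and

$\rho_1(t) = \mathrm{tr}_{\mathcal{H}_2}\bigl[(U_1\otimes U_2)\,\rho\,(U_1\otimes U_2)^\dagger\bigr] = U_1\,\bigl(\mathrm{tr}_{\mathcal{H}_2}\,\rho\bigr)\,U_1^\dagger,$

since the $U_2$'s cancel under the partial trace. A unitary conjugation leaves the spectrum of $\rho_1$, and hence its entropy, unchanged; any interaction term in H spoils the factorisation of U.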

Otherwise, for example if you throw stuff behind the curtain (or horizon), the time evolution of $\rho_1$ is more complicated and its entropy will change in time. This means that if we only observe part of the Hilbert space, a state that was pure in our part of the Hilbert space can become mixed by interactions with the other degrees of freedom.

This is of course well known to people working on decoherence but somehow not so much amongst people thinking about quantum cosmology.