Wednesday, February 15, 2006

Complex Landscape

Last week, there was a paper by Frederik Denef and Michael Douglas that studies the computational complexity of problems one is, according to them, likely to encounter when investigating the landscape. Once again, a paper by these authors that studies a well-defined mathematical problem rather than just offering vague speculation of varying degree, as is not uncommon amongst landscapers. The obvious point on which the reader has to make up his mind is, of course, the physical relevance of the abstract mathematical problem investigated.

The paper at hand contains some sort of meta-discussion: it looks at various problems one is likely to encounter once more information about string vacua is known, and points out that many of them, although solvable in finite time, are practically intractable, because the computation time is likely to be astronomical, scaling exponentially with some size parameter of the problem.

A typical example is the toy model due to Arkani-Hamed, Dimopoulos and Kachru: there you have N scalar fields f_i, and the total potential is a sum of potentials V_i(f_i), each depending on only one scalar and having two minima, with values V_i^+ and V_i^-, of either sign and both typically of Planck scale. The problem is to assign each of the scalars to one of its two minima such that the total potential energy is of order 10^-120 M_p^4, or to decide whether such an assignment is possible.
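Just to make the combinatorial character explicit, here is a minimal sketch of the brute-force decision procedure in Python. The numbers are made up for illustration, and the tolerance is far looser than 10^-120 (which no floating point type could represent anyway); the point is only that the search space has 2^N corners.

```python
import itertools
import random

# Illustrative toy data: N pairs (V_i^+, V_i^-) of Planck-scale minima
# of either sign; the values are random placeholders, not from the paper.
random.seed(0)
N = 16
minima = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

def find_small_cc(minima, target=0.0, tol=1e-3):
    """Try all 2^N ways of putting each field into one of its two minima;
    return a sign assignment whose total vacuum energy lies within tol of
    the target, or None if there is none. Runtime grows like 2^N."""
    for choice in itertools.product((0, 1), repeat=len(minima)):
        total = sum(pair[c] for pair, c in zip(minima, choice))
        if abs(total - target) < tol:
            return choice
    return None

print(find_small_cc(minima))  # each extra field doubles the worst-case time
```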

This is supposed to be a simplification of what actually goes on in string theory, and the task is to decide whether string theory allows for a universe like ours, assuming that some early cosmological dynamics picks a vacuum with minimal positive cosmological constant, at least in some part of the universe, which then expands to become our visible universe.

It turns out that this toy problem belongs to the class of NP-complete problems, which are generally believed to have worst-case instances requiring computation time that scales exponentially with N, making the problem practically intractable even for N of intermediate size.

Of course, this toy model is infinitely simpler than the full problem: the true string theory effective potential is not believed to be just a sum of individual terms; rather, the choices of vacua for the individual fields are coupled. Furthermore, for this problem to make sense, the terms in the potential have to be known to an accuracy of 10^-120, which is not only unrealistic but maybe impossible in principle, given the asymptotic nature of some of the series expansions involved.

My main objection, however, is that this type of investigation is premature until one has better information about the actual problem: the complexity class only describes worst-case behaviour, and it is quite possible that there is additional information available which renders the problem much, much easier. If, for example, I happen to know that the V_i^+ and V_i^- decrease exponentially with i, then the decision problem is a piece of cake (see the sketch below). Until you can be sure that there is no such additional information and that you really have to solve the generic problem, it does not make much sense to worry about complexity.
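For illustration, here is a sketch under my own invented toy assumption that the two minima of field i sit at +2^-i and -2^-i, so the scales halve from field to field like binary digits. Then a single greedy pass, largest field first, pins the total down to within the smallest scale of the target, in linear rather than exponential time:

```python
def greedy_vacuum(scales, target):
    """Assumes the two minima of field i are +scales[i] and -scales[i],
    with scales halving from one field to the next (scales[i] = 2**-i).
    Choosing at each step the sign that moves the running sum toward the
    target then works like a binary expansion: the final error is at most
    the smallest scale, and the whole decision takes O(N) steps."""
    total, signs = 0.0, []
    for c in scales:
        s = 1 if total <= target else -1  # move toward the target
        signs.append(s)
        total += s * c
    return signs, total

scales = [2.0 ** -i for i in range(40)]
signs, total = greedy_vacuum(scales, target=0.1)
print(abs(total - 0.1))  # at most the smallest scale, 2**-39 ~ 2e-12
```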

It is well known that for many real-world problems which are in principle NP-hard, practical instances have efficient (sometimes approximate) solutions which serve at least most purposes. For example, I am always amazed that German rail can provide very good connection information not only between train stations (and there is already a large number of trains) but even at street-level resolution, since it also includes local public transport down to buses. I have heard Italians claim that they even use this web page to find connections between places in Italy. This is a hard problem, but that web page answers in fractions of a second rather than the aeons it takes a poor salesman to try out all possible connections via Calcutta. And that is because the data is more structured and there is more information available.
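For what it's worth, the workhorse behind such route planners is presumably some relative of Dijkstra's shortest-path algorithm (plus a lot of engineering), which exploits exactly this structure; here is the textbook version on a made-up toy network, just to show that good connections need polynomial, not exponential, work:

```python
import heapq

def dijkstra(graph, start, goal):
    """Classic Dijkstra: cheapest path in a graph with non-negative edge
    weights, in O(E log V) time. Whole families of routes are discarded
    without ever being enumerated -- the opposite of brute force."""
    queue, done = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in done:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

# Made-up travel times in minutes, purely for illustration.
network = {
    "Hamburg": [("Berlin", 95), ("Hannover", 75)],
    "Hannover": [("Berlin", 100), ("Munich", 250)],
    "Berlin": [("Munich", 240)],
}
print(dijkstra(network, "Hamburg", "Munich"))  # (325, via Hannover)
```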

You could worry that these special cases with additional information are just a set of small measure in the full class of problems, so one should not hope for such additional information before one has it. Well, as I said, experience shows otherwise.

Furthermore, what would have happened if, say, the ancient Greeks had decided that there are myriads of phenomena in the natural world and that working them out likely takes exponential computational power even if one knew the governing laws, so that time is better invested in practising one's harp playing than in working out those laws? This is the whole point of the "unreasonable effectiveness of mathematics in the natural sciences": our world is structured, and there are simple laws behind everything we have understood so far. So I would not judge it completely unrealistic to hope for additional structure that makes science tractable even when it comes to the most fundamental degrees of freedom.

Monday, February 13, 2006

AdS/CFT spin chain integrability

Until tomorrow, I am at DESY for a two-day north German (including Copenhagen) string meeting. Urs is here as well; I don't know whether he is blogging it too. The first two talks, by Matthias Staudacher and Sakura Schäfer-Nameki, were on the latest developments in the relation between anomalous dimensions in the gauge theory and string energies in AdS. I do not intend to give full-scale coverage but would instead like to mention one recent result that Matthias reported on.

In brief, the story about spin chain integrability is the following: one considers the single-trace operators of the N=4 gauge theory, built from the six scalar fields and their covariant derivatives, as well as fermions and field strengths. One can compute two-point functions of these operators, and they are basically given by one number for each pair of operators. The aim is then to diagonalize the matrix of these numbers; the eigenvalues are the anomalous dimensions, to be compared with string energies in AdS_5 x S^5. In recent years, there have been a lot of papers reporting on "integrability" of this problem. So what is meant by this?

The matrix of two-point functions has some block structure that allows one to study it in restricted subsectors (for example, operators containing only two of the complex scalar fields, with a fixed number of insertions of each). There the matrices are finite-dimensional. But everybody knows that any finite symmetric matrix can be diagonalized. So what's the hype?
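To see what is actually being diagonalized, consider the simplest such subsector, with two complex scalars Z and W: by the result of Minahan and Zarembo, the one-loop mixing matrix is (in units of lambda/8 pi^2, up to conventions you should double-check) the Hamiltonian H = sum_l (1 - P_{l,l+1}) of the ferromagnetic Heisenberg spin chain, with one site per letter in the trace. For fixed operator length this is indeed a finite matrix one can simply diagonalize numerically; a small sketch:

```python
import numpy as np

L = 4  # length of the trace: number of letters (Z or W) in the operator

def swap_bits(state, i, j):
    """Exchange the letters (bits) at sites i and j of a basis state."""
    if ((state >> i) & 1) != ((state >> j) & 1):
        state ^= (1 << i) | (1 << j)
    return state

# One-loop SU(2)-sector mixing matrix as a periodic Heisenberg chain:
# H = sum_l (1 - P_{l,l+1}), in units of lambda / (8 pi^2).
dim = 2 ** L
H = np.zeros((dim, dim))
for state in range(dim):
    for l in range(L):
        H[state, state] += 1.0
        H[swap_bits(state, l, (l + 1) % L), state] -= 1.0

print(np.unique(np.round(np.linalg.eigvalsh(H), 6) + 0.0))
# -> [0. 2. 4. 6.]; the eigenvalue 6 is the one-loop Konishi
# anomalous dimension, gamma = 6 lambda/(8 pi^2) = 3 lambda/(4 pi^2)
```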

On the other hand, the experts talk about "three-loop integrability" etc., but in truly integrable models, integrability is not a perturbative statement; it usually refers to the full, non-perturbative S-matrix.

Of course, all this is a confusion of language. The point of the integrability is that there are algebraic formulas for the eigenvalues (Bethe Ansätze) which do not explicitly contain the length of the operators but work simultaneously for a large class of subsectors (for example, for all lengths of operators, or at least asymptotically). And "three-loop" refers to the fact that the matrix of two-point functions was calculated using three-loop Feynman diagrams (or equivalent methods). This matrix is then interpreted as the Hamiltonian of an auxiliary problem, typically a spin chain, and it is that Hamiltonian which is integrable.
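To give a flavour of what such a formula looks like: for two excitations in the SU(2) sector, the one-loop Bethe equations (in the conventions of the sketch above, and for cyclic states) collapse to momenta +p and -p with p = 2 pi n/(L-1) and energy E = 8 sin^2(p/2), one closed expression for every operator length L at once; for L = 4 and n = 1 this reproduces the Konishi eigenvalue 6 found by brute-force diagonalization above.

```python
from math import pi, sin

def two_magnon_energy(L, n=1):
    """One-loop, two-magnon Bethe ansatz on a cyclic chain of length L:
    momenta +p and -p with p = 2 pi n / (L - 1); the energy, in units of
    lambda / (8 pi^2), is E = 8 sin^2(p/2). One algebraic formula covers
    all lengths L -- no length-by-length diagonalization needed."""
    p = 2 * pi * n / (L - 1)
    return 8 * sin(p / 2) ** 2

for L in range(4, 9):
    print(L, round(two_magnon_energy(L), 6))  # L=4 gives 6.0 (Konishi)
```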

So far, all this was just about the integrability of this auxiliary Hamiltonian and did not refer to the physical space-time S-matrix of the original N=4 YM theory.

But now the news is that there is a connection, although still quite a weak one: in a paper on MHV amplitudes (an upshot of the twistor story), Bern et al. conjecture a form for all-loop gluon scattering amplitudes. This form contains three (not really known) functions. But one of these functions (the one called f) also appears in the Bethe ansatz business, as Matthias reported. Thus there is hope that the spin chain results yield information on space-time scattering beyond two-point functions!