Tuesday, February 17, 2009

Initial data for higher order equations of motion

Over lunch today, we had a discussion about higher order quantum corrections in the effective action. You start out with a classical action that contains only terms with up to two derivatives. This corresponds to equations of motion that are second order in time. As such, for the physical degrees of freedom (I want to ignore a possible gauge freedom here) you then have to specify the field and its time derivative on a Cauchy surface to uniquely determine the solution.

Loop corrections, however, typically lead to terms with any number of derivatives in the effective action. The corresponding equations of motion then allow for more initial data to be specified. The question is what to do with the unwanted solutions. If you want, this is the classical version of unitarity.

Rather than discussing higher derivative gravity (where our lunch discussion took off), I would like to discuss a much simpler system. Say we have a one dimensional mechanical system and the classical equation of motion is as simple as it can get, just \dot x=\alpha x. To simplify things, this is only first order in time, and I would like to view a second order term already as a "small" correction. The higher order equation is then \dot x = \alpha x + \lambda \ddot x with small \lambda.

To find solutions, one uses the ansatz x(t)=\exp(\gamma t) and finds \gamma = \frac{1\pm\sqrt{1-4\alpha\lambda}}{2\lambda}. For small \lambda, the two exponents behave as \gamma_1=\frac 1\lambda-\alpha+O(\lambda), which blows up, and \gamma_2=\alpha+O(\lambda), which approaches the solution of the "classical" equation.
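A quick numerical check of the two roots and their small-\lambda behaviour (my own sketch, with illustrative values for \alpha and \lambda):

```python
import numpy as np

# Illustrative values (not from the post): alpha = 1, small lambda
alpha, lam = 1.0, 0.01

# The ansatz x(t) = exp(gamma*t) turns  x' = alpha*x + lam*x''  into
# lam*gamma^2 - gamma + alpha = 0.
gamma_big, gamma_small = sorted(np.roots([lam, -1.0, alpha]).real, reverse=True)

print(gamma_big)    # close to 1/lam - alpha = 99: the runaway exponent
print(gamma_small)  # close to alpha = 1: the "classical" exponent
```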

The general solution is a linear combination of the two exponential functions. We see that the solution blows up on a short time scale of order \lambda (the runaway rate is \gamma_1\approx 1/\lambda) unless the initial data satisfies the classical equation \dot x(0)=\alpha x(0).

We can turn this around and say that if the classical equation is satisfied initially, we stay close to the classical solution for a long time (not exactly on it, since \gamma_2 differs from the "classical" exponent \alpha by order \lambda terms). For other initial data, the solution blows up exponentially on a short "quantum time" of order \lambda, set by the inverse of the large exponent \gamma_1\sim 1/\lambda.
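To see the role of the initial data explicitly, one can write the general solution as x(t)=A\exp(\gamma_1 t)+B\exp(\gamma_2 t) and solve for the coefficients. This sketch (mine, with illustrative values) shows that the coefficient A of the runaway mode is suppressed by an extra power of \lambda exactly when the classical constraint \dot x(0)=\alpha x(0) holds:

```python
import numpy as np

alpha, lam = 1.0, 0.1  # lambda = 0.1, as in the plot further down

# Exponents from lam*gamma^2 - gamma + alpha = 0
g1, g2 = sorted(np.roots([lam, -1.0, alpha]).real, reverse=True)

def mode_coefficients(x0, v0):
    """Coefficients A, B in x(t) = A*exp(g1*t) + B*exp(g2*t),
    fixed by x(0) = x0 and x'(0) = v0."""
    A = (v0 - g2 * x0) / (g1 - g2)
    return A, x0 - A

# Classical initial data, x'(0) = alpha*x(0):
A_cl, _ = mode_coefficients(1.0, alpha * 1.0)
# Generic initial data, x'(0) off by an order-one amount:
A_gen, _ = mode_coefficients(1.0, alpha * 1.0 + 1.0)

print(A_cl)   # ~ -0.016: runaway mode suppressed by an extra power of lambda
print(A_gen)  # ~ 0.11: this coefficient multiplies exp(g1*t) with g1 ~ 1/lambda
```

Even for the classical initial data, A is not exactly zero, so the runaway mode eventually dominates; it just takes much longer, matching the behaviour in the plot.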



This plot shows x(t) for \lambda=0.1. The axis going into the picture is a parameter for the initial conditions, which is 0 for data satisfying the classical equation initially. You can see that this parameter determines whether x goes to +\infty or -\infty over a short time. Only the classical initial data stays small for much longer.



Unfortunately, this still leaves us with the question of why nature picks the "classical" initial data and does not seem to use the other solutions. In the case of higher order gravity there is of course an anthropic argument that suggests itself, but I would rather live without it. Any suggestions?

16 comments:

tca said...

In the case of higher order gravity theories you still have to deal with the geodesic motion of a free falling test particle. Is it still a second order ode?

Unknown said...

Any suggestions?

Yes, the observed structure is projecting a thermodynamic energy conservation law that preserves the arrow of time and the second law indefinitely via the as yet unidentified mechanism that enables the universe to "leap"/bang to higher orders of the same basic configuration.

It's still the anthropic principle, only higher.

Robert said...

If dealing with a test particle rather than fields coupled to gravity, you would have to say how you incorporate it in your quantum theory (the theory that is supposed to generate the higher order corrections). You could write down a field theoretic action for gravity and add an integral over the world-line of the particle, with an action that contains the gravitational field in addition to the coordinates of the particle (e.g. via minimal coupling). I cannot see any reason why in that setting quantum corrections would be restricted to the two derivative level.

Robert said...

Sorry, island, I have no idea what you are talking about.

Unknown said...

I'm talking about this:

http://www.lns.cornell.edu/spr/2006-02/msg0073320.html

Or, in more detail...

http://dorigo.wordpress.com/2007/10/18/

Robert said...

And, ahm, what is the relevance to this post?

Unknown said...

Well, I would contend that it answers your question correctly and that there is only one correct answer to it:

Unfortunately, this still leaves us with the question of why nature chooses to pick the "classical" initial data and seems not to use the other solutions.

Can I help it if you guys are on a hundred and sumthin year tangent to nowheresville... ?;)

Sorry, my knowledge is very specific, so I sometimes don't know if I'm on topic.

The quantum gravity that falls from this is very different.

Luboš Motl said...

Dear Robert, if you want to get second-order equations, you need actions that only depend on first derivatives (not second, as the typo in your post says).

The Einstein-Hilbert action seems to depend on 2nd derivatives but up to a total derivative, it can be rewritten in terms of first derivatives.

Luboš Motl said...

Concerning your interesting question, I don't have a full answer to all cases.

But I am convinced that in every class of problems, an answer exists, and it is not necessarily universal. Most importantly, I disagree that all the additional solutions are unphysical. There would be no way for Nature to separate the solutions you like from those that you don't like. All solutions are legitimate and consistent theories don't lead to any unacceptable physics arising from the unwanted, new solution.

In your perturbed exponential function, you say that the function x "blows up". But you know that in any realistic theory, a degree of freedom that would be blowing up would induce an infinite energy, and the energy conservation would prevent you from going there.

So your particular example is sick in the sense that one is evolving into a very high, exponential "x", and the energy is not naturally conserved.

If you imagine that "x" is a tachyon rolling down the hill, higher-derivative corrections can indeed make the collapse from the maximum of the potential even more dramatic, but it will be after a long enough time when the initial Universe will already be devastated by the instability.

If you consider stable potentials, with e.g. (Nabla Nabla R)^2 corrections to gravity, these terms are suppressed by 1/Mplanck^power.

The correction is indeed small if the metric's gradient is much smaller than Planckian, and perturbative expansion is OK. It will actually always be OK, self-consistently. Your runaway example allows the small effects to dominate at some time but it is only possible because "x" itself got out of control.

Imagine that your extra term comes from lambda.x.x'' term in the Lagrangian - it's not easy to write exactly what you need. This term in the Lagrangian will become very large - and even very x-dependent - for fixed x'' but very big "x" comparable to 1/lambda.

Systems in Nature don't have this disease - that the action would allow higher-derivative terms to be inflated arbitrarily for affordable low-enough energy states.

So you can view this reasoning as evidence that your model doesn't satisfy some broader consistency criteria of physics. It will never happen with nice theories we like.

In nice theories we like, quantum corrections indeed lead to new physics. They may e.g. change the shape of the potential. All these things - and new minima - are physical. And higher-derivative terms will always have a small effect on all configurations that you can obtain by evolving reasonably-low energy configurations/states.

Luboš Motl said...

Let me say one more thing, about propagators.

A propagator in QFT is the inverse of the differential operator of the quadratic part of the action, something like 1/(p^2-m^2) in the free case.

If you add higher derivative terms, the differential operator may have extra terms, like p^2-m^2+lambda.p^4 etc. Consequently, the propagator will have new poles that are "far" for small lambda.

Now I claim that they're always either "too far" - for momenta where your effective theory is no longer valid - or they're within the validity of the theory, but then these poles are physical and must have the right sign of the residue (no ghosts!) and all such things needed for new physical states.

Most typically, the new poles, if they're at accessible momenta (by the effective QFT), will appear at unphysical values of momentum (imaginary...), I guess.
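For instance, with made-up illustrative numbers, the quartic propagator denominator p^2 - m^2 + lambda*p^4 has one pole in p^2 near m^2 and one near -1/lambda, i.e. at imaginary momentum far outside the effective theory:

```python
import numpy as np

m, lam = 1.0, 1e-4  # illustrative: lam suppressed by a heavy scale

# Poles of 1/(p^2 - m^2 + lam*p^4): solve lam*(p^2)^2 + p^2 - m^2 = 0 for p^2
p2_light, p2_heavy = sorted(np.roots([lam, 1.0, -m**2]).real, reverse=True)

print(p2_light)  # ~ m^2: the original physical pole, slightly shifted
print(p2_heavy)  # ~ -1/lam: far beyond the validity of the effective theory
```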

You would have to bring an example you are interested in. But it is simply wrong to present a physically sick model and argue that Nature has to do everything - like choose the "right" initial conditions - to make your sick model healthy.

Nature has no such obligation and it surely never chooses an ad hoc censorship of solutions that are just as consistent as others. Your model really sucks!

Robert said...

I picked that example of an equation because it was the simplest I could think of and it does not carry any notational ballast. I cannot see why more realistic examples would not share these properties, as I would expect them to be generic. I think I could come up with something more physical, just not on a Saturday evening.

Robert said...

BTW when I wrote 'two derivative action' I meant terms that contain two derivatives in total and not necessarily of one field (as you point out correctly, that could be changed by an integration by parts).

Luboš Motl said...

I think that I have explained - or proved - that stable models with a Hamiltonian bounded from below and a positive-definite Hilbert space cannot suffer from your problem.

If you don't understand the proof, couldn't you please try to spend more time with it instead of repeating your wrong statements? Thanks. The essential thing here shouldn't be called "notational ballast": it is actually called "physical consistency" and it is the most important thing in theoretical physics that is only marginalized by crackpots.

This is far from the first time I noticed your tendency to prove that inconsistent theories have to be consistent - like your unifications of string theory with LQG and similar stuff. Respectfully, can't you imagine that there is a much more general lesson here for you to learn? Some theories are really inconsistent, and the homework problem "how to look at them so that they are consistent" has no solutions.

Robert said...

Motl, will you please behave! I am happy to discuss physics with you but please stick to the facts.

I don't see a proof in any of your comments; you just claim that "this does not happen".

All I was saying is that if you have a differential equation with up to n time derivatives, you need to specify n values of initial data. And I was asking where the additional ones, besides the field and its first time derivative that we usually specify, should come from.

If you have a reason to believe that for example in the R+R^4 theory that you get from IIA when you include the 1 loop correction the solution is specified uniquely by giving g_ij and its first time derivative on a Cauchy surface (obeying constraints etc) please explain.

As I am sure you know, it is always the highest derivative term in a differential equation that determines the nature of the solution space no matter how small the coefficient.

I don't want to make any claims about generic behaviour of the "additional" solutions, like blowing up or whatever. I just wanted to point out that they deviate strongly from the original solution after a very short time. And which type you have depends only on the additional initial data x'' that has to be specified once there is a higher derivative term. (The count of derivatives here does not refer to the first order equation of the post, since you did not like that, but to an equation where the classical equation is second order in time.)

As I already said in the post, you can of course always use the unperturbed equation to determine x'' from x and x' and use that as the additional input. This I believe is equivalent to what you call self consistency.
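Concretely, for the toy equation this order reduction amounts to substituting \ddot x = \alpha\dot x = \alpha^2 x from the unperturbed equation, which gives the reduced first-order equation \dot x = (\alpha + \lambda\alpha^2)x and reproduces \gamma_2 to first order in \lambda (a sketch with illustrative values):

```python
import numpy as np

alpha, lam = 1.0, 0.1  # illustrative values

# Reduction of order: use the unperturbed x' = alpha*x to supply the extra
# datum x'' = alpha*x' = alpha^2*x, so to first order in lam the corrected
# equation is x' = (alpha + lam*alpha^2)*x.
gamma_reduced = alpha + lam * alpha**2

# Compare with the exact "classical" root gamma_2 of lam*g^2 - g + alpha = 0:
gamma_2 = min(np.roots([lam, -1.0, alpha]).real)

print(gamma_reduced, gamma_2)  # agree up to O(lam^2)
```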

But this procedure is ad hoc: Besides the "true" (that is fully quantum corrected higher derivative effective) equation you need some information about some "bare" (classical, before corrections) equation to determine the additional terms in your initial data. This knowledge of what is "bare" and what is perturbation I would consider somewhat arbitrary. But maybe there is a physical principle that determines it.

And one last time about LQG since you brought it up (but this is not the place to discuss it): If I want to form an opinion on what other people are saying rather than stick to my prejudices I actually have to listen to what they are saying and try to understand it.

That's why we translated the loop statements into a language we could understand. Then we could compare that to what string theorists do (the "gold standard" if you like). What we found is that, all language and propaganda aside, they actually make a technical assumption, namely that diffeomorphisms are not spontaneously broken, that we know is not justified: For example, the only classical diffeo invariant metric is g_ij=0, which is unphysical. Or, for the string, the vacuum state is only annihilated by the L_n with n positive. The assumption that states have to be invariant (rather than covariant) is what is wrong and what leads to all these weird non-continuous Hilbert space representations.

It's fair enough to say one is not interested in what some other people are doing and ignore it. But in order to dismiss it one has to actually listen.

Luboš Motl said...

Virtually all your statements are wrong but because you so stubbornly refuse to listen to anything, you will never learn anything new and your writing will be increasingly more confused.

For example, it is just not true that in physically consistent theories, higher-derivative corrections to the action require one to specify an increasing number of "initial conditions". Every A-student of quantum field theory courses should know why.

Plausible initial conditions in stable consistent quantum field theories can be defined in terms of particles' momenta in the initial state. This is an exact statement that holds even when all quantum corrections are exactly included.

The a priori possible particles are always in one-to-one correspondence with poles in Green's functions, by unitarity, and vice versa. All poles of such Green's functions have to correspond to physical objects.

The corresponding classical statement, obtained by considering coherent states of particles i.e. classical waves, says that the possible initial waves are linked to these poles, too.

All finite-energy solutions in a given topology sector can be expanded in these waves.

So the correct counting of degrees of freedom always reproduces the theory with the minimum number of derivatives to yield a dynamical field. What you're saying is essentially that because new terms with N derivatives - for N arbitrarily large - are generated, quantum corrections multiply the number of independent fields per point by infinity.

Everyone should be able to see that this is clearly rubbish. Quantum corrections don't change anything "so" qualitative about an effective field theory.

In consistent theories, all counterparts of your sick solutions lead either to completely legitimate new particles that are qualitatively on par with the old ones; or they lead to non-normalizable states; or they lead to states with a divergent energy; or they lead to poles located at high energies which are outside the range of validity of the effective field theory.

In different contexts, the answers are different, but Nature surely never "censors" solutions that appear on par as independent solutions of the equations of motion, just because someone would like to censor them.

Luboš Motl said...

Otherwise, it is very clear which category your R+R^4 example belongs to. The higher-derivative terms are suppressed by the string or Planck scale, so the new poles would appear at energies that exceed the string or Planck scale, and they are beyond the range of validity of the effective theories.

So if one wants to talk about the effective theories, which is where R+R^4 occurs, the R^4 corrections must always be treated as small corrections to the Einstein-Hilbert R-term. Redefining the number of degrees of freedom by looking at the highest-derivative term (which would really be the infinite-derivative term, whatever it is) is clearly wrong.

However, I can also give you another example of a theory that is not effective but holds even above its characteristic mass scale: superstring field theory. It is perturbatively equivalent to (open) superstring theory, including trans-stringy energies, but it has a complicated non-polynomial function of the "box" in the Lagrangian.

Would you also say that because of the existence of the exp(-box) or whatever it is, one has infinitely more degrees of freedom? It's clearly absurd. This exp(-box) is a pure technicality. In fact, for particular solutions, one can redefine the fields by a similar exp(-box) so that the exp(-box) disappears.

It is a nonlocal change of variables that may be needed to do so, but what's important is that the main qualitative lesson holds: the number of degrees of freedom (or particle species) in the theory is exactly like what it would be if the higher-derivative terms were absent.