Comments on atdotde: "Not quite infinite" (blog by Robert Helling)

cecil kirksey (2007-09-20 20:22):

Robert:
Interesting subject. I think I can accept the "mathematical" definition of summing divergent series, because the "sum" can be defined in a potentially consistent manner.

However, in any real-world situation, does it EVER make sense to use such a divergent series? Would it ever make sense to add (sum?) an infinite number of measurable quantities? If not, what exactly is being added in ST? Thanks.

amused (2007-09-19 14:36):

Thanks Robert. You seem to have in mind the nonperturbative lattice formulation used in computer simulations, but there is also a perturbative version which does expand in the coupling constant - see, e.g., T. Reisz, NPB 318 (1989) 417, where perturbative renormalizability of lattice QCD was proved to all orders in the loop expansion. However, it is not clear to me that this will always give the correct physical results for any choice of lattice QCD formulation. There must surely be some conditions on the formulation, in particular some minimal locality condition. That's why I was surprised by the claim that any regularization (preserving the symmetries) must lead to the same end results.

(Btw, extraction of physics results from the lattice involves perturbative calculations as well as the computer simulations.
I recall some nice posts about this on the "life on the lattice" blog at some point.)

Robert (2007-09-19 11:57):

I know there is some literature about different regularisation/renormalisation schemes giving identical results, but trying to locate some using Google Scholar was unsuccessful. I know for sure that BPHZ and Epstein-Glaser have been shown to be equivalent, and I would be surprised if the ones more often used in practical calculations (i.e. dim reg) had not been connected as well. Step zero for such a proof (which in character is mathematical and not very physics-oriented) is to define what exactly you mean by scheme X. That would have to be a prescription that works at all loop orders for all graphs, not like in QFT textbooks, where a few simple graphs are calculated (most often only one loop, so they do not encounter overlapping divergences) and then a "you proceed along the same lines for other graphs" instruction is given.

Lattice regularisation, however, is very different in spirit, as it is not perturbative (it does not expand in the coupling constant), so it is not supposed to match a perturbative calculation up to some fixed loop order. Thus it does not compare directly with Feynman graph calculations.
Only the continuum limit of the lattice theory is supposed to match an all-loop calculation that also takes into account non-perturbative effects.

In fact, the lattice version of gauge theories is probably the best definition of what one means by "the full quantum theory including non-perturbative effects", as those are not computed directly in perturbation theory and there are only indirect hints from asymptotic expansions and of course S-duality.

OTOH, starting from the lattice theory, you have to show that the continuum limit in fact has Lorentz symmetry and is causal, two properties that this regularisation destroys. Once you manage this, you are likely not too far from claiming the 1 million dollars:
http://www.claymath.org/millennium/Yang-Mills_Theory/

amused (2007-09-19 10:55):

Hi Robert, Lubos, and anyone else,
I have a question/doubt about something Lubos wrote in his post on this topic and would appreciate your views or clarifications. (Normally I would post this on the blog of the person who wrote it, but seeing as in this case it's Lubos... hope you don't mind me posting it here instead.)

LM wrote:
"The fact that different regularizations lead to the same final results is a priori non-trivial but can be mathematically demonstrated to be inevitably true by the tools of the renormalization group."

Is this really true? E.g., I don't recall any mention of this in Peskin & Schroeder's book, even though they discuss the RG in detail. To explain my doubts, consider the case of perturbative QCD: two different regularizations which preserve gauge invariance are dimensional regularisation and the lattice formulation.
In fact there are a whole lot of different possible lattice discretizations, and not all of them can be expected to produce results which agree with the physical ones obtained using dimensional regularization. E.g., there must at least be some kind of locality condition on the lattice QCD formulation that one uses, and I don't think anyone knows at present what the mildest possible locality requirement is that guarantees that the lattice formulation will produce correct results. In light of this, I don't see how it can be asserted that different regularizations (which preserve the appropriate symmetries) are always guaranteed to give the same final results.

Thomas Larsson (2007-09-19 09:16):

Why are zeta-function techniques better than simply calculating the action of the Virasoro generators on some state? It is very easy to compute [L_m, L_-m] |0>, and you can read off the central charge from this, without ever having to introduce any infinities.

What is less trivial is how to generalize this to d dimensions, where the diffeomorphism generators are labelled by vectors m = (m_0, m_1, ...) in Z^d rather than a scalar integer m in Z. In fact, I was stuck on this problem for many years (and ran out of funding in the meantime), before it was solved in a seminal paper by Rao and Moody: http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.cmp/1104254598

Lumo (2007-09-19 07:41):

Dear Robert, if you agree exactly with Joe's derivation, why do you write that this derivation is based on an "obscure analogy with minimal subtraction"?
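[Editor's note: Thomas Larsson's remark above, that computing [L_m, L_-m]|0> directly yields the central charge without ever meeting an infinity, can be illustrated for a single free boson. This is a sketch under standard conventions ([a_j, a_k] = j delta_{j+k,0}, a_0|0> = 0), not code from the thread: for m > 0, only the finitely many modes a_{-j} a_{j-m} with 0 < j < m contribute to L_{-m}|0>, and the Wick contractions give <0|[L_m, L_{-m}]|0> = (1/2) sum_{j=1}^{m-1} j(m-j), which reproduces (c/12) m(m^2-1) with c = 1.]

```python
from fractions import Fraction

def central_term(m):
    """<0|[L_m, L_{-m}]|0> for a single free boson, computed by finite
    Wick contractions: only modes a_{-j} a_{j-m}|0> with 0 < j < m
    appear in L_{-m}|0>, so no infinite sum is ever encountered."""
    return Fraction(1, 2) * sum(Fraction(j * (m - j)) for j in range(1, m))

# Compare with the Virasoro central term (c/12) m (m^2 - 1) for c = 1:
for m in range(1, 9):
    assert central_term(m) == Fraction(m * (m**2 - 1), 12)
```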
There is nothing obscure about it and, looked at properly, there is nothing obscure about minimal subtraction either. One can easily prove why it works whenever it works.

I agree that one must be careful about infinite quantities, but we seem to disagree about what it means to be careful. In my picture, it means that you must carefully include them whenever they are nonzero. In the polymer LQG string that you researched, for example, they are very careful to throw away all these important terms arising as infinities, which is wrong, and your work is an interpolation between the correct result and the wrong result, which is thus also wrong, at least partially. ;-)

I disagree that your "nonserious" comment is not serious. It is absolutely serious. Don't try to erase this comment because of it. The comment that you call "nonserious" is the standard insight - certainly taught in QFT courses at most good graduate schools - that power-law divergences are zero in dim reg. In the case of the log divergence it is still true, as long as you consistently extract the finite part by taking correct limits of the integral.

Anonymous (2007-09-19 03:42):

When physicists proceed 'formally', it's usually explicitly stated as such.
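[Editor's note: the statement a few comments up that power-law divergences vanish in dim reg is the standard scaleless-integral convention; a hedged sketch in Euclidean signature, with the usual textbook one-loop formula, not taken from the thread:]

```latex
% Scaleless integrals are defined to vanish by analytic continuation in d:
\int \frac{d^d p}{(2\pi)^d}\,(p^2)^{\alpha} = 0 \quad \text{for every } \alpha ,
% since there is no scale to carry the dimension. An integral with a scale
% keeps its logarithmic divergence as a pole:
\int \frac{d^d p}{(2\pi)^d}\,\frac{1}{(p^2+m^2)^2}
  = \frac{\Gamma\!\left(2-\tfrac{d}{2}\right)}{(4\pi)^{d/2}}\,(m^2)^{d/2-2},
% which for d = 4 - \epsilon has a 2/\epsilon pole: this surviving log
% divergence is what drives the running discussed below.
```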
There are many examples throughout history where this actually turns out to be wrong when done rigorously.

The interesting thing (for mathematicians) is when it turns out to be correct, as it usually means there's some hidden principle in there somewhere, and it can often lead to new and nontrivial mathematics (e.g. distribution theory).
-Haelfix

Robert (2007-09-18 21:55):

My final comment for tonight: For those readers who did not get this from my comments above: I completely agree with Joe's derivation of including a regularisation and imposing Weyl invariance. Do not try to convince me it is correct. It is.

My point about Sabine's calculation was that you can of course (and I believe nobody doubts this) produce nonsense if you are not careful about infinite quantities. Once you regulate, the error is obvious.

My final remark (and this is not serious, thus I will delete any comments referring to it) is that there is a shorter version of Sabine's argument which goes: "int dx/x is always zero in dimensional regularisation" (this is how I learned to actually apply dim reg from a particle phenomenologist: bring your integrals to the form finite + int dx/x and set the second term to zero).

Lumo (2007-09-18 21:43):

Let me say more physically what she actually did. In order to calculate a convergent integral in momentum space (x), she wrote it as a difference of two divergent ones. That by itself would be perfectly compatible with physics and nothing wrong could follow from it.
The error only occurs when she rescales the "x" by a factor of 1/2 or 2 in the two terms. This is equivalent to confusing what her cutoff is, by a factor of two up or down. Because her integral is logarithmically divergent, it is a standard example of a running coupling. So she has effectively added g(2*lambda) - g(lambda/2), the difference of gauge couplings at two different scales, pretending that it is zero. Of course, it is not zero: this is exactly how running couplings arise.

An experienced physicist would never make this error of using inconsistent cutoffs for different contributions in the same expression. Hers is just a physics error, if we interpret it as a physics calculation. One can't say that her calculation is analogous to correct calculations such as Joe's subtraction of the vacuum energy, even though it seems that this is precisely what you're saying.

There is a very clear a priori difference between correct and wrong calculations: correct ones have no physical errors of this or other kinds.

Lumo (2007-09-18 21:36):

Dear Robert, I disagree that one can only trust a theory if infinities never occur. A particular regularization that replaces infinities by finite numbers as intermediate results is just a mathematical trick; the actual physical result is independent of all details of the regularization, which really means that it directly follows from a correct calculation inside the theory that contains these infinities.

In other words, you only need the Lagrangian of standard QCD (one that leads to divergent Feynman diagrams) plus correct physical rules that constrain/dictate how to deal with infinities to get the right QCD predictions. You don't need any theory that is free of infinities.
Such a theory is just a psychological help if one feels uncertain.

I agree with you that one should be able to decide whether an argument is correct before the result is compared with another one. And indeed, it is possible. This is what this discussion is about. You argue that it is impossible to decide whether an argument or calculation is correct as long as it started with an infinite expression, and others are telling you that it is possible.

If you rederive the same physics in what you call the "LQG string", why do you talk about an "LQG string" as opposed to just a "string"? Can't you reformulate your argument in normal physics as opposed to one of the kinds of LQG physics?

Sabine's calculation you linked to is manifestly wrong because she doubles one of the infinities in order to subtract them and get a wrong finite part. There was no symmetry principle that would constrain the right result in her calculation. The original integral was perfectly convergent and she just added (2-1) times infinity (by rescaling the cutoff by a factor of 2 in one term), pretending that 2-1=0. I don't quite know why you think that I am prone to such arguments. ;-) Maybe Sabine is, but I am not.

She didn't make any proper analysis of counterterms, any proper analysis of any symmetries, and she didn't make any analytical continuation of anything to a convergent region either. Why do you think it's analogous to a valid calculation?

If you mentioned it because of the relationship between 1+2+3+... and 1-2+3-4+..., that derived relationship may remind you of Sabine's wrong calculation. But it is not analogous. These rescalings and alternating sums can be calculated in the zeta-function regularization, which allows one to make these arguments, adding subseries and rescaling them.

For example, the correct sum for antiperiodic fields, 1/2 + 3/2 + 5/2 + ...,
can also be calculated by taking the normal sum 1+2+3+... and subtracting a multiple of it from itself.

So if the zeta-function reg gives a Weyl-invariant value of the alternating sum, it also gives the right value of the normal sum as well as the shifted Neveu-Schwarz sum and others.

Robert (2007-09-18 21:16):

By "LQG string" I meant our version, where we (in slightly more careful mathematical language) re-derive the usual central charge (same content, different formalism), rather than the polymer version (different content, of which you know I do not approve).

Robert (2007-09-18 21:11):

All I am saying is that you should have a way (fine if done retroactively) to treat infinities without them actually occurring. And if you do that by adding an epsilon-dependent counterterm (that diverges by itself when you take epsilon to 0), that's fine with me. As long as you can physically justify it.

Otherwise you are prone to arguments like
http://backreaction.blogspot.com/2007/08/after-work-chill-out.html

And sorry, "an argument is correct if it gives the correct result" is not good enough.
I would like to have a way to decide whether an argument is valid before I know the answer from somewhere else.

Lumo (2007-09-18 21:06):

Dear Robert, concerning your comment, I understood pretty well that you wanted to define the whole theory for complex unphysical values of "s".

That's exactly why I pre-emptively wrote that it is wrong to try to define the whole theory for unphysical values of "s", just as it is wrong to define a theory in a complex dimension "d" in dim reg. Such a theory probably doesn't exist, especially not in the dim reg case.

But you don't need the full theory in 3.98+0.2i spacetime dimensions in order to prove that dim reg preserves gauge invariance, do you? In the same way, you don't need to define the operator algebra in a CFT for complex values of "s" or something like that.

I don't understand how to combine this discussion with the "LQG version of a string". The texts I wrote above were trying to help clarify how the quantities actually behave in correct physics, while LQG is a supreme example of how divergences and other things are treated physically incorrectly.

Of course the things I write are incompatible with the LQG quantization. But the reason is that the LQG quantization is wrong while e.g. Joe's arguments are correct.
Your conclusion that physics is ambiguous is not a correct conclusion.

Lumo (2007-09-18 20:53):

More generally, about your comments, Robert:

I think that it is entirely wrong to say "this argument is dodgy blah blah blah" (in the context of the vacuum energy subtraction), because the argument is transparent and rigorous when looked at properly. Both of them, in fact.

Also, I disagree with your general statement that an infinity means that we have asked a wrong question. Only IR divergences are about wrong questions. UV divergences are about a theory being effective. But even QCD, which is UV-complete, gives UV divergences - they're responsible e.g. for the running. There's no way to ask a better question about the exact QCD theory that we know and love that would remove the infinity.

QCD also falsifies your statement that "the integral over all p is unphysical". It's not unphysical. QCD is well-defined at arbitrarily high values of "p", but it still requires one to deal with and subtract the infinities properly.

Sorry to say, but the comments that physicists are always expected to say "we're dodgy, everything is unreliable, we need experiments" just mean that you don't quite understand the technology. Your comments are Woit-Lite comments. In each case, there is a completely well-defined answer to the question whether a particular symmetry constrains the terms or not, whether a given regularization preserves the symmetry or not, and consequently, whether a given regularization gives a correct result or not.
There is no ambiguity here whatsoever, and the examples listed are guaranteed to give the right results.

Robert (2007-09-18 20:47):

Lubos,

you misunderstand me. I have no doubt that in field theory calculations where, for example, you want to compute tr(log(O)) for some operator O, as this gives you the 1-loop effective action, zeta-function regularisation works as well as any other regularisation (and is often nicer, as it preserves more symmetries than more ad hoc versions).

What I am looking for is a version where you not only reinterpret n as 1/n^s for s=-1 once you encounter an obviously divergent expression, but start out with something that includes s from the beginning, such that for, say, Re(s)>1 everything is finite at all stages, and in the end you can take s->-1 analytically. Can you come up with (s-dependent) definitions of the a_n and their commutation relations, or of the L_n, such that the commutator of L_n's (which is something you calculate rather than define) gives the expression including s?

BTW, in the LQG version of the string, the correct constant appears as Tr([A_2,B_2]), where A and B are generators of diffeomorphisms and the subscript 2 refers to

A_2 = (A + JAJ)/2

where J multiplies positive modes by i and negative modes by -i. Thus it's the 'beta' part in the language of Bogoliubov transformations. Needless to say, this expression is in fact finite, even though it is a trace in an infinite-dimensional Hilbert space, as one can show that A_2 is a Hilbert-Schmidt operator (and the product of two Hilbert-Schmidt operators has a finite trace).
Of course you need an infinite-dimensional space for a commutator to have a non-vanishing trace.

Lumo (2007-09-18 20:28):

Dear Robert, I am somewhat confused by your skepticism. A comment similar to yours, by ori - I suppose it could even be Ori Ganor - appeared on my blog.

Why am I confused? Because I think that Joe's argument is, at the level of physics, a rigorous argument. Let me start with the vacuum energy subtraction.

We require Weyl invariance of the physical quantities. So the total zero-point energy must vanish. It is clearly the case, because such a result is dimensionful, and any dimensionful quantity has a scale and breaks scale invariance.

So one needs to add exactly the counterterm that makes the total vacuum energy vanish, and this counterterm thus has exactly the role of killing the 1/epsilon^2 term. Joe has a lot of detailed extra factors of length etc. in his formulae to make it really transparent how the terms depend on the length. This makes the mathematical essence of the regularization more convoluted than it is, but it should make the physical interpretation much more unambiguous.

Now the zeta function.

You ask about the "hope" that physics is analytic in complex "s". I don't know why you call it a hope. It is an easily demonstrable fact that is, as you correctly hint, analogous to the case of dim reg. Just substitute a complex "s" and calculate what the result is. You only get nice functions, so of course the result is locally holomorphic in "s".

Just like in the case of dim reg, one doesn't have to have an interpretation of complex values of "s".
The only thing we call "physics for complex s" are the actual formulae and their results, and they are clearly holomorphic.

Beisert and Tseytlin (http://scholar.google.com/scholar?q=beisert+tseytlin+zeta) have checked a highly nontrivial zeta-function regularization of some AdS/CFT spinning-string calculation up to four loops. That's where they argued that the three-loop discrepancy should be understood as an order-of-limits issue.

See also a 600+ citation paper by Hawking (http://scholar.google.com/scholar?q=hawking+zeta), who checks curved spaces in all dimensions etc. These regularizations work, and it's no coincidence.

Best
Lubos

Robert (2007-09-18 20:08):

For those readers who don't have Joe's book at hand, let me reproduce his argument: In the cut-off version, epsilon is in fact dimensionful, and a constant, n-independent term would also be the consequence of a world-sheet cosmological constant. Thus the 1/epsilon^2 term is in fact a renormalisation of the world-sheet cosmological constant. This would be in conflict with Weyl invariance, and thus one has to add a counterterm which makes it vanish.

This is what I should have written instead of calling the argument "obscure".

This leaves me still looking for a physical justification for the introduction of s in the zeta regularisation, and for the hope that physics is actually analytic in s. Maybe this could be related to dimensional regularisation on the world sheet?

Joe Polchinski (2007-09-18 19:06):
In chapter 1 of my book, eq. 1.3.34, I derive the `correct' value of this infinite sum by the requirement that one cancel the Weyl anomaly introduced by the regulator with a local counterterm; this fixes the finite value completely.

At various points later in the book (see index item `normal ordering constants'), I derive the constant by a fully finite calculation that respects the Weyl symmetry throughout.
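[Editor's note: the cutoff argument discussed throughout this thread can be checked numerically. Regulating the sum 1 + 2 + 3 + ... with a factor e^(-eps*n) yields the closed form e^(-eps)/(1 - e^(-eps))^2 = 1/eps^2 - 1/12 + O(eps^2): the divergent 1/eps^2 piece is the world-sheet cosmological-constant renormalisation to be cancelled by the counterterm, and the finite remainder is -1/12. A minimal sketch, not code from the thread:]

```python
import math

def regulated_sum(eps):
    """Sum_{n>=1} n * exp(-eps*n), truncated once terms are negligible."""
    total, n = 0.0, 1
    while True:
        term = n * math.exp(-eps * n)
        total += term
        if term < 1e-16 and eps * n > 1:
            break
        n += 1
    return total

eps = 0.01
finite_part = regulated_sum(eps) - 1 / eps**2  # subtract the divergent piece
print(finite_part)  # close to -1/12 ~ -0.08333
```

Shrinking eps makes the agreement with -1/12 better (the correction is eps^2/240), while the subtracted 1/eps^2 piece grows without bound, which is exactly the split into counterterm plus finite normal-ordering constant described above.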