Tuesday, May 24, 2016

Holographic operator ordering?

Believe it or not, at the end of this week I will speak at a workshop on algebraic and constructive quantum field theory. And (I don't know which of these two facts is more surprising) I will advocate holography.

More specifically, I will argue that holography seems to be a successful approach to formulating effective low energy theories (similar to other methods like perturbation theory of weakly coupled quasi-particles or minimal models). And I will present this as a challenge to the community at the workshop: show that the correlators computed with holographic methods indeed encode a QFT (according to your favorite set of rules, e.g. Wightman or Osterwalder-Schrader). My [kudos to an anonymous reader for pointing out a typo] guess would be that this has a non-zero chance of being a possible approach to the construction of (new) models in that sense, or alternatively of showing that the axioms are violated (which would be even more interesting for holography).

In any case, I am currently preparing my slides (I will not be able to post those as I have stolen far too many pictures from the interwebs including the holographic doctor from Star Trek Voyager) and came up with the following question:

In a QFT, the order of insertions in a correlator matters (unless we fix an ordering like time ordering). How is that represented on the bulk side?

Does anybody have any insight about this?

Thursday, April 21, 2016

The Quantum in Quantum Computing

I am sure by now all of you have seen Canada's prime minister "explain" quantum computers at Perimeter. It's really great that politicians care about these things, and he managed to recite the standard explanation for the speed up of quantum computers compared to their classical cousins: it is because you can have superpositions of initial states and therefore "perform many operations in parallel".

Except, of course, that this is bullshit. It is not the reason for the speed up; you can do the same with a classical computer, at least a probabilistic one: as step one, perform a random process (throw a coin, spin a roulette wheel, whatever) to determine the initial state you start your computer with. Looking at it from the outside, the state of the classical computer is then mixed and the further time evolution also "does all the computations in parallel". That just follows from the formalism of (classical) statistical mechanics.

Of course, that does not help much, since the outcome is likely also probabilistic. But it has the same parallelism. And just as the state space of a qubit is the full Bloch sphere, the state space of a classical bit (allowing mixed states) is an interval, allowing a continuum of intermediate states.
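
In case you want to play with this, here is a minimal numpy sketch of such a probabilistic classical computer (the number of states and the particular permutation are arbitrary choices of mine): the mixed state evolves "in parallel", but it stays maximally mixed.

```python
import numpy as np

# A toy 4-state classical "computer": its mixed state is a probability
# vector, and a deterministic, reversible computation step is a
# permutation of the states (a doubly stochastic matrix).
N = 4
p = np.full(N, 1 / N)                     # uniform mixture over initial states
step = np.eye(N)[[2, 0, 3, 1]]            # some permutation as a matrix

p_out = step @ p                          # "all computations in parallel"
entropy = -np.sum(p_out * np.log(p_out))  # still maximally mixed

print(p_out)                              # [0.25 0.25 0.25 0.25]
print(entropy, np.log(N))                 # both log(4)
```

The point of the sketch: the parallelism is there, but no such evolution can concentrate the probability on the answer.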

The difference between quantum and classical lies elsewhere. It has to do with non-commuting operators (those are essential for quantum properties), and it is those that allow for entanglement.

To be more specific, let us consider one of the most famous quantum algorithms, Grover's database lookup. There the problem (at least in its original form) is to figure out which of $N$ possible "boxes" contains the hidden coin. Classically, you cannot do better than opening one after the other (or possibly in a random pattern), which takes $O(N)$ steps (on average).

For the quantum version, you first have to say how to encode the problem. The lore is that you start with an $N$-dimensional Hilbert space with a basis $|1\rangle,\ldots,|N\rangle$. The secret is that one of these basis vectors is picked. Let's call it $|\omega\rangle$; it is given to you in terms of the projection operator $P=|\omega\rangle\langle\omega|$.

Furthermore, you have at your disposal a way to create the flat superposition $|s\rangle = \frac1{\sqrt N}\sum_{i=1}^N |i\rangle$ and a number operator $K$ that acts like $K|k\rangle= k|k\rangle$, i.e. it is diagonal in the above basis and is able to distinguish the basis elements in terms of its eigenvalues.

Then, what you are supposed to do is the following: You form two unitary operators $U_\omega = 1 - 2P$  (this multiplies $|\omega\rangle$ by -1 while being the identity on the orthogonal subspace, i.e. is a reflection on the plane orthogonal to $|\omega\rangle$) and $U_s = 2|s\rangle\langle s| - 1$ which reflects the vectors orthogonal to $|s\rangle$.

It is not hard to see that both $U_s$ and $U_\omega$ map the two dimensional plane spanned by $|s\rangle$ and $|\omega\rangle$ into itself. They are both reflections, and thus their product is a rotation by twice the angle between the two reflection planes, which is given in terms of the scalar product $\langle s|\omega\rangle =1/\sqrt{N}$ as $\phi =\sin^{-1}\langle s|\omega\rangle$.

But obviously, using a rotation by $\cos^{-1}\langle s|\omega\rangle$, one can rotate $|s\rangle$ onto $|\omega\rangle$. So all we have to do is apply the product $(U_sU_\omega)^k$ where $k$ is the ratio between these two angles, which is $O(\sqrt{N})$. (No need to worry that this is not an integer; the error is $O(1/N)$ and has no influence.) Then you have turned your initial state $|s\rangle$ into $|\omega\rangle$, and by measuring the observable $K$ above you know which box contained the coin.

Since this took only $O(\sqrt{N})$ steps this is a quadratic speed up compared to the classical case.
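For concreteness, here is the whole iteration in a few lines of numpy (a sketch with a hypothetical $N=64$ and a secret box of my choosing, done with explicit matrices rather than quantum gates):

```python
import numpy as np

# Grover's iteration as plain linear algebra, for N boxes.
N = 64
omega = 17                                 # the secret box (unknown to the algorithm)
s = np.full(N, 1 / np.sqrt(N))             # flat superposition |s>

U_omega = np.eye(N)
U_omega[omega, omega] = -1                 # 1 - 2|omega><omega|
U_s = 2 * np.outer(s, s) - np.eye(N)       # 2|s><s| - 1

k = int(round(np.pi / 4 * np.sqrt(N)))     # O(sqrt(N)) iterations
psi = s.copy()
for _ in range(k):
    psi = U_s @ (U_omega @ psi)            # one Grover rotation by ~2/sqrt(N)

# Measuring K now yields the secret box with probability ~1
print(k, np.argmax(psi**2), round(psi[omega]**2, 4))   # 6 17 0.9966
```

The probability of success after $k$ steps is $\sin^2((2k+1)\phi)$, so with $k\approx\frac\pi4\sqrt N$ steps it is essentially 1.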

So how did we get this? As I said, it's not the superposition. Classically, we could prepare the probabilistic state that opens each box with probability $1/N$. But we have to expect to repeat that $O(N)$ times, so this is essentially as fast as systematically opening one box after the other.

To have a better unified classical-quantum language, let us say that we have a state space spanned by $N$ pure states $1,\ldots,N$. What we can do in the quantum case is turn an initial state which had probability $1/N$ to be in each of these pure states into one that is deterministically in the sought-after state.

Classically, this is impossible, since no time evolution can turn a mixed state into a pure state. One way to see this is that the entropy of the probabilistic state is $\log(N)$ while it is 0 for the sought-after state. If you like, classically we only have the observables given by the C*-algebra generated by $K$, i.e. we can only observe which box we are dealing with. Both $P$ and $U_\omega$ are also in this classical algebra (they are diagonal in the special basis), and the strict classical analogue would be that we are given a rank one projector in that algebra and have to figure out which one.

But quantum mechanically, we have more, we also have $U_s$ which does not commute with $K$ and is thus not in the classical algebra. The trick really is that in this bigger quantum algebra generated by both $K$ and $U_s$, we can form a pure state that becomes the probabilistic state when restricted to the classical algebra. And as a pure state, we can come up with a time evolution that turns it into the pure state $|\omega\rangle$.

So, this is really where the non-commutativity, and thus the quantumness, comes in. And we shouldn't really expect Trudeau to be able to explain this in a two-sentence statement.

PS: The actual speed up in the end comes of course from the fact that probabilities are amplitudes squared and the normalization in $|s\rangle$ is $1/\sqrt{N}$ which makes the angle to be rotated by proportional to $1/\sqrt{N}$.

One more resuscitation

This blog has been silent for almost two years for a number of reasons. First, I myself stopped reading blogs on a daily basis, as in: open Google Reader right after the arXiv and check what's new. I had already stopped doing that, due to time constraints, before Reader was shut down by Google, and I must say I don't miss anything. My focus shifted much more to Twitter and Facebook, and from there I am directed to the occasional blog post, but as I said, I don't check blogs systematically anymore. And I assume others do the same.

But from time to time I run into things that I would like to discuss on a blog. Where (as my old readers probably know) I am mainly interested in discussions. I don't write here to educate (others) but only myself. I write about something I found interesting and would like to have further input on.

Plus, this should be more permanent than a Facebook post (which is gone once it scrolls off the bottom of the screen) and allow for more than the occasional 140 character remark on Twitter.

Assuming that others have adapted their reading habits in a similar way to the year 2016, I have set up If This Then That to announce new posts to FB and Twitter so others might have a chance to find them.

Friday, May 23, 2014

Conference Fees for Speakers

Listening to a podcast on open access, I had an idea: Many conferences waive conference fees (which can be substantial) for invited speakers. But those are often enough the most senior people, who would have the least difficulty paying the fee from their budget or grant money. So wouldn't it be a good idea for conferences to offer their invited speakers to instead waive the fee for a graduate student or junior post-doc of the speaker's choice, with the speaker paying the fee from their grant (or to reduce the fee by 50% for both)?

Discuss!

Wednesday, January 29, 2014

Questions to the inter webs: classical 't-Hooft-limit and path integral entanglement

Hey blog, long time no see!

I am coming back to you with a new format: Questions. Let me start with two questions I have been thinking about recently but that I don't know a good answer to.

't Hooft limit of classical field equations

The 't Hooft limit leads to important simplifications in perturbative QFT and has been used for many discoveries around AdS/CFT, N=4 super YM, amplitudes etc. You can take it in its original form for SU(N) gauge theory, where its inventor realized that you can treat N as a parameter of the theory, and when you do perturbation theory you can do so in terms of ribbon Feynman diagrams. Then a standard analysis in terms of Euler's polyhedron theorem (a discrete version of the Gauss-Bonnet theorem) shows that genus g diagrams are suppressed by a factor $1/N^{2g}$, such that at leading order for large N only the planar diagrams survive.

The argument generalizes to all kinds of theories with matrix valued fields where the action can be written as a single trace. In a similar vein, it also has a version for non-commutative theories on the Moyal plane.

My question is now whether there is a classical analogue of this simplification. Take the classical equations of motion for SU(N) YM or any of the other theories, maybe something as simple as $\frac{d^2}{dt^2} M = M^3$ for $N\times N$ matrices $M$. Can we say anything about simplifications in the large $N$ limit? Of course you can use tree level Feynman diagrams to solve those equations perturbatively (as for example I described here), but is there a non-perturbative version of "planar"? Can I say anything about the structure of solutions to these equations that is approached for $N\to\infty$?
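Not an answer, but here is a hedged numerical experiment one could run (all choices here, like the GUE-type initial data, the rest initial condition, and the time span, are my own assumptions): integrate the toy matrix equation for several $N$ and watch whether normalized single-trace invariants like $\mathrm{tr}\,M^2/N$ settle down as $N$ grows.

```python
import numpy as np

def evolve(M0, t=0.3, steps=3000):
    """Integrate d^2M/dt^2 = M^3 from rest with a leapfrog-style scheme."""
    M, V = M0.copy(), np.zeros_like(M0)
    dt = t / steps
    for _ in range(steps):
        V += dt * (M @ M @ M)
        M += dt * V
    return M

rng = np.random.default_rng(0)
for N in (8, 32, 128):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    M0 = (A + A.conj().T) / (2 * np.sqrt(N))   # GUE-like, spectrum O(1)
    M = evolve(M0)
    print(N, np.trace(M @ M).real / N)         # does this converge as N grows?
```

(For this particular equation with initial data at rest, everything commutes with $M_0$, so the eigenvalues just evolve independently; the non-trivial large $N$ question only starts once several non-commuting matrices are coupled.)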

Path Integral Entanglement

Entanglement is the distinguishing feature of quantum theory as compared to classical physics. It is closely tied to the non-commutativity of the observable algebra and is responsible for things like the violation of Bell's inequality.

On the other hand, we know that the path integral gives us an equivalent description of quantum physics, surprisingly in terms of configurations/paths of the classical variables (over which we then have to take a weighted integral), which are intrinsically commuting objects.

Properties of non-commuting operators can appear in subtle ways, like the operator ordering ambiguity: how should one quantize the classical observable $x^2p^2$, as $xp^2x$, as $px^2p$, or for example as $(x^2p^2 + p^2x^2)/2$? This is a true quantization ambiguity, and the path integral has to know about it as well. It turns out it does: when you show the equivalence of Schrödinger's equation and the path integral, you do so by considering infinitesimal paths, and you have to evaluate potentials etc. at some point of those paths to compute things like $V(x)$ in the action. It turns out the operator ambiguity is equivalent to choosing where to evaluate $V(x)$: at the start of the path, the end, the middle, or somewhere else.
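The ordering ambiguity can be made concrete with truncated harmonic oscillator matrices (a sketch in units with $\hbar=1$; the truncation dimension and the oscillator basis are my choices, and one has to stay away from the truncation edge). Numerically one finds the amusing fact that $xp^2x$ and $px^2p$ are actually the same operator, while the symmetric ordering differs from them by a multiple of the identity, i.e. by a term of order $\hbar^2$:

```python
import numpy as np

# Truncated oscillator matrices for x and p (hbar = 1).
D = 60
a = np.diag(np.sqrt(np.arange(1, D)), 1)       # annihilation operator
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

A = x @ p @ p @ x                              # x p^2 x
B = p @ x @ x @ p                              # p x^2 p
C = (x @ x @ p @ p + p @ p @ x @ x) / 2        # symmetric ordering

cut = slice(0, 20)                             # stay away from the truncation edge
print(np.max(np.abs((A - B)[cut, cut])))              # ~0: xp^2x = px^2p
print(np.max(np.abs((A - C - np.eye(D))[cut, cut])))  # ~0: they differ by hbar^2 * 1
```

So the different orderings all agree up to $O(\hbar^2)$ terms proportional to the identity here, which is exactly the kind of ambiguity the choice of evaluation point in the discretized path integral absorbs.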

So far so good. The question that I don't know the answer to is how the path integral encodes entanglement. For example, can you discuss a version of Bell's inequality (or something similar like GHZ) in the path integral language? Of course, you would have to translate the spin operators to positions.

Tuesday, November 06, 2012

A Few Comments On Firewalls

I was stupid enough to agree to talk about firewalls in our strings lunch seminar this Wednesday without having read the paper (or what other people say about it), except for talking to Raphael Bousso at the Strings 2012 conference and reading Joe Polchinski's guest post over at the Cosmic Variance blog.

Now, of course, I had to read (some of) the papers and I have to say that I am confused. I admit I did not get the point. Even more, I cannot follow a large part of the discussion. There is a lot of prose and very few formulas, and I have failed to translate the prose into formulas or hard facts for myself. Many of the statements, taken at face value, do not make sense to me, but on the other hand I know the authors to be extremely clever people, so the problem is most likely on my end.

In this post, I would like to share some of my thoughts from my endeavor to decode these papers, though probably they are even more confusing to you than the original papers are to me. But maybe you can spot my mistakes and correct me in the comment section.

I had a long discussion with Cristiano Germani on these matters, for which I am extremely grateful. If this post contains any insight it is his, while all errors are of course mine.

What is the problem?

I have a very hard time not believing in "no drama", i.e. in the statement that nothing special happens at an event horizon. First of all, the event horizon is a global concept, and its location now in general depends on what happens in the future (e.g. how much further stuff is thrown into the black hole). So how can it be that the location of anything like a firewall depends on future events?

Furthermore, I have never seen such a firewall so far. But I might already have passed an event horizon (who knows what happens at cosmological scales?). Even more, I cannot see a local difference between a true event horizon like that of a black hole and the horizon of an accelerated observer in the case of the Unruh effect. The latter I am pretty sure I have crossed many times already, and I have never seen a firewall.

So I was trying to understand why there should be one. And whenever I tried to flesh out the argument for one the way I understood it, it fell apart. So here are some of my thoughts:

The classical situation

No question, Hawking radiation is a quantum effect (even though it happens at tree level in QFT on curved space-time and is usually derived in a free theory or, equivalently, by studying the propagator). But apart from that, not much of the discussion (besides possibly the monogamy of entanglement, see below) seems to be particularly quantum. Thus we might gain some mileage by studying classical field theory on the space-time of a forming and decaying black hole, as given by the causal diagram:
A decaying black hole, image stolen from Sabine Hossenfelder.

Issues of causality are determined by the characteristics of the PDE in question (take for example the wave equation), and those are invariant under conformal transformations even if the field equation is not. So it is enough to consider the free wave equation on the causal diagram (rather than the space-time related to it by a conformal transformation).

For example, we can give initial data on I- (and have good boundary conditions at the r=0 vertical lines). At the dashed horizontal line, the location of the singularity, we just stop evolving (free boundary conditions), and then we can read off the outgoing radiation at I+. The only problematic point is the right end of the singularity: This is the end of the black hole evaporation, and it is not clear to me how we can start imposing boundary conditions at the new r=0 line there without affecting what we did earlier. But anyway, this is in a region of strong curvature, where quantum gravity becomes essential, so what we conclude had better not depend too much on what is going on there, as we don't have a good understanding of that regime.

The firewall paper, when it explains the assumptions of complementarity, mentions an S-matrix by which it tries to formalize the notion of unitary time evolution. But it seems to me this might be the wrong formalization, as the S-matrix only knows about asymptotic states and fails even in much simpler situations, when there are bound states and the asymptotic Hilbert spaces are not complete. Furthermore, strictly speaking, this (in the sense of LSZ reduction) is not what we can observe: our detectors are never at spatial infinity, even if CMS is huge, so we should better come up with a more local concept.
Two regions M and N on a Cauchy surface C with their causal shadows

In the case of the wave equation, this can be encoded in terms of domains of dependence: by giving initial data on a region of a Cauchy surface, I determine the solution on its causal shadow (in the full quantum theory maybe plus/minus an epsilon for quantum uncertainties). In more detail: if I have two sets of initial data on one Cauchy surface that agree on a local region, then the two solutions have to agree on the causal shadow of this region, no matter what the initial data look like elsewhere. This encodes, in a local fashion, that "my time evolution is good and I do not lose information on the way".
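This locality statement is easy to check numerically, say for the 1+1 dimensional wave equation (a sketch; the grid parameters and initial data are my choices, and I run the leapfrog scheme at CFL number 1 so that the numerical cone coincides with the light cone):

```python
import numpy as np

# 1d wave equation u_tt = u_xx on a periodic grid. Two sets of initial
# data that agree only on |x| < 1 yield solutions that still agree on
# the causal shadow |x| < 1 - T (wave speed 1).
n = 1024
x = np.linspace(-4.0, 4.0, n, endpoint=False)
dx = x[1] - x[0]
dt = dx                                   # CFL = 1: numerical cone = light cone
T = 0.5
steps = int(round(T / dt))

def evolve(u0):
    u_prev, u = u0.copy(), u0.copy()      # initial data at rest
    for _ in range(steps):
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
        u_prev, u = u, 2 * u - u_prev + (dt / dx)**2 * lap
    return u

bump = np.exp(-x**2)
u1 = evolve(bump)
u2 = evolve(bump + np.where(np.abs(x) > 1, np.sin(5 * x), 0.0))

shadow = np.abs(x) < 1 - steps * dx       # the causal shadow of |x| < 1
print(np.max(np.abs((u1 - u2)[shadow])))  # prints 0.0: no influence yet
```

The two solutions differ wildly outside the shadow, but inside it they agree identically, which is exactly the local statement of "no information loss" above.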

States

Some of my confusion comes from talking about states in a way that, at least when taken at face value, is in conflict with how we understand states both in classical physics and in better understood quantum circumstances (both quantum mechanics and quantum field theory).

First of all (and quite trivially), a state is always at one instant of time, that is, it lives on a Cauchy surface (or at least a space-like hypersurface, as our space-time might not be globally hyperbolic), not in a region of space-time. Hilbert space, as the space of (pure) states, thus also lives on a Cauchy surface (and not, for example, in the region behind the horizon). If one event is after another (i.e. in its forward light-cone), it does not make sense to say they belong to different tensor factors of the Hilbert space (or to different Hilbert spaces, for that matter).

Furthermore, a state is always a global concept, it is everywhere (in space, but not in time!). There is nothing like "the state of this observer". What you can do, of course, is restrict a state to a subset of observables (possibly those that are accessible to one observer) by tracing out a tensor factor of the Hilbert space. But in general, the total state cannot be obtained by merging all these restricted states, as those lack the information about correlations and possible entanglement.
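A small numpy illustration of this point (the two states are my choice): two globally different states, one entangled and one a plain product of mixed states, restrict to identical states on each subsystem, so the restrictions alone cannot know about the correlations.

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Maximally entangled two-qubit state (|up,up> + |dn,dn>)/sqrt(2)
bell = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Uncorrelated product of two maximally mixed qubits
rho_prod = np.kron(np.eye(2) / 2, np.eye(2) / 2)

def ptrace_B(rho):
    """Restrict to the first qubit by tracing out the second."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(ptrace_B(rho_bell))                # maximally mixed: I/2
print(ptrace_B(rho_prod))                # also I/2
print(np.allclose(rho_bell, rho_prod))   # False: the global states differ
```

Merging the two identical restrictions could never tell you which of the two global states you started from.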

This brings me to the next confusion: There is nothing wrong with states containing correlations between space-like separated observables. This is not even a distinguishing property of quantum physics, as it happens all the time even in classical situations: In the morning, I pick a pair of socks from my drawer without turning on the light and put them on my feet. Thus I do not know which socks I am wearing; in particular, I don't know their color. But as I combined matching socks when they came out of the washing machine (as far as this is possible, given the tendency of socks to go missing), I know by looking at the sock on my right foot what the color of the sock on my left foot is, even when my two feet are spatially separated. Before looking, the state of the color of the socks was a statistical mixture, but one with non-local correlations. And of course there is nothing quantum about my socks (even if in German "Quanten" is not only "quantum" but also a pejorative word for feet). This would even be true (and still completely trivial) if I had put one of my feet through an event horizon while the other one is still outside. This example shows that locality is not a property that I should demand of states in order to be sure my theory is free of time travel. The important locality property is not in the states, it is in the observables: The measurement of an observable here must not depend on whether or not I apply an operator at a space-like distance. Otherwise I could send signals faster than the speed of light. But it is the operators, not the states, that have to be local (i.e. commute at spatial separation).

If two operators, however, are time-like separated (i.e. one is in the other's forward light cone), I can of course influence one's measurement by applying the other. But this is not about correlations, this is about influence. In particular, if I write something in my notebook and then throw it across the horizon of a black hole, there is no point in saying that there is a correlation (or even entanglement) between the notebook's state now and after crossing the horizon. It's just the former influencing the latter.

Which brings us to entanglement. This must not be confused with correlation: the former is a strictly quantum property whereas the latter can be either quantum or classical. Unfortunately, you can often see this confusion in popular talks about quantum information, where many speakers claim to explain entanglement but in fact only explain correlations. As a hint: for entanglement, one must discuss non-commuting observables (like different components of the same spin), as otherwise (by the GNS reconstruction theorem) one deals with a commutative operator algebra, which always has a classical interpretation (functions on a classical space). And of course, it is entanglement which violates Bell's inequality or shows up in the GHZ experiment. But you need something of this complexity (i.e. involving non-commuting observables) to make use of the quantumness of the situation. And it is only entanglement (and not correlation) that is "monogamous": you cannot have three systems that are fully entangled for all pairs. You can have three spins that are entangled, but once you look at only two, they are no longer entangled (which makes quantum cryptography work, as the eavesdropper cannot clone the entanglement that is used for coding).
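For what it's worth, the distinction can be checked numerically: the CHSH combination for a "socks-like" perfectly correlated classical mixture stays within the classical bound 2, while a maximally entangled state reaches $2\sqrt2$ (a numpy sketch; the measurement angles are the standard CHSH choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def meas(theta):
    """Spin measurement along angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def chsh(rho):
    """CHSH value <A1 B1> + <A1 B2> + <A2 B1> - <A2 B2>."""
    A1, A2 = meas(0), meas(np.pi / 2)
    B1, B2 = meas(np.pi / 4), meas(-np.pi / 4)
    E = lambda A, B: np.trace(rho @ np.kron(A, B)).real
    return E(A1, B1) + E(A1, B2) + E(A2, B1) - E(A2, B2)

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# "Sock" state: classical mixture of up-up and down-down, perfectly correlated
socks = 0.5 * (np.outer(np.kron(up, up), np.kron(up, up))
             + np.outer(np.kron(dn, dn), np.kron(dn, dn)))

# Maximally entangled state (|up,up> + |dn,dn>)/sqrt(2)
bell = np.kron(up, up) + np.kron(dn, dn)
rho_bell = np.outer(bell, bell) / 2

print(chsh(socks))       # sqrt(2) ~ 1.414: within the classical bound 2
print(chsh(rho_bell))    # 2*sqrt(2) ~ 2.828: violates the bound
```

The sock state has the same perfect up-up/down-down correlation in the $z$ basis as the Bell state; the violation only appears because the entangled state also correlates the non-commuting $x$ measurements.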

And once more, entanglement is a property of a state with respect to a tensor product decomposition of the Hilbert space, and thus lives on a Cauchy surface. You can say that a state contains entanglement between two regions on a Cauchy surface, but it makes no sense to say that two regions that are time-like to each other are entangled (like the notebook before and after crossing the horizon). And therefore monogamy cannot be invoked by also taking the outgoing radiation as the third player.

Monday, September 24, 2012

The future of blogging (for me) and in particular twitter

As you might have noticed, breaks between two posts here get bigger and bigger. This is mainly due to lack of ideas on my side but also as I am busy with other things (now that with Ella H. kid number two has joined the family but there is also a lot of TMP admin stuff to do).

This is not only true for me writing blog posts but also for reading: Until about a year ago, I was using Google Reader so as not to miss a single post from a list of about 50 blogs. I have completely stopped this and now read blogs only very occasionally (that is, other than being directed to a specific post by a link from somewhere else).

What I still do (and more than ever) is use Facebook (mainly to stay in contact with not-so-computer-affine friends) and of course Twitter (you will know that I am @atdotde there). Twitter seems to be the ideal way to stay current on a lot of matters you are interested in (internet politics, for example) while not wasting too much time, given the 140 character limit.

Twitter's only problem is that they don't make (a lot of) money. This was no problem for the original inventors of the site (they have sold their shares to investors), but the current owners now seem desperate to change this. From what they say, they want to move Twitter more towards a many-to-one (marketing) communication platform and force users to see ads mixed in among the genuine tweets.

One of the key aspects of the success of Twitter was its open API (application programming interface): everybody could write programs that interacted with Twitter (and for example I did), so everybody could choose their favourite client program on any OS to read and write tweets. Since the recent Twitter API policy changes this is no longer the case: a client can now have only 100,000 users (or, if they already have more, can at most double their number of users), a small number given the allegedly hundreds of millions of Twitter accounts. And there are severe restrictions on how you may display tweets to your users (e.g. you are not allowed to use them in any kind of cloud service or mix them with other social media sites, i.e. blend them with Facebook updates). The message that this sends is clearly "developers go away" (the idea seems to be to force users to use the Twitter website and Twitter's own clients), and anybody who still invests in Twitter development is betting on a dead horse. But it is not hard to guess that in the long run this will also make the whole service unattractive to a lot of (if not eventually all) their users.

People (often addicted to their Twitter feeds) are currently evaluating alternatives (like app.net), but this morning I realized that maybe the Twitter managers are not as stupid as they seem (or maybe they just want to cash in on what they have and don't care if this ruins the service): there is still an alternative that would make Twitter profitable and would secure the service in the long run. They could offer developers the old API guidelines for a fee (say a few $/Euros per user per month). This would bring in the cash they are apparently looking for while still keeping the healthy ecosystem of many clients and other programs. twitter.com would only be dealing with developers, while those would forward the costs to their users and recollect the money by selling their apps (so Twitter would not have to collect money from millions of users).

But maybe that's too optimistic and they just want to earn advertising money NOW.