Tuesday, July 29, 2008

Cloud computing

This morning I read an article in this week's Zeit, "Die angekündigte Revolution" ("The Announced Revolution"). Its author claims that in the future we will not have computational power in our homes (or with us) but will instead use centralised computing centres accessible over the net. He equates that with the revolutions that came about when a centralised electricity supply was established (instead of factories having their own generators in the basement) and when a centralised water supply made people independent of a well in the backyard.

I am not so sure.

First of all we've already had that: In the past, there were mainframes connected to many terminals. The pain that came with this set-up was only relieved when computational power appeared on everybody's desks thanks to PCs. So why go back to those old days?

In addition, what I observe is that the computational power I have local access to has been growing exponentially for as long as I can remember, and I see no end to that trend. My mobile phone now has more memory than my PC had a few years ago, and a processor that can do amazing things. The point is not that I need all that, but that it's so cheap I don't mind having it.

True enough, I only very rarely really need lots of CPU cycles. But when I do need them (computing something in Mathematica, or visiting a web page with some broken Java or JavaScript that keeps my browser busy), I usually want them right now. It's not something I plan ahead and could just as well outsource to some centralised server.

It might be a bit different with storage. Having some centralised storage that can be accessed from everywhere (and where other people worry about backups so I don't have to) could be useful, not only for backup and mobile access to all kinds of documents, but also for things like configuration files (bookmarks, say), documents and media files. All that, of course, assuming that data privacy has been taken care of. Something like this already partly exists (at least for specific applications), but as of today nothing unified, as far as I am aware.

But I cannot see people giving up local computational power. Recently, the part of the PC where performance has been growing fastest is the video card (by now a massively parallel rendering engine). That development was of course driven by the video game market. I don't see how that could be moved to a centralised computer.

As of today, I do not know anybody who uses Google Docs. It's more a proof of concept than an application for everyday use. If I really wanted to collaborate on documents I would rather use Subversion or CVS. Again, that has centralised storage, but the computation is still local.

Let me finish with two rants about related problems that I recently had: First, I use liferea as my RSS aggregator. That is a nice little program with an intuitive user interface that allows me to quickly catch up with the feeds I am interested in. Unfortunately, it keeps its state (which posts it has already downloaded and which posts I have looked at) in a stupid format: it's actually a whole directory of XML and HTML files! So to continue reading blogs on my laptop from where I left off on my PC requires scp'ing that whole directory. Not to mention there is no way to somehow 'merge' two states...
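
Just to illustrate what I actually end up doing, here is a minimal Python sketch of that wholesale copy (the state directory name and the host are placeholder assumptions, since the exact path depends on the liferea version). Note that it copies; it does not merge:

    #!/usr/bin/env python
    """Crudely pull liferea's state directory from another machine.
    This is a one-way wholesale copy via rsync; it does NOT merge two
    diverged states, it simply overwrites the local one."""
    import os
    import subprocess

    STATE_DIR = ".liferea_1.4"     # hypothetical; depends on the liferea version
    REMOTE = "me@my-desktop-pc"    # placeholder host

    def pull_state():
        src = "%s:%s/" % (REMOTE, STATE_DIR)           # relative to remote home
        dst = os.path.expanduser("~/%s/" % STATE_DIR)
        # rsync only transfers files that actually changed, which beats
        # scp -r, but the semantics are still "last writer wins"
        subprocess.check_call(["rsync", "-az", "--delete", src, dst])

    if __name__ == "__main__":
        pull_state()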

The other thing is email. You might think this is trivial, but to me it seems it is not. My problem is that, first, I get a lot of it and want my computer to do some of the processing for me. Thus I have the mail delivery program sort mail from mailing lists into appropriate folders (including a spam folder). Then, on a typical day, I want to read it with an advanced reader (which in my case is alpine, the successor of pine). The killer feature is automatically saving incoming mail to a folder matching my nickname for the author (or the author's name) and saving outgoing mail to a folder according to the recipient. Not to mention that I have about half a gigabyte of old mail distributed over 470 folders, more than one can easily deal with in one of the GUI clients like Thunderbird.
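
To give an idea of the delivery-time sorting (which I actually do with the delivery program's own rules rather than a script), here is a rough Python sketch; the Maildir layout, list addresses and folder names are made-up placeholders:

    """Rough approximation of delivery-time sorting: file mailing-list
    traffic into per-list folders and everything else into a folder named
    after the sender.  All paths and addresses below are placeholders."""
    import email.utils
    import mailbox
    import os

    MAILDIR = os.path.expanduser("~/Maildir")       # assumed Maildir layout
    LISTS = {                                       # made-up list addresses
        "strings-l@example.org": "lists.strings",
        "lattice-l@example.org": "lists.lattice",
    }

    def folder_for(msg):
        list_id = msg.get("List-Id", "")
        for addr, folder in LISTS.items():
            if addr in list_id:
                return folder
        sender = email.utils.parseaddr(msg.get("From", ""))[1]
        return "people." + sender.split("@")[0] if sender else "inbox"

    def sort_new_mail():
        inbox = mailbox.Maildir(MAILDIR, create=False)
        for key, msg in list(inbox.items()):
            folder = mailbox.Maildir(os.path.join(MAILDIR, "." + folder_for(msg)),
                                     create=True)
            folder.add(msg)
            inbox.remove(key)

    if __name__ == "__main__":
        sort_new_mail()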

That is all well and good. Once I am at my PC I just run alpine. Or, if I am at home or travelling and connecting from a somewhat decent machine (i.e. one that has an ssh client or at least allows me to install one), I ssh to my PC and read mail there (actually, I ssh to the firewall of the physics department, from there ssh to one of the computers of the theory group, and from there eventually to my PC, as it is well hidden from the outside world due to some other people's idea of computer security).
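
Spelled out, that chain is just nested ssh invocations; here is a tiny Python wrapper that builds the command (the host names are placeholders for the firewall, a theory-group machine and my desktop):

    """Open an interactive shell on the innermost machine by chaining
    ssh through the intermediate hops.  Host names are placeholders."""
    import subprocess

    HOPS = ["firewall.physics.example.de",   # department firewall
            "theory1.example.de",            # a theory-group machine
            "my-desktop"]                    # the well-hidden PC

    def nested_ssh(hops):
        # builds: ssh -t hop1 ssh -t hop2 ssh -t hop3
        # (-t forces a pseudo-terminal so the inner sessions stay interactive)
        cmd = []
        for hop in hops:
            cmd += ["ssh", "-t", hop]
        return cmd

    if __name__ == "__main__":
        subprocess.call(nested_ssh(HOPS))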

What if that other computer cannot do ssh and there is only a web browser? My current solution (and the one for my upcoming four-week holiday in the southwest of the USA) is to go to a page that offers an ssh client as a Java applet and then proceed as above. But that is a bit like the mathematician in the joke who sees the dust bin in his hotel room burning and carries the burning bin to the physicist's hotel room, thereby reducing the problem to an already solved one (the physicist had extinguished a fire in the previous joke).

Why is it so hard these days to set up decent webmail, at least for the duration of a holiday? My problem is that there are three classes of computers: Type one is connected to the internet, but I do not have sufficient privileges to install software on it. On type two I have the privileges, but it doesn't get an IP that is routed to the internet (let alone one that accepts incoming connections). Type three is a computer to which I have root access and which has a routed IP, but where the software installation is so out of date that I cannot install anything with one of the common tools (e.g. apt-get) without risking installing or upgrading other packages that would require at least a reboot. I should mention that that computer is physically located a few hundred kilometres away from me and I am the only person who could reboot it. A major update is likely to effectively make me lose that computer.

These things used to be so much easier in the past: since the days of my undergraduate studies I have always had some Linux box of my own hooked up to a university (or, back then, DESY) network, and on such a machine I could have done the job. But with the recent obsession with (perceived) security, you only get DHCP addresses (which one can deal with using dyndns etc.), and on top of that they sit behind firewalls that do not allow incoming connections. Stupid, stupid, stupid!!!

I am really thinking about renting one of those (virtual) servers from one of the hosting companies, which you can now do for little money, to solve all these problems. But that should not really be necessary!

Observing low-scale strings without landscape problems

Tom Taylor is currently visiting Munich, and a couple of days ago he put out a paper with Dieter Lüst and Stephan Stieberger which discusses (besides many detailed tables) a simple observation: assume that for some reason the string scale is so much lower than the observed 4d Planck scale that it can be reached by the LHC (a possible but admittedly unlikely scenario), and that in addition the string coupling is sufficiently small. Then, they argue, the 2-to-2 gluon amplitude is dominated by the first few Regge poles.
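
If I remember the conventions correctly (so take this as a sketch rather than a quote from the paper), the universal piece is a Veneziano-type form factor that multiplies the ordinary field-theory amplitude and depends only on the string scale M_s:

  V(s,t,u) = \frac{\Gamma(1 - s/M_s^2)\,\Gamma(1 - u/M_s^2)}{\Gamma(1 + t/M_s^2)}, \qquad s + t + u = 0 .

It goes to 1 in the field-theory limit M_s \to \infty and has poles at s = n M_s^2 for n = 1, 2, 3, \dots, and those poles are the Regge resonances in question.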

The important consequence of this observation is that the amplitudes are (up to the string scale, the only parameter) independent of the details of the compactification and of the way susy is broken: this amplitude is the same all over the landscape, in all 10^500 vacua!

Observationally, this would mean the following: at some energy there would be a resonance in gg->gg scattering (or, even better, several). The angular distribution of the scattering products is characteristic of the spins of the corresponding Regge poles (i.e. 0 for the lowest, 1 for the next, etc.), and most importantly, the decay width can be computed from the energy of the resonance (which itself measures the free parameter, the string scale).
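
For orientation, the statement about angular distributions is just standard partial-wave lore, nothing specific to strings: an s-channel resonance of mass M, width \Gamma and spin J contributes

  \frac{d\sigma}{d\cos\theta}\bigg|_{\hat s \approx M^2} \;\propto\; \frac{\big|(2J+1)\, d^{J}_{\lambda\lambda'}(\theta)\big|^2}{(\hat s - M^2)^2 + M^2 \Gamma^2} ,

where the Wigner function d^J (which reduces to the Legendre polynomial P_J(\cos\theta) for helicities \lambda = \lambda' = 0) fixes the angular shape and the Breit-Wigner denominator contains the width.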

Of course, those resonances could still be attributed to some new particles, but the spin and decay width would be very characteristic of strings. As I said, all this is with the proviso that the string scale is so low it can be reached by the LHC (or whichever accelerator is looking for these resonances) and that the coupling is small (which is not much of a constraint, since the QCD coupling is related to the string coupling and is already quite small at those scales).

Tuesday, July 08, 2008

Formulas down

The computer that serves formulas for this blog (via mimeTeX) is down. Since it's located under my old desk in Bremen I cannot just reboot it from here. Please be patient (or let me know another host for mimeTeX and a simple way to migrate all the old posts...).

Update:
Having said this, mathphys.iu-bremen.de is up again, thanks to Yingiang You!

Wednesday, July 02, 2008

Chaos: A Note To Philosophers

For reasons not too hard to guess, I was recently exposed to a number of texts (both oral and written) on the relation between science (or physics) and religion. In those, a recurring problem is a misunderstanding of the meaning of "chaos":

In the humanities, it seems, an accepted mode of argument (often used to make generalisations or find connections between different subjects) is to consider the etymology of the term used to describe the phenomenon; in the case of "chaos", Wikipedia is of help here. But at least in math (and often in physics), terms are treated more like arbitrary labels and yield no further information. Well, sometimes they do, because the people who coined the terms wanted them to suggest certain connections, but in case of doubt they don't really mean anything.

A position which I consider not much less naive than a literal interpretation of a religious text when it comes to questions of science (the world was created in six days, and that happened 6000 years ago) is to allow a deity to intervene (only) via the fundamental randomness of the quantum world. For one thing, this is quite restrictive, and most of the time this randomness just averages out for macroscopic bodies like us, making it irrelevant.

But for people with a liking for this line of argument, popular texts about chaos theory come to the rescue: there you can read that butterflies cause hurricanes and that this fact fundamentally restricts predictability even on a macroscopic scale. Room for non-deterministic interference!

Well, let me tell you, this argument is (not really surprisingly) wrong! As far as the maths goes, the nice property of the subject is that it is possible to formalise vague notions ("unpredictable") and see how far they really carry. What is meant here is that the late-time behavior is a discontinuous function of the initial conditions at t=0. That is, if you can prepare the initial conditions only up to a possible error of epsilon, you cannot predict the outcome (up to an error delta that might be given to you by somebody else) even by making epsilon as close to 0 as you want.
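
One standard way to make this precise is the notion of sensitive dependence on initial conditions (my notation): writing \Phi_t(x) for the state at time t that evolves from the initial condition x, there is a fixed \delta > 0 such that

  \forall\, \epsilon > 0 \;\; \exists\, x' \ \text{with} \ |x' - x| < \epsilon \;\; \exists\, t > 0 : \quad |\Phi_t(x') - \Phi_t(x)| \ge \delta .

No matter how small you make \epsilon, some initial condition indistinguishable from x eventually ends up at least \delta away.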

The crucial thing here is of course what is meant by "late-time behavior": for any late but finite time t (say, in ten thousand years), the dependence on initial conditions is still continuous: for any given delta you can find an epsilon such that you can predict the outcome within the margin given by delta. Of course, this epsilon will be a function of t; that is, if you want to predict further into the future you have to know the current state better. But this epsilon(t) will always (as long as the dynamics is not singular) be strictly greater than 0, allowing for some uncertainty in the initial conditions. It's only in the limit of infinite t that it might become 0, so that any error in the observation or preparation of the current state, no matter how small, leads to an outcome significantly different from the exact prediction.
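
To put a rough number on this epsilon(t): for a chaotic system whose largest Lyapunov exponent is \lambda > 0, nearby trajectories separate like e^{\lambda t}, so demanding accuracy \delta at time t requires an initial accuracy of roughly

  \epsilon(t) \sim \delta\, e^{-\lambda t} ,

which is exponentially small for large t but strictly positive for every finite t, and only tends to 0 as t \to \infty.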