Saturday, November 26, 2016

Breaking News: Margarine even more toxic!

One of the most popular posts of this blog (judging by the resonance it got) was the one on Scaling the Price of Margarine. Today, while doing the family weekend shopping, I noticed I have to update the calculation:

At our local Rewe branch, they offer a pound of Lätta for 88 cents while asking 1.19 Euro for half a pound. With the ansatz from the old post, this means the price of the actual margarine is now -9.78 Euro/kg. By coincidence, this is also approximately the price you have to pay to get rid of waste oil.

Sunday, November 13, 2016

Theoretical diver

Besides physics, another hobby of mine is scuba diving. For many reasons, unfortunately, I don't have much time anymore to get in the water. As partial compensation, some time ago I started to contribute to Subsurface, the open source dive log program. Partly related to that, I also like to theorize about diving. To give that a home, I have now started another blog, The Theoretical Diver, to discuss aspects of diving that I have been thinking about.

OpenAccess: Letter to the editor of Süddeutsche Zeitung

In yesterday's Süddeutsche Zeitung, there is an opinion piece by historian Norbert Frei on the German government's OpenAccess initiative, which prompted me to write a letter to the editor (naturally in German; here it is in English translation):

Regarding the opinion piece "Goldener Zugang" by Norbert Frei in the SZ of November 12/13, 2016:

In his piece, Mr. Frei worries that, under the heading of OpenAccess, a culture change is to be forced on scholarship from the outside. He fears that the natural scientists, together with the politicians, will force him to no longer present his findings in paper books, but instead to deposit everything, chopped into small article-sized morsels, in expensive digital archives, where they await bit rot, since in no time at all the progress of hardware and software will render the data formats unreadable.

As a counter-model he cites the Gutenberg Bible, of which a majority of the copies have survived the centuries. Now, I do not know when Mr. Frei last leafed through his Gutenberg Bible; I have stood in front of one only once in my life: it lay in a display case in the Cambridge library, opened to one page, and no other page was accessible. Thanks to practical OpenAccess, however, it is not only possible for good Christians to keep a copy at home. Even more, the academic theologians among my acquaintances work, as a matter of course, with a digital version on their laptop or smartphone, since, thanks to full-text search, indexing and cross-references to other works, it is much more accessible for research.

Never mind that the OpenAccess initiative is supposed to contain an exception for monographs. Nobody wants to ban the writing of books. The point is simply that whoever wants to receive third-party funding from the public purse should not hold out their hand a second time when the public, above all the scientific public, wants to learn about the results. Professors at German universities do not write their scholarly publications for their private amusement; it is part of their official duties. Why, then, do they want to sell the fruits of their already remunerated work to the public libraries a second time?

I can still remember my pride when I first saw my name printed on paper, adorning the title page of my first publication. Beyond that, what matters most to me as a scientist is that what I find out is noticed and taken further by others. And I achieve that best when there are as few hurdles as possible in the way of doing so.

I myself am a theoretical high-energy physicist, and of course the cultures of different fields differ widely. In my field it has been customary since the early nineties to deposit all of one's publications, from a one-page comment on another paper to a review of many hundreds of pages, in arXiv.org, a non-commercial preprint archive, where from the next morning on they can be found and read in full by all colleagues in the field; even many excellent textbooks are available there by now. This global reach, together with easy access (for several years now I have not had to look up a single paper article in a journal volume in our library; I find everything on my computer), has so many advantages that one gladly forgoes possible royalties, especially since for journal articles these never existed anyway and, with few exceptions, are vanishingly small compared to a W3 salary; calculated as an hourly wage, they would make any supermarket shelf stacker walk off the job on the spot. We natural scientists are well on our way to emancipating ourselves from parasitic publishers, which traditionally managed to squeeze billions in annual revenue out of the libraries for our work, while delegating the writing of the articles, the refereeing, the typesetting and the selection, unpaid, to publicly funded scientists, charging solely for their gatekeeper function.

And since I am interested in readers, I will also post this letter on my blog.

Thursday, October 27, 2016

Daylight saving time about to end (end it shouldn't)

Twice a year, around the last Sunday in March and the last Sunday in October, everybody (in particular newspaper journalists) takes a few minutes off to rant about daylight saving time. So, for the first time, I want to join this tradition in writing.

Until I had kids, I could not have cared less about changing the time twice a year. But at least for our kids (and secondarily also for myself), I realize the biorhythm is quite strong and it takes more than a week to adapt to the one-hour jet lag (in particular in spring, when it means getting out of bed "one hour earlier"). I still don't really care about cows that have to deliver their milk at different times, since there is no intrinsic reason the clock on the wall has to show a particular time when that is done, and if it were really a problem, the farmers could milk at fixed UTC.

So, obviously, it is a nuisance. What, then, are the benefits that justify it? Well, obviously, in summer the sun sets at a later hour and we get more sun when we are outside. That sounds reasonable. But why restrict it to the summer?

Which brings me to my point: If you ask me, I want to get rid of changing the offset to UTC twice a year and permanently adopt daylight saving time.

But now I hear people cry that this is "unnatural", that we have to keep the traditional time at least in winter, when it does not matter anyway because it is too cold to be outside (which, as we know, only holds for people with defective clothing). So how natural is CET (the time zone we set our clocks to in winter)? Let's take people living in Munich as an example.

First of all: it is not solar time! CET is the mean solar time for a longitude of 15 degrees east, which (at Munich's latitude) is close to Neumarkt an der Ybbs in Austria, not too far from Vienna. Munich, at roughly 11.6 degrees east, is about 14 minutes behind. So this time is artificial as well, and with Berlin being closer to 15 degrees east, it is probably Prussian.
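The offset is just arithmetic: the earth turns 360 degrees in 24 hours, i.e. four minutes of time per degree of longitude. A quick sketch (the longitude values are my rough figures):

```python
# Mean solar time offset relative to a zone meridian:
# the earth rotates 360 degrees in 24 hours, i.e. 4 minutes per degree.

def solar_offset_minutes(longitude_deg, zone_meridian_deg=15.0):
    """Minutes by which local mean solar time leads (positive) or lags
    (negative) the zone time; CET is defined by the 15 degrees east meridian."""
    return (longitude_deg - zone_meridian_deg) * 4.0

munich = solar_offset_minutes(11.6)   # Munich at roughly 11.6 degrees east
berlin = solar_offset_minutes(13.4)   # Berlin at roughly 13.4 degrees east
```

So mean solar noon in Munich comes about 13 to 14 minutes after 12:00 CET, while Berlin is only about 6 minutes behind.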

Also, a common time zone for Germany was established only in the 1870s, when the advent of railways and telegraphs made synchronization between different local times advantageous. So this "natural" time is not that old either.

It is so new that Christ Church in Oxford still refuses to fully bow to it: their clock tower shows Greenwich time, and the cathedral services start according to solar time (about five minutes later) because they don't care about modern shenanigans. ("How many Oxford deans does it take to change a light bulb?" "Change??!??") Similarly, in Bristol, there is a famous clock with two minute hands.

Plus, even if you live in Neumarkt an der Ybbs, your sundial does not always show the correct noon! Thanks to the tilt of the earth's axis and the fact that the earth's orbit is elliptic, true noon varies through the year by a number of minutes (the so-called equation of time).
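A common back-of-envelope approximation for this equation of time (my addition, not from the original post) as a function of the day of the year:

```python
import math

def equation_of_time_minutes(day_of_year):
    """Approximate difference (in minutes) between sundial noon and mean noon.

    A widely used approximation combining the effect of the elliptic orbit
    and of the axial tilt; good to within about half a minute.
    """
    b = 2 * math.pi * (day_of_year - 81) / 364
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
```

The sundial runs up to about 16 minutes fast in early November and up to about 14 minutes slow in mid-February.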

So "winter time" is in no way more natural than the other time zone, and we should be free to choose a time zone according to what is convenient. At least for me, noon is not the center of my waking hours (it's more like 5.5 : 12). So aligning those more with the sun seems to be a pretty good idea.

PS: The title was a typo, but looking at it I prefer it the way it is...

Monday, October 24, 2016

Mandatory liability for software is a horrible idea

Over the last few days, a number of prominent web sites, including Twitter, Snapchat and Github, were effectively unreachable for extended periods. As became clear, the problem was that DynDNS, a provider of DNS services for these sites, was under a number of very heavy DDoS (distributed denial of service) attacks that were mainly coming from compromised internet of things devices, in particular web cams.

Even though I do not see much benefit in being able to change the color of my bedroom light via the internet, I love the idea of having lots of cheap devices (I continue to have a lot of fun with C.H.I.P.s, full scale Linux computers with a number of ports for just $5, also for Subsurface, where they open opportunities in particular for the mobile version). But there are of course concerns about how one can economically sustain a stable update cycle for such devices, in particular once they are built into black-box consumer products.

Now, after some dust has settled, comes of course the question "Who is to blame?", and should we do anything about this. Of course, the manufacturer of the web cam made this possible through far-from-perfect firmware. You could also blame DynDNS for not being able to withstand the storms that from time to time sweep the internet (a pretty rough place, after all), or services like Twitter for having a single point of failure in DynDNS (though that might be hard to prevent given the nature of the DNS system).

More than once I have now heard calls for new laws that would introduce liability for the manufacturer of the web cam, as they did not provide firmware updates in time to prevent these devices from being owned and then DDoSing around the internet.

This, I am convinced, would be a terrible idea: it would make many IT businesses totally uneconomic. Let's stick with the case at hand. What is the order of magnitude of the damage that occurred to big companies like Twitter? They probably lost about a weekend's worth of ad revenue. Twitter recently made about $6\cdot 10^8$ dollars per quarter, which averages to 6.5 million per day. Should the web cam manufacturer (or OEM or distributor) now owe Twitter 13 million dollars? I am sure that would cause immediate bankruptcy. Even the mere risk that this could happen would prevent anybody from producing web cams or similar things in the future, since nobody can produce non-trivial software that is free of bugs. You should strive to weed out all known bugs and provide updates, of course, but should you be made responsible, in a financial sense, if you couldn't?
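The back-of-envelope numbers can be checked in two lines (the quarterly revenue is my rough figure, not an exact one):

```python
# Rough damage estimate: a weekend of lost ad revenue.
quarterly_revenue = 6e8            # ~$600M per quarter (rough figure)
per_day = quarterly_revenue / 91   # a quarter is about 91 days
weekend = 2 * per_day

print(round(per_day / 1e6, 1))     # roughly 6.6 million dollars per day
print(round(weekend / 1e6))        # roughly 13 million for a weekend
```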

What was the damage caused by the Heartbleed bug? I am sure it was much more expensive. Who should pay for that? OpenSSL? Everybody that links against OpenSSL? The person that committed the faulty patch? The person that missed it in code review?

Even if you don't call up these astronomical sums and instead have a fixed fine (e.g. an unfixed vulnerability that gives root access to an attacker from the net costs $10,000), that would immediately stop all open source development. If you give away your software for free, do you really want to pay fines if not everything is perfect? I surely wouldn't.

For that reason, the GPL contains the following clauses (other open source licenses have similar ones):

 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
(capitalization in the original). Of course, there is the "required by applicable law" part, but I cannot see people giving you software for free if you can later make them pay fines.

And of course, it is almost impossible to carve out exceptions in such a law. For example, a "non-commercial" exception does not help, as even though nobody charges for open source software, a lot of it is actually provided with some sort of commercial interest.

Yes, I can understand the urge to make creators of defective products who don't give a damn about an update path responsible for the stuff they ship. And I have the greatest sympathy for consumer protection laws. But here the collateral damage would be huge: we might well lose the whole open source universe and every small software company except the few big ones that can afford the herds of lawyers needed to defend against these fines.

Note that I only argue against mandatory liability. It should of course always be possible for a provider of software/hardware to give some sort of "fit for purpose" guarantee to its customers, or a servicing contract in which they promise to fix bugs (maybe so that the customers can fulfill their own liabilities further downstream). But in most cases, the provider will charge for that. And the price might be higher than the current price of a light bulb with an IP address.

The internet is a rough place. If you expose your service to it, better make sure you can handle every combination of 0s and 1s that comes in from there, or live with the consequences. Don't blame the source of the bits (no matter how brain-dead the people at the other end might be).


Friday, October 07, 2016

My two cents on this year's physics Nobel prize

This year's Nobel prize is given for quite abstract concepts, so the popular science outlets struggle to give good explanations of what it is awarded for. I cannot add much to this, but over at Math Overflow, mathematicians asked for a mathematical explanation. So here is my go at an outline for people familiar with topology but not so much physics:

Let me try to give a brief explanation: All this is in the context of Fermi liquid theory, the idea that you can describe the low energy physics of these kinds of systems by pretending they are generated by free fermions in an external potential. So, all you need to do is to solve the single particle problem for the external potential and then fill up the energy levels from the bottom until you reach the total particle number (or actually the density). It is tempting (and conventional) to call these particles electrons, and I will do so here, but of course actual electrons are not free but interacting. This "Fermi liquid" explanation is just an effective description for long wavelengths (the IR end of the renormalization group flow), where it turns out that at those scales the interactions play no role (they are "irrelevant operators" in the language of the renormalization group).

The upshot is, we are dealing with free "electrons" and the previous paragraph was only essential if you want to connect to the physical world (but this is MATH overflow anyway).

Since the external potential comes from a lattice (crystal), it is invariant under lattice translations. So Bloch theory tells you that, as far as solving the Schrödinger equation goes, you can restrict your attention to wave functions living on the unit cell of the lattice. But you need to allow for quasi-periodic boundary conditions, i.e. when you go once around the unit cell you are allowed to pick up a phase. In fact, there is one phase for each generator of the first homotopy group of the unit cell. Each choice of these phases corresponds to one choice of boundary conditions for the wave function, and you can compute the eigenvalues of the Hamiltonian for these given boundary conditions (the unit cell is compact, so we expect discrete eigenvalues, bounded from below).
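As a toy example of such quasi-periodic boundary conditions (my illustration, not part of the Math Overflow answer): a single particle hopping on a ring of L sites, where going once around multiplies the wave function by a phase $e^{i\varphi}$. The spectrum can be computed either by diagonalizing the twisted hopping matrix or from the plane-wave answer $-2\cos((2\pi n+\varphi)/L)$, and the two agree:

```python
import numpy as np

# One particle hopping on a ring of L sites with a twisted boundary
# condition: psi_{j+L} = exp(i*phi) * psi_j.

L, phi = 8, 0.7   # illustrative values

H = np.zeros((L, L), dtype=complex)
for j in range(L):
    H[j, (j + 1) % L] = -1.0      # nearest-neighbour hopping
H[L - 1, 0] *= np.exp(1j * phi)   # the twisted boundary link
H = H + H.conj().T                # make it Hermitian

evals = np.linalg.eigvalsh(H)     # ascending eigenvalues
analytic = np.sort(-2 * np.cos((2 * np.pi * np.arange(L) + phi) / L))
```

Varying phi and watching the eigenvalues move is exactly the "eigenvalues as functions of the phases" picture of the next paragraph, here with a single U(1) phase.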

But these eigenvalues depend on the boundary conditions, and you can think of them as functions of the phases. Each of the phases takes values in U(1), so the space of possible phases is a torus, and you can think of the eigenvalues as functions on the torus. Actually, when going once around an irreducible cycle of the torus, not all eigenvalues have to come back to themselves; you can end up with a permutation, so strictly speaking this is not a function but a section of a bundle. But let's not worry too much about this, as generically this "level crossing" does not happen in two dimensions and occurs only at discrete points in 3D (this is Witten's argument with the 2x2 Hamiltonian).
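The 2x2 argument can be made concrete (a sketch of my own): near a would-be crossing, the two relevant levels are governed by a Hermitian 2x2 Hamiltonian $H = d_0\,\mathbf{1} + d_x\sigma_x + d_y\sigma_y + d_z\sigma_z$, whose splitting is $2\sqrt{d_x^2+d_y^2+d_z^2}$. A degeneracy requires tuning three real parameters to zero at once, so crossings are generically absent with two parameters and isolated points with three:

```python
import numpy as np

def splitting(dx, dy, dz, d0=0.0):
    """Gap between the two eigenvalues of d0*1 + dx*sx + dy*sy + dz*sz."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    h = d0 * np.eye(2) + dx * sx + dy * sy + dz * sz
    e = np.linalg.eigvalsh(h)
    return e[1] - e[0]   # equals 2*sqrt(dx**2 + dy**2 + dz**2)
```

The identity part $d_0$ moves both levels together and is irrelevant; only the three Pauli coefficients control the gap.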

The torus of possible phases is called the "Brillouin zone" by physicists, and its elements "inverse lattice vectors" (as you can think of the Brillouin zone as obtained by modding out by the dual lattice of the lattice we started with).

Now, if your electron density is N electrons per unit cell of the lattice, Fermi liquid theory asks you to think of the lowest N energy levels as occupied. This defines the "Fermi level", or more precisely the graph of the N-th eigenvalue over the Brillouin zone. This graph (viewed as a hypersurface) can have non-trivial topology, and the idea is that under small perturbations of the system (like changing the doping of the physical probe, or changing the pressure or external magnetic field, or whatever) things behave continuously, and thus the homotopy class cannot change and is robust (or "topological", as the physicist would say).

If we want to inquire about the quantum Hall effect, this picture is also useful: The Hall conductivity can be computed to leading order by linear response theory. This allows us to employ the Kubo formula to compute it as a certain two-point function or retarded Green's function. The relevant operators turn out to be related to the N-th level wave function and how it changes when we move around in the Brillouin zone: If we denote by u the coordinates of the Brillouin zone and by $\psi_u(x)$ the N-th eigenfunction for the boundary conditions implied by u, we can define a 1-form
$$ A = \sum_i \langle \psi_u|\partial_{u_i}|\psi_u\rangle\, du^i = \langle\psi_u|d_u|\psi_u\rangle.$$
This 1-form is actually the connection of a U(1) bundle, and the expression the Kubo formula asks us to compute turns out to be the first Chern number of that bundle (over the Brillouin zone).
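To see the Chern number actually come out as an integer, here is a numerical sketch of my own (not from the post): the two-band Qi-Wu-Zhang model stands in for the occupied band, and the lattice field-strength method of Fukui, Hatsugai and Suzuki replaces the continuum curvature integral with gauge-invariant plaquette phases on a discretized Brillouin zone:

```python
import numpy as np

def qwz_hamiltonian(kx, ky, m):
    """Two-band Qi-Wu-Zhang model (an illustrative stand-in)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, n=40):
    """Lattice (Fukui-Hatsugai) Chern number of the lower band."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = vecs[:, 0]          # eigenvector of the lowest band
    c = 0.0
    for i in range(n):
        for j in range(n):
            # gauge-invariant phase of the product of link variables
            # around one plaquette of the discretized Brillouin zone
            ux  = np.vdot(u[i, j], u[(i + 1) % n, j])
            uy  = np.vdot(u[(i + 1) % n, j], u[(i + 1) % n, (j + 1) % n])
            ux2 = np.vdot(u[i, (j + 1) % n], u[(i + 1) % n, (j + 1) % n])
            uy2 = np.vdot(u[i, j], u[i, (j + 1) % n])
            c += np.angle(ux * uy / (ux2 * uy2))
    return c / (2 * np.pi)
```

For a gap parameter in the topological regime (e.g. m = 1) the result is +-1 to machine precision, and 0 in the trivial regime (e.g. m = 3); the integer does not drift under small changes of m, which is the robustness the text describes.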

Again, that number, being an integer, cannot change under small perturbations of the physical system, and this is the explanation of the quantized plateaus in the QHE.

In modern applications, an important role is played by the (N-dimensional and thus finite-dimensional) projector onto the subspace of Hilbert space spanned by the eigenfunctions corresponding to the N lowest eigenvalues, again fibered over the Brillouin zone. One can then use K-theory (and in fact KO-theory) related to this projector to classify the possible classes of Fermi surfaces (these are the "topological phases of matter"; eventually, when the perturbation becomes too strong, even the discrete invariants can jump, which physically corresponds to a phase transition).
