## Friday, December 09, 2016

### Workshop on IoT Liability at 33c3

After my recent blog post on the dangers of liability for device manufacturers in the age of IoT, I decided to run a workshop at 33C3, the annual hacker conference of the Chaos Computer Club. I am proud I could convince Ulf Buermeyer (well-known judge, expert in constitutional law, hacker, podcaster) to host this workshop with me.

My main motivation is the hope that this will become a big issue in the coming year, while it is still early enough to influence policy before everybody commits to their favorite (snake oil) solution.

I have started collecting and sorting ideas in a Google document.

## Saturday, November 26, 2016

### Breaking News: Margarine even more toxic!

One of the most popular posts on this blog (as far as resonance goes) was the one on Scaling the Price of Margarine. Today I did the family weekend shopping and noticed I have to update the calculation:

At our local Rewe branch, they offer a pound of Lätta for 88 cents while asking 1.19 Euro for half a pound. With the ansatz from the old post, this means the price of the actual margarine is now -9.78 Euro/kg. This, by coincidence, is approximately also the price you have to pay to get rid of waste oil.
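For what it's worth, here is the calculation as code. This is a sketch under the simplest linear ansatz (a fixed packaging cost plus a price per kilogram of content, which is my assumption here; the exact figure depends on the ansatz of the original post), but the point survives: the implied content price is negative.

```python
# Linear ansatz (my assumption): shelf price = packaging + content_price * mass.
# Observed prices for Lätta at the store:
pound = (0.500, 0.88)   # (mass in kg, price in EUR) for a pound
half = (0.250, 1.19)    # and for half a pound

content_per_kg = (pound[1] - half[1]) / (pound[0] - half[0])
packaging = pound[1] - content_per_kg * pound[0]

print(content_per_kg)   # negative: effectively you are paid to take the margarine
print(packaging)        # the packaging carries the whole price, and then some
```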

## Sunday, November 13, 2016

### Theoretical diver

Besides physics, another hobby of mine is scuba diving. For many reasons, unfortunately, I don't have much time anymore to get in the water. As partial compensation, I started some time ago to contribute to Subsurface, the open source dive log program. Partly related to that, I also like to theorize about diving. To give that a form, I have now started another blog, The Theoretical Diver, to discuss aspects of diving that I have been thinking about.

### OpenAccess: Letter to the editor of Süddeutsche Zeitung

In yesterday's Süddeutsche Zeitung, there is an opinion piece by the historian Norbert Frei on the German government's OpenAccess initiative, which prompted me to write a letter to the editor (naturally in German, given here in translation):

On the opinion piece "Goldener Zugang" by Norbert Frei in the SZ of November 12/13, 2016:

In his piece, Mr Frei worries that, under the banner of OpenAccess, a cultural shift is being forced on scholarship from the outside. He fears that the natural scientists, together with the politicians, will force him to no longer present his findings in paper books, but instead to deposit everything, chopped into small article-sized morsels, in expensive digital archives, where they will await bit rot, since in no time at all the progress of hardware and software will render the data formats unreadable.

As a countermodel he cites the Gutenberg Bible, of which a majority of copies have survived the centuries. Now, I do not know when Mr Frei last leafed through his Gutenberg Bible; I have stood in front of one only a single time in my life: it lay in a display case of the Cambridge library, opened to one page, and no other page was accessible. Thanks to practical open access, however, it is not only good Christians who can keep a copy at home. More than that, the academic theologians among my acquaintances work, as a matter of course, with a digital version on their laptop or smartphone, since, thanks to full-text search, indexing and cross-references to other works, it is far more accessible for research.

Never mind that the OpenAccess initiative is supposed to include an exception for monographs. Nobody wants to forbid the writing of books. The point is merely that whoever wants to receive third-party public funding should not hold out their hand a second time when the public, above all the scientific public, wants to learn about the results. Professors at German universities do not write their scholarly publications for their private amusement; it is part of their official duties. Why, then, do they want to sell the fruits of their already remunerated work to the public libraries yet another time?

I can still well remember my pride when I first saw my name printed on paper, adorning the title page of my first publication. Beyond that, what matters most to me as a scientist is that what I find out is noticed and taken further by others. And I achieve that best when there are as few hurdles as possible to doing so.

I myself am a theoretical high-energy physicist; of course, disciplinary cultures differ widely. In my field it has been customary since the early nineties to deposit all of one's publications, from a one-page comment on another paper to a review of many hundreds of pages, in arXiv.org, a non-commercial preprint archive, where they can be found and read in full by all colleagues from the next morning on; even many excellent textbooks are available there by now. This global distribution, together with easy access (for several years now I have not had to look up a single paper article in a journal volume in our library; I find everything on my computer), has so many advantages that one gladly forgoes possible royalties, especially since these never existed for journal articles anyway and, with few exceptions, are vanishingly small compared to a W3 professor's salary; computed as an hourly wage, they would make any supermarket shelf-stacker walk off the job immediately. We natural scientists are well on our way to emancipating ourselves from parasitic academic publishers, who traditionally managed to squeeze billions in annual revenue out of the libraries for our work, while delegating the writing of the articles, the refereeing, the typesetting and the selection, unpaid, to publicly funded scientists, and charging exclusively for their gatekeeper function.

And since I am interested in readership, I will also post this letter on my blog.

## Thursday, October 27, 2016

### Daylight saving time about to end (end it shouldn't)

Twice a year, around the last Sunday in March and the last Sunday in October, everybody (in particular newspaper journalists) takes a few minutes off to rant about daylight saving time. So, for the first time, I want to join this tradition in writing.

Until I had kids, I could not have cared less about the question of changing the time twice a year. But at least for our kids (and secondarily also for myself), I realize the biorhythm is quite strong and it takes more than a week to adapt to the one-hour jet lag (in particular in spring, when it means getting out of bed "one hour earlier"). I still don't really care about cows that have to deliver their milk at different times, since there is no intrinsic reason the clock on the wall has to show a particular time when that is done, and if it were really a problem, the farmers could do it at a fixed UTC time.

So, obviously, it is a nuisance. What are the benefits that justify it? Well, obviously, in summer the sun sets at a later hour and we get more sun while being outside. That sounds reasonable. But why restrict it to the summer?

Which brings me to my point: If you ask me, I want to get rid of changing the offset to UTC twice a year and want to permanently adopt daylight saving time.

But now I hear people cry that this is "unnatural", that we have to keep the traditional time at least in winter, when it does not matter anyway because it is too cold to be outside (which, as we know, only holds for people with defective clothing). So how natural is CET (the time zone we set our clocks to in winter)? Let's take people living in Munich as an example.

First of all: it is not solar time! CET is the mean solar time at a longitude of 15 degrees east, which (at Munich's latitude) is close to Neumarkt an der Ybbs, somewhere in Austria not too far from Vienna. Munich is about a quarter of an hour behind. So this time is artificial as well, and with Berlin being closer to 15 degrees, it is probably Prussian.
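The arithmetic behind that is simply that the Earth turns one degree every four minutes, so mean solar time lags the zone time by four minutes per degree west of the zone meridian (the longitudes below are approximate values I plugged in):

```python
def zone_offset_minutes(longitude_deg_east, zone_meridian_deg_east=15.0):
    # mean solar time lags zone time by 4 minutes per degree west of the meridian
    return 4.0 * (zone_meridian_deg_east - longitude_deg_east)

print(zone_offset_minutes(11.6))   # Munich (about 11.6 degrees east): ~14 minutes
print(zone_offset_minutes(13.4))   # Berlin: ~6 minutes, closer to the meridian
```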

Also, a common time zone for Germany was only established in the 1870s, when the advent of railways and telegraphs made synchronization between different local times advantageous. So this "natural" time is not that old either.

It is so new that Christ Church college in Oxford still refuses to fully bow to it: their clock tower shows Greenwich time, but the cathedral services start according to solar time (about five minutes later), because they don't care about modern shenanigans. ("How many Oxford deans does it take to change a light bulb?" "Change??!??") Similarly, in Bristol there is a famous clock with two minute hands.

Plus, even if you live in Neumarkt an der Ybbs, your sundial does not always show the correct noon! Thanks to the tilt of the Earth's axis and the fact that the Earth's orbit is elliptical, solar noon varies over the year by a number of minutes (the "equation of time").
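A common textbook approximation for this variation (my own addition here, a fit rather than an exact ephemeris) shows the sundial running up to roughly a quarter of an hour fast or slow over the year:

```python
import math

def equation_of_time_minutes(day_of_year):
    # apparent (sundial) solar time minus mean solar time, in minutes;
    # a standard approximate fit combining the two effects mentioned above:
    # the tilt of the Earth's axis and the eccentricity of its orbit
    b = 2 * math.pi * (day_of_year - 81) / 364
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

eot = [equation_of_time_minutes(d) for d in range(1, 366)]
print(max(eot), min(eot))   # roughly +16 minutes (early November), -14 (mid-February)
```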

So, "winter time" is in no way more natural than the other time zone, and we should be free to choose a time zone according to what is convenient. At least for me, noon is not the center of my waking hours (the split is more like 5.5 : 12). So aligning those more with the sun seems to be a pretty good idea.

PS: The title was a typo, but looking at it I prefer it the way it is...

## Monday, October 24, 2016

### Mandatory liability for software is a horrible idea

Over the last few days, a number of prominent web sites, including Twitter, Snapchat and GitHub, were effectively unreachable for an extended period of time. As became clear, the problem was that Dyn, a provider of DNS services for these sites, was under a number of very heavy DDoS (distributed denial of service) attacks that were mainly coming from compromised Internet of Things devices, in particular web cams.

Even though I do not see a lot of benefit in being able to change the color of my bedroom light via the internet, I love the idea of having lots of cheap devices. (I continue to have a lot of fun with C.H.I.P.s, full-scale Linux computers with a number of ports for just 5 USD, also for Subsurface; in particular, they open opportunities for the mobile version.) There are, of course, concerns about how one can economically sustain a stable update cycle for such devices, in particular once they are built into black-box consumer products.

Now that some of the dust has settled comes, of course, the question "Who is to blame?" and whether we should do anything about this. Of course, the manufacturer of the web cams made this possible through far-from-perfect firmware. You could also blame Dyn for not being able to withstand the storms that from time to time sweep the internet (a pretty rough place, after all), or services like Twitter for having a single point of failure in Dyn (though that might be hard to prevent given the nature of the DNS system).

More than once I have now heard calls for new laws that would introduce liability for the manufacturer of the web cams, since they did not provide firmware updates in time to prevent these devices from being owned and then DDoSing around on the internet.

This, I am convinced, would be a terrible idea: it would make many IT businesses totally uneconomic. Let's stick with the case at hand. What is the order of magnitude of the damages that occurred to big companies like Twitter? They probably lost about a weekend's worth of ad revenue. Twitter recently made about \$$6\cdot 10^8$ per quarter, which averages to 6.5 million per day. Should the web cam manufacturer (or OEM or distributor) now owe Twitter 13 million dollars? I am sure that would cause immediate bankruptcy. Or rather, the mere risk that this could happen would prevent anybody from producing web cams or similar things in the future, as nobody can produce non-trivial software that is free of bugs. You should strive to weed out all known bugs and provide updates, of course, but should you be held responsible, in a financial sense, if you couldn't? What was the damage caused by the Heartbleed bug? I am sure that was much more expensive. Who should pay for it? OpenSSL? Everybody who links against OpenSSL? The person who committed the wrong patch? The person who missed it in code review? Even if you don't call up these astronomical sums and instead set fixed fines (e.g. an unfixed vulnerability that gives root access to an attacker from the net costs \$10,000), that would immediately stop all open source development: if you give away your software for free, do you really want to pay fines if not everything is perfect? I surely wouldn't.
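The back-of-the-envelope numbers above are easy to check (the quarterly revenue is the rough figure quoted above, not an exact accounting number):

```python
quarterly_revenue = 6e8                # USD, roughly Twitter's quarterly revenue
per_day = quarterly_revenue / 91.25    # a quarter is about 91 days
weekend_damage = 2 * per_day           # a two-day outage

print(per_day / 1e6)         # about 6.5 million dollars per day
print(weekend_damage / 1e6)  # about 13 million for a weekend
```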

For that reason, the GPL contains the following clauses (and other open source licenses have similar ones):

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
(capitalization in the original). Of course, there is the "required by applicable law" part, but I cannot see people giving you software for free if they may later have to pay fines for it.

And of course, it is also almost impossible to carve out exceptions in the law for this. For example, a "non-commercial" exception does not help: even though nobody charges for open source software, a lot of it is actually provided with some sort of commercial interest.

Yes, I can understand the urge to hold creators of defective products who don't give a damn about an update path responsible for the stuff they ship. And I have the greatest sympathy for consumer protection laws. But here the collateral damage would be huge: we might well lose the whole open source universe and every small software company except the few big ones that can afford the herds of lawyers needed to defend against such fines.

Note that I only argue against mandatory liability. It should of course always be possible for a provider of software or hardware to give some sort of "fit for purpose" guarantee to its customers, or a servicing contract in which it promises to fix bugs (perhaps so that the customers can fulfill their own liabilities toward their customers). But in most cases, the provider will charge for that, and the price might be higher than the current price of a light bulb with an IP address.

The internet is a rough place. If you expose your service to it, you'd better make sure you can handle every combination of 0s and 1s that comes in from there, or live with the consequences. Don't blame the source of the bits (no matter how brain-dead the people at the other end might be).

## Friday, October 07, 2016

### My two cents on this year's physics Nobel prize

This year's Nobel prize is given for quite abstract concepts, so the popular science outlets are struggling to give good explanations of what it is awarded for. I cannot add much to that, but over at Math Overflow, mathematicians asked for a mathematical explanation. So here is my go at an outline for people familiar with topology but not so much with physics:

Let me try to give a brief explanation: All this is in the context of Fermi liquid theory, the idea that you can describe the low-energy physics of these kinds of systems by pretending it is generated by free fermions in an external potential. So all you need to do is solve the single-particle problem for the external potential and then fill up the energy levels from the bottom until you reach the total particle number (or rather the density). It is tempting (and conventional) to call these particles electrons, and I will do so here, but of course actual electrons are not free but interacting. This "Fermi liquid" picture is just an effective description at long wavelengths (the IR end of the renormalization group flow), where it turns out that at those scales the interactions play no role (they are "irrelevant operators" in the language of the renormalization group).

The upshot is that we are dealing with free "electrons"; the previous paragraph was only essential if you want to connect to the physical world (but this is MATH Overflow anyway).

Since the external potential comes from a lattice (crystal), it is invariant under lattice translations. So Bloch theory tells you that, as far as solving the Schrödinger equation is concerned, you can restrict your attention to wave functions living on the unit cell of the lattice. But you need to allow for quasi-periodic boundary conditions, i.e. when you go once around the unit cell you are allowed to pick up a phase; in fact, there is one phase for each generator of the first homotopy group of the unit cell. Each choice of these phases corresponds to one choice of boundary conditions for the wave function, and you can compute the eigenvalues of the Hamiltonian for those boundary conditions (the unit cell is compact, so we expect discrete eigenvalues, bounded from below).
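To see this in the simplest possible setting, here is a toy computation (my own illustration, not part of the argument): a one-dimensional "unit cell" of four tight-binding sites, where the hop that wraps around the cell picks up the boundary phase. The eigenvalues come out discrete, bounded from below, and moving smoothly with the phase:

```python
import numpy as np

L = 4  # number of sites in the toy unit cell

def spectrum(theta):
    """Eigenvalues of a 1D tight-binding cell with twisted boundary phase theta."""
    h = np.zeros((L, L), dtype=complex)
    for i in range(L - 1):
        h[i, i + 1] = h[i + 1, i] = -1.0     # nearest-neighbor hopping
    h[L - 1, 0] = -np.exp(1j * theta)        # the wrapping hop carries the phase
    h[0, L - 1] = np.conj(h[L - 1, 0])       # keep the Hamiltonian Hermitian
    return np.linalg.eigvalsh(h)             # real eigenvalues, sorted ascending

print(spectrum(0.0))     # periodic boundary conditions
print(spectrum(np.pi))   # antiperiodic: the levels have shifted
```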

But these eigenvalues depend on the boundary conditions, and you can think of them as functions of the phases. Each of the phases takes values in U(1), so the space of possible phases is a torus, and you can think of the eigenvalues as functions on this torus. Actually, when going once around an irreducible cycle of the torus, not all eigenvalues have to come back to themselves; you can end up with a permutation, in which case it is not really a function but a section of a bundle. But let's not worry too much about this, as this "level crossing" generically does not happen in two dimensions and occurs only at discrete points in 3D (this is Witten's argument with the 2x2 Hamiltonian).

The torus of possible phases is called the "Brillouin zone" by physicists and its elements "reciprocal lattice vectors" (you can think of the Brillouin zone as obtained by modding out the dual lattice of the lattice we started with).

Now, if your electron density is N electrons per unit cell of the lattice, Fermi liquid theory asks you to think of the lowest N energy levels as occupied. This is the "Fermi level", or more precisely the graph of the N-th eigenvalue over the Brillouin zone. This graph (viewed as a hypersurface) can have non-trivial topology, and the idea is that under small perturbations of the system (like changing the doping of the physical probe, or the pressure, or an external magnetic field, or whatever) everything behaves continuously, so the homotopy class cannot change and is thus robust (or "topological", as a physicist would say).

If we want to inquire about the quantum Hall effect, this picture is also useful: The Hall conductivity can be computed to leading order by linear response theory. This allows us to employ the Kubo formula to compute it as a certain two-point function or retarded Green's function. The relevant operators turn out to be related to the N-th level wave function and how it changes as we move around the Brillouin zone: if we denote by u the coordinates of the Brillouin zone and by $\psi_u(x)$ the N-th eigenfunction for the boundary conditions implied by u, we can define a 1-form
$$A = \sum_i \langle \psi_u|\partial_{u_i}|\psi_u\rangle\, du^i = \langle\psi_u|d_u|\psi_u\rangle.$$
This 1-form is actually the connection of a U(1) bundle, and the expression the Kubo formula asks us to compute turns out to be the first Chern number of that bundle (over the Brillouin zone).

Again, that number, being an integer, cannot change under small perturbations of the physical system, and this is the explanation of the plateaus in the QHE.
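One can check the integrality numerically. The sketch below is not the original quantum Hall setting but a standard two-band toy model (my choice of illustration; it is often called the Qi-Wu-Zhang model), together with the Fukui-Hatsugai-Suzuki lattice discretization of the Chern number: sum the Berry flux through every plaquette of a discretized Brillouin zone, and the result is an integer that only jumps when the gap closes.

```python
import numpy as np

def h(kx, ky, m):
    """Two-band toy Hamiltonian: sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz."""
    dz = m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, np.sin(kx) - 1j * np.sin(ky)],
                     [np.sin(kx) + 1j * np.sin(ky), -dz]])

def lowest_band(m, n=30):
    """Lowest-band eigenvectors on an n x n grid over the Brillouin zone torus."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky, m))[1][:, 0]
    return u

def chern(u):
    """Sum of Berry fluxes through all plaquettes, over 2*pi (gauge invariant)."""
    n = u.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            ii, jj = (i + 1) % n, (j + 1) % n
            # Wilson loop of overlaps around one plaquette; its phase is the flux
            loop = (np.vdot(u[i, j], u[ii, j]) * np.vdot(u[ii, j], u[ii, jj]) *
                    np.vdot(u[ii, jj], u[i, jj]) * np.vdot(u[i, jj], u[i, j]))
            total += np.angle(loop)
    return total / (2 * np.pi)

print(round(abs(chern(lowest_band(-1.0)))))   # topological phase: |C| = 1
print(round(abs(chern(lowest_band(-3.0)))))   # trivial phase: C = 0
```

The random per-point phases that the eigensolver returns drop out of the plaquette product, which is why no gauge fixing is needed.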

In modern applications, an important role is played by the (N-dimensional and thus finite-dimensional) projector onto the subspace of Hilbert space spanned by the eigenfunctions corresponding to the N lowest eigenvalues, again fibered over the Brillouin zone. One can then use K-theory (KO-theory, in fact) related to this projector to classify the possible classes of Fermi surfaces (these are the "topological phases of matter"; eventually, when the perturbation becomes too strong, even the discrete invariants can jump, which physically corresponds to a phase transition).

## Monday, September 19, 2016

### Brute forcing Crazy Game Puzzles

As a kid in the 1980s, I loved my Crazy Turtles Puzzle ("Das verrückte Schildkrötenspiel"). For a number of variations, see here or here.

I had completely forgotten about those, but a few days ago I saw a self-made reincarnation when staying at a friend's house:

I tried for a few minutes to solve it, unsuccessfully (in case it is not clear: you are supposed to arrange the nine tiles in a square such that they form color-matching arrows wherever they meet).

So I took the picture above with the plan to either try a bit more at home or write a program to solve it. Yesterday, I had about an hour and did the latter. I am a bit proud of the implementation I came up with, and in particular of the fact that I essentially got the program right on the first attempt: it found the unique solution the first time I executed it. So here I share it:

#!/usr/bin/perl

# backs of arrows:      1 rot (red), 2 gelb (yellow), 3 gruen (green), 4 blau (blue)
# matching arrow tips:  8 red, 7 yellow, 6 green, 5 blue (pairs sum to 9)

@karten = (7151, 6754, 4382, 2835, 5216, 2615, 2348, 8253, 4786);

# split each card's four digits into an array of edge colors
foreach $karte (0..8) {
    $farbe[$karte] = [split //, $karten[$karte]];
}

&ausprobieren(0);

sub ausprobieren {
    my $pos = shift;

    foreach my $karte (0..8) {
        next if $benutzt[$karte];
        $benutzt[$karte] = 1;
        foreach my $dreh (0..3) {
            if ($pos % 3) {    # not in the leftmost column: match the left neighbor's right edge
                $suche = 9 - $farbe[$gelegt[$pos - 1]]->[(1 - $drehung[$gelegt[$pos - 1]]) % 4];
                next if $farbe[$karte]->[(3 - $dreh) % 4] != $suche;
            }
            if ($pos >= 3) {   # not in the top row: match the upper neighbor's bottom edge
                $suche = 9 - $farbe[$gelegt[$pos - 3]]->[(2 - $drehung[$gelegt[$pos - 3]]) % 4];
                next if $farbe[$karte]->[(4 - $dreh) % 4] != $suche;
            }

            $gelegt[$pos] = $karte;
            $drehung[$karte] = $dreh;
            # print @gelegt[0..$pos], " ", @drehung[0..$pos], "\n";  # debug output

            if ($pos == 8) {
                print "Fertig!\n";    # done: print tile number and rotation per position
                for $l (0..8) {
                    print "$gelegt[$l] $drehung[$gelegt[$l]]\n";
                }
            } else {
                &ausprobieren($pos + 1);
            }
        }
        $benutzt[$karte] = 0;    # backtrack
    }
}

Sorry for the German variable names, but the idea should be clear. Regarding the implementation: red, yellow, green and blue backs of arrows get the numbers 1, 2, 3, 4 respectively, and the pointy sides of the arrows get 8, 7, 6, 5 (so matching combinations sum to 9).

It implements a depth-first tree search where tile positions (numbered 0 to 8) are tried left to right, top to bottom. So tile $n$ shares a vertical edge with tile $n-1$ unless its number is 0 mod 3 (leftmost column), and it shares a horizontal edge with tile $n-3$ unless $n$ is less than 3, which means it is in the first row.

It tries rotating each tile by 0 to 3 times 90 degrees clockwise, so finding which arrow to match with a neighboring tile can also be computed with mod-4 arithmetic.
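For what it's worth, the mod-4 bookkeeping can be illustrated in isolation (in Python rather than Perl, and with my own indexing convention: the four digits of a tile are its edge colors listed clockwise from the top):

```python
def edge_color(tile_digits, edge, rot):
    # color on edge 0=top, 1=right, 2=bottom, 3=left after rotating the tile
    # clockwise by rot quarter-turns: it is the digit that started at (edge - rot)
    return tile_digits[(edge - rot) % 4]

tile = [7, 1, 5, 1]              # first tile of the list above, "7151"
print(edge_color(tile, 0, 0))    # unrotated top edge: 7
print(edge_color(tile, 0, 1))    # one clockwise turn: the old left edge (1) is now on top
```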

## Monday, June 20, 2016

### Restoring deleted /etc from TimeMachine

Yesterday, I managed to empty the /etc directory on my MacBook (don't ask how I did it; I was working on Subsurface and had written a Perl script that moves system files around and has to be run with sudo. And I was still debugging...).

Anyway, once I realized what the problem was, I did some googling but did not find the answer. So here, as a service to fellow humans googling for help, is how to fix this.

The problem is that /etc stores all kinds of system configuration files, and without it the system no longer knows how to do a lot of things. For example, it contains /etc/passwd, which holds a list of all users, their home directories and similar things; /etc/shadow, which contains the (hashed) passwords; and, most relevant in my case, /etc/sudoers, which lists the users who are allowed to run commands with sudo, i.e. execute commands with administrator privileges (in the GUI this shows up as a modal dialog asking you to type in your password to proceed).

In my case, all was gone. But, luckily enough, I had a time machine backup. So I could go 30 minutes back in time and restore the directory contents.

The problem was that after restoring it, it ended up as a symlink to /private/etc, and user helling wasn't allowed to access its contents. And I could not sudo my way in, since the system could not determine that I am allowed to sudo, because it could not read /etc/sudoers.

I tried a couple of things including a reboot (as a last resort I figured I could always boot in target disk mode and somehow fix the directory) but it remained in /private/etc and I could not access it.

Finally I found the solution (so here it is): I could see the folder in Finder (it had a red no-entry sign on it, meaning that I could not open it). But I could right-click it, select "Get Info", and there open the lock by typing in my password (no idea why that worked), give myself read (and, for that matter, write) permissions, and then everything was fine again.

## Tuesday, May 24, 2016

### Holographic operator ordering?

Believe it or not, at the end of this week I will speak at a workshop on algebraic and constructive quantum field theory. And (I don't know which of these two facts is more surprising) I will advocate holography.

More specifically, I will argue that holography seems to be a successful approach to formulating effective low-energy theories (similar to other methods like perturbation theory of weakly coupled quasi-particles or minimal models). And I will present this as a challenge to the community at the workshop: to show that the correlators computed with holographic methods indeed encode a QFT (according to your favorite set of rules, e.g. Wightman or Osterwalder-Schrader). My [kudos to an anonymous reader for pointing out a typo] guess would be that this has a non-zero chance of being a possible approach to the construction of (new) models in that sense, or alternatively to showing that the axioms are violated (which would be even more interesting for holography).

In any case, I am currently preparing my slides (I will not be able to post those as I have stolen far too many pictures from the interwebs including the holographic doctor from Star Trek Voyager) and came up with the following question:

In a QFT, the order of insertions in a correlator matters (unless we fix an ordering like time ordering). How is that represented on the bulk side?

## Thursday, April 21, 2016

### The Quantum in Quantum Computing

I am sure by now all of you have seen Canada's prime minister "explain" quantum computers at Perimeter. It's really great that politicians care about these things, and he managed to deliver the standard explanation for the speed-up of quantum computers compared to their classical cousins: it is because you can have superpositions of initial states and therefore "perform many operations in parallel".

Except, of course, that this is bullshit. This is not the reason for the speed-up; you can do the same with a classical computer, at least a probabilistic one: as step one, you perform a random process (throw a coin, spin a roulette wheel, whatever) to determine the initial state you start your computer with. Then, looking at it from the outside, the state of the classical computer is mixed, and the further time evolution also "does all the computations in parallel". That just follows from the formalism of (classical) statistical mechanics.

Of course, that does not help much, since the outcome is likely also probabilistic. But it has the same parallelism. And just as the state space of a qubit is the full Bloch sphere, the state space of a classical bit (allowing mixed states) is an interval, allowing a continuum of intermediate states.

The difference between quantum and classical lies elsewhere. It has to do with non-commuting operators (as those are essential for quantum properties), which allow for entanglement.

To be more specific, let us consider one of the most famous quantum algorithms, Grover's database lookup. There, the problem (at least in its original form) is to figure out which of $N$ possible "boxes" contains the hidden coin. Classically, you cannot do better than opening one after the other (or possibly in a random pattern), which takes $O(N)$ steps (on average).
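To make the classical baseline concrete, here is a minimal sketch (my illustration, not part of the original post) of the box-by-box search:

```python
def classical_search(n, secret):
    """Open boxes one after the other until the coin is found.

    Returns the number of boxes opened: n in the worst case and
    (n + 1) / 2 on average, i.e. O(n) steps.
    """
    steps = 0
    for box in range(n):
        steps += 1
        if box == secret:
            return steps
    raise ValueError("no box contained the coin")

n = 100
# averaging over all possible hiding places gives (n + 1) / 2 openings
average = sum(classical_search(n, s) for s in range(n)) / n
print(average)  # 50.5
```

Opening boxes in a random order does not change the expected count; it only randomizes which runs are lucky.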

For the quantum version, you first have to say how to encode the problem. The lore is that you start with an $N$-dimensional Hilbert space with a basis $|1\rangle,\ldots,|N\rangle$. The secret is that one of these basis vectors is picked. Let's call it $|\omega\rangle$; it is given to you in terms of a projection operator $P=|\omega\rangle\langle\omega|$.

Furthermore, you have at your disposal a way to create the flat superposition $|s\rangle = \frac1{\sqrt N}\sum_{i=1}^N |i\rangle$ and a number operator $K$ that acts like $K|k\rangle= k|k\rangle$, i.e. it is diagonal in the above basis and able to distinguish the basis elements in terms of its eigenvalues.

Then, what you are supposed to do is the following: You form two unitary operators $U_\omega = 1 - 2P$  (this multiplies $|\omega\rangle$ by -1 while being the identity on the orthogonal subspace, i.e. is a reflection on the plane orthogonal to $|\omega\rangle$) and $U_s = 2|s\rangle\langle s| - 1$ which reflects the vectors orthogonal to $|s\rangle$.

It is not hard to see that both $U_s$ and $U_\omega$ map the two-dimensional plane spanned by $|s\rangle$ and $|\omega\rangle$ into itself. They are both reflections, and thus their product is a rotation by twice the angle between the two mirror planes, which is given in terms of the scalar product $\langle s|\omega\rangle =1/\sqrt{N}$ as $\phi =\sin^{-1}\langle s|\omega\rangle$.

But obviously, using a rotation by $\cos^{-1}\langle s|\omega\rangle$, one can rotate $|s\rangle$ onto $|\omega\rangle$. So all we have to do is apply the product $(U_sU_\omega)^k$, where $k$ is the ratio between these two angles, which is $O(\sqrt{N})$. (No need to worry that this is not an integer; the error is $O(1/N)$ and has no influence.) Then you have turned your initial state $|s\rangle$ into $|\omega\rangle$, and by measuring the observable $K$ above you know which box contained the coin.
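The whole procedure fits into a few lines of plain Python (a simulation sketch of mine, not from the post; since all vectors involved are real, lists of floats suffice, and the marked vector is taken to be index 0 without loss of generality):

```python
import math

def grover_probability(n):
    """Simulate k Grover iterations on a real n-dimensional state vector
    and return the probability of measuring the marked basis state."""
    # flat superposition |s>
    state = [1.0 / math.sqrt(n)] * n

    phi = math.asin(1.0 / math.sqrt(n))         # half the rotation angle per step
    k = round((math.pi / 2 - phi) / (2 * phi))  # ~ (pi/4) sqrt(n) steps

    for _ in range(k):
        state[0] = -state[0]                 # U_omega = 1 - 2P: flip the marked amplitude
        overlap = sum(state) / math.sqrt(n)  # <s|state>
        # U_s = 2|s><s| - 1
        state = [2 * overlap / math.sqrt(n) - a for a in state]

    return state[0] ** 2  # probability to find the coin on measuring K

print(grover_probability(1024))  # close to 1 after only ~25 iterations
```

For $N=4$ the rotation works out exactly: a single iteration lands precisely on $|\omega\rangle$.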

Since this took only $O(\sqrt{N})$ steps this is a quadratic speed up compared to the classical case.

So how did we get this? As I said, it's not the superposition. Classically, we could prepare the probabilistic state that opens each box with probability $1/N$. But we have to expect to repeat this $O(N)$ times, so it is essentially as fast as systematically opening one box after the other.

To have a better unified classical-quantum language, let us say that we have a state space spanned by $N$ pure states $1,\ldots,N$. What we can do in the quantum case is to turn an initial state which has probability $1/N$ to be in each of these pure states into one that is deterministically in the sought-after state.

Classically, this is impossible, since no time evolution can turn a mixed state into a pure state. One way to see this is that the entropy of the probabilistic state is $\log(N)$ while it is 0 for the sought-after state. If you like, classically we only have the observables given by the C*-algebra generated by $K$, i.e. we can only observe which box we are dealing with. Both $P$ and $U_\omega$ are also in this classical algebra (they are diagonal in the special basis), and the strict classical analogue would be that we are given a rank-one projector in that algebra and have to figure out which one.
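The entropy count can be checked in a few lines (again my illustration, not from the post): reversible classical dynamics permutes the pure states and therefore cannot change the entropy, so it can never map the flat mixture to the target.

```python
import math

def shannon_entropy(p):
    """Shannon entropy -sum p_i log p_i of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 8
flat = [1.0 / n] * n              # open each box with probability 1/N
target = [1.0] + [0.0] * (n - 1)  # deterministically the right box

assert abs(shannon_entropy(flat) - math.log(n)) < 1e-12  # entropy log(N)
assert shannon_entropy(target) == 0                      # entropy 0

# A reversible classical evolution is a permutation of the pure states
# and hence leaves the entropy unchanged -- it can never reach the target:
permuted = [flat[(i + 3) % n] for i in range(n)]
assert shannon_entropy(permuted) == shannon_entropy(flat)
```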

But quantum mechanically, we have more: we also have $U_s$, which does not commute with $K$ and is thus not in the classical algebra. The trick really is that in this bigger quantum algebra, generated by both $K$ and $U_s$, we can form a pure state that becomes the probabilistic state when restricted to the classical algebra. And as it is a pure state, we can come up with a time evolution that turns it into the pure state $|\omega\rangle$.

So, this is really where the non-commutativity and thus the quantumness comes in. And we shouldn't really expect Trudeau to be able to explain this in a two sentence statement.

PS: The actual speed-up in the end comes, of course, from the fact that probabilities are amplitudes squared; the normalization in $|s\rangle$ is $1/\sqrt{N}$, which makes the angle to be rotated through proportional to $1/\sqrt{N}$.

### One more resuscitation

This blog has been silent for almost two years, for a number of reasons. First, I myself stopped reading blogs on a daily basis, as in opening Google Reader right after the arXiv and checking what's new. I had already stopped doing that due to time constraints before Reader was shut down by Google, and I must say I don't miss anything. My focus shifted much more to Twitter and Facebook, and from there I am directed to the occasional blog post, but as I said, I don't check blogs systematically anymore. And I assume others do the same.

But from time to time I run into things that I would like to discuss on a blog, where (as my old readers probably know) I am mainly interested in discussions. I don't write here to educate (others) but only myself: I write about something I found interesting and would like to have further input on.

Plus, this should be more permanent than a Facebook post (which is gone once it scrolls out of the bottom of the screen) and more substantial than the occasional 140-character remark on Twitter.

Assuming that others have adapted their reading habits in a similar way to the year 2016, I have set up If This Then That to announce new posts to FB and Twitter, so others might have a chance to find them.