I got the impression that many physicists who have not yet spent much time with this expect deep learning, and deep neural networks in particular, to be some kind of silver bullet that can answer all kinds of questions humans have not been able to answer despite considerable effort. I think this hope is at best premature. Looking at the (admittedly impressive) examples where it works (playing Go, classifying images, speech recognition, event filtering at the LHC), these seem to be problems where humans have at least a rough idea of how to solve them (if it is not something humans do every day anyway, like understanding text) and also roughly how one would code it, but that are too messy or vague to be treated by a traditional program.
So, during some of the less entertaining talks, I sat down and thought about problems where I would expect neural networks to perform badly. If this approach fails even in simple cases that are fully under control, one should maybe curb the expectations for the more complex cases one would love to have the answer for. In the case of the workshop, that would be guessing topological (discrete) data that depends very discontinuously on the model parameters. A simple instance is a 2-torus wrapped by two 1-branes, where the computer is supposed to compute the number of matter generations arising from open strings at the intersections. That is: given two branes, specified by their slopes with respect to the cycles of the torus, how often do they intersect? Of course these numbers depend sensitively on the slope as a real number, since for rational slopes [latex]p/q[/latex] and [latex]m/n[/latex] the intersection number is the absolute value of [latex]pn-qm[/latex]. My guess would be that this is almost impossible for a neural network to get right, let alone the much more complicated variants of this simple problem.
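For concreteness, the function the network would have to learn can be written down in one line. This is only a sketch in Wolfram Language (the name intersectionNumber is just a label chosen here); the branes are given by their winding numbers along the two cycles, i.e. slopes p/q and m/n:

    (* intersection number of two 1-branes on the 2-torus, with winding
       numbers (p, q) and (m, n) along the two cycles, i.e. slopes p/q and m/n *)
    intersectionNumber[{p_, q_}, {m_, n_}] := Abs[p n - q m]

    intersectionNumber[{1, 2}, {3, 1}]   (* == 5 *)

Viewed as a function of the two slopes as real numbers, this jumps by large amounts under arbitrarily small changes, which is the whole point of the example.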
Related, but with the possibility for nicer pictures, is the following: can a neural network learn the shape of the Mandelbrot set? Let me remind those of you who cannot remember the 80s anymore: for a complex number c you repeatedly apply the function
[latex]f_c(z)= z^2 +c[/latex]
starting from 0 and ask whether the iteration stays bounded (a quick check shows that once you leave the disk [latex]|z| < 2[/latex] you cannot avoid running off to infinity). You color the point c in the complex plane according to the number of times you have to apply [latex]f_c[/latex] to 0 before leaving this disk. I decided to do this for complex numbers x+iy in a small rectangle whose real part starts at -0.74.
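In code, this coloring is just an escape-time count. Here is a minimal Wolfram Language sketch (the cap maxIter and the name escapeSteps are choices made here, not taken from the original program):

    (* number of applications of f_c(z) = z^2 + c, starting from z = 0,
       before |z| leaves the disk of radius 2; capped at maxIter *)
    escapeSteps[c_, maxIter_ : 100] := Module[{z = 0. + 0. I, n = 0},
      While[Abs[z] <= 2 && n < maxIter, z = z^2 + c; n++];
      n]

Points of the Mandelbrot set itself never leave the disk and simply get the cap value; that plateau, together with the fractal boundary around it, is what the predictor would have to reproduce.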
I wrote a small Mathematica program to compute this image. Also built into Mathematica is a neural network: you can feed training data to the function Predict[]. For me, these were 1,000,000 points in this rectangle together with the number of steps it takes each of them to leave the 2-ball. Mathematica then thinks for about 24 hours and spits out a predictor function. In outline, the whole experiment fits into a few lines like the ones sketched below.
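Again, this is only a sketch, not the actual program: the rectangle bounds are placeholders (only the left edge -0.74 survives above), escapeSteps is the counting function from the previous sketch, and Method -> "NeuralNetwork" just makes explicit that Predict should use its built-in neural network rather than whatever method it would pick on its own.

    (* placeholder rectangle: only the left edge -0.74 is stated above *)
    {xmin, xmax} = {-0.74, -0.70};
    {ymin, ymax} = {0.20, 0.24};

    (* one training pair per random point c: {Re c, Im c} -> escape count *)
    trainingData = Table[
       With[{c = RandomReal[{xmin, xmax}] + I RandomReal[{ymin, ymax}]},
        {Re[c], Im[c]} -> escapeSteps[c]],
       {10^6}];

    (* the step that takes on the order of a day *)
    predictor = Predict[trainingData, Method -> "NeuralNetwork"];

    (* plot the learned function over the training rectangle; evaluating it
       outside the rectangle gives the extrapolation test further down *)
    DensityPlot[predictor[{x, y}], {x, xmin, xmax}, {y, ymin, ymax}]

The last line plots what the network has learned, and the result looks like this: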
There is some similarity, but clearly it has no idea about the fractal nature of the Mandelbrot set. If you really believe in the magic powers of neural networks, you might even hope that once it has learned the function on this rectangle it could extrapolate to the outside. Well, at least in this case, that hope is not justified: the neural network thinks the correct continuation looks like this:
Ehm. No.
All this comes, of course, with the caveat that I am no expert on neural networks and did not attempt to tune the result in any way; I only took the predictor function built into Mathematica. Maybe with a bit of coding and TensorFlow one can do much better. But on the other hand, this is a simple two-dimensional problem. At least for traditional approaches, it should be much simpler than the other, much higher-dimensional problems physicists are really interested in.