The leading journal Nature featured it on the cover of its first December issue: a newly discovered pattern in knot theory, a branch of mathematics with applications in, among other fields, molecular genetics. The news value lay not so much in the find itself as in how it was made: by machine learning. A first in theoretical mathematics, according to the researchers.
In machine learning, computers are fed large data sets in the hope of revealing patterns. Well-known applications are the recognition of faces or tumors: police work is made easier by matching surveillance-camera images against faces in a photo database, and doctors get help with the early detection of common cancers. These are familiar success stories of machine-learning algorithms.
Relatively new, and still relatively unknown, is the use of machine learning in pure scientific research. Responsible for the discovery in the theory of mathematical knots is the British company DeepMind, which, like Google, is owned by Alphabet. In 2019, András Juhász and Marc Lackenby, two mathematicians from Oxford University, got in touch with Alex Davies and Nenad Tomasev from DeepMind. From that contact arose the idea of using artificial intelligence to search for relationships in knot theory, the field of Juhász and Lackenby.
Rubber bands or bicycle tires
A ‘knot’ is a closed curve in three-dimensional space. To a mathematician, a shoelace knot is not a knot: a lace has two loose ends and is therefore not a closed curve. A knot never passes through itself, but it can cross over itself multiple times. A simple closed loop with no crossings, such as a rubber band or a bicycle tire, also meets the definition of a knot; mathematicians call such a loop an ‘unknot’.
Two knots are the same if you can manipulate one of them to look like the other, without cutting it, of course. On a flat surface, a knot is represented as a two-dimensional projection, in which it is visible at each crossing which part passes over and which part under. Such a projection then consists of several ‘strands’.
A knot is not changed by lifting one strand over another. If you perform this type of manipulation repeatedly, the knot can end up looking completely different. Even a person who deals with knots on a daily basis cannot immediately see that nothing essential has changed.
Mathematicians have found a solution to this: they use ‘invariants’ to determine whether two knots are really different. An invariant is a quantity that does not change when the knot is deformed. An example from geometry: if A and B are two fixed points on a circle, and P is a third point moving along the circle, then the angle between the line through A and P and the line through B and P does not change, as long as P stays on the same arc.
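The circle example can be checked numerically. The short sketch below (illustrative only; the point names and sample parameters are chosen here, not taken from the article) computes the angle at P for several positions of P on the same arc and shows that it stays constant, which is the inscribed angle theorem.

```python
import math

# Numerical check of the invariant described above: for two fixed points A
# and B on a circle, the angle at a third point P does not change as P moves
# along the same arc of the circle.
def angle_at(p, a, b):
    """Angle at vertex p between the lines p-a and p-b, in degrees."""
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def on_unit_circle(t):
    """Point on the unit circle at angle t (radians)."""
    return (math.cos(t), math.sin(t))

A, B = on_unit_circle(0.0), on_unit_circle(1.0)  # two fixed points
# Move P along the arc that does not contain A and B; the angle is invariant.
angles = [angle_at(on_unit_circle(t), A, B) for t in (2.0, 3.0, 4.0, 5.0)]
print(angles)  # four (numerically) identical values
```

The constant value is exactly half the central angle spanned by A and B, here 0.5 radian, about 28.6 degrees.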
A simple knot invariant is ‘tricolorability’. A knot is tricolorable if its strands can be colored with three colors in such a way that the three strands around each crossing are either all the same color or all differently colored, and at least two colors are actually used. If you move a few loops so that the knot looks different, the tricolorability is preserved. A knot that is not tricolorable is still not tricolorable after deformation.
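Because the condition is purely combinatorial, tricolorability can be checked by brute force over all color assignments. The sketch below is a minimal illustration, assuming a diagram is given as a list of crossings, each naming the three strands (by index) that meet there; the crossing data for the two example knots is derived from their standard diagrams.

```python
from itertools import product

# Brute-force tricolorability check. Each crossing is a triple of strand
# indices: the over-strand and the two under-strand pieces meeting there.
def tricolorable(num_strands, crossings):
    for colors in product(range(3), repeat=num_strands):
        if len(set(colors)) < 2:
            continue  # at least two colors must actually be used
        # At every crossing the three strands must be all the same color
        # (set size 1) or all different (set size 3).
        if all(len({colors[a], colors[b], colors[c]}) in (1, 3)
               for a, b, c in crossings):
            return True
    return False

# Trefoil knot: 3 strands, 3 crossings.
trefoil = [(2, 0, 1), (1, 2, 0), (0, 1, 2)]
# Figure-eight knot: 4 strands, 4 crossings.
figure_eight = [(3, 1, 2), (1, 3, 0), (0, 2, 3), (2, 0, 1)]

print(tricolorable(3, trefoil))       # True
print(tricolorable(4, figure_eight))  # False
```

The trefoil (the simplest true knot) is tricolorable, while the figure-eight knot is not, so this invariant already proves those two knots are different.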
Although tricolorability is easy to check, this invariant is not very powerful. If only one of two knots is tricolorable, the knots are distinct. But the converse need not hold: two different knots can both be tricolorable, or both not.
More sophisticated knot invariants make use of ‘polynomials’: each knot is labeled with a formula that encodes certain properties of the knot, such as the number of crossings. Well known are the Conway polynomial, named after John Conway, and the Jones polynomial, discovered by Vaughan Jones, who received a Fields Medal for it in 1990. These invariants are much more refined than tricolorability. The Jones polynomial in particular can distinguish a great many knots from one another.
The Oxford and DeepMind researchers used the ‘slope’ and ‘signature’ of knots in their research. The slope of a knot is a number, not necessarily an integer, that is calculated from the geometric properties of the knot. The signature is an algebraically defined quantity related to four-dimensional properties of the knot. Abstract quantities, not easy to visualize but very useful: they are invariants. So if two knots are the same, so are their slopes and signatures. In other words, if two knots have different slopes or different signatures, it is clear that they are different knots.
The researchers calculated the slope and signature of millions of different knots. The computer was then trained to look for a relationship between these invariants. It worked: after a while, the computer was able to predict the signature with great precision from the slope. The researchers then cast the relationship they had found into a formula.
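The final step, distilling a formula from the trained model's predictions, can be sketched in miniature. The example below is illustrative only: the data is synthetic, the linear relationship (signature roughly half the slope, plus noise) is invented for the sketch, and the actual study trained a neural network on real knot invariants rather than fitting a one-variable regression.

```python
import numpy as np

# Minimal sketch: given a table of two invariants computed for many knots,
# fit a simple model predicting one (the "signature") from the other (the
# "slope"), and read the relationship off as a formula.
rng = np.random.default_rng(0)

slope = rng.uniform(-50, 50, size=10_000)
# Hypothetical ground truth for this sketch: signature ~ 0.5 * slope + noise.
signature = 0.5 * slope + rng.normal(0.0, 1.0, size=slope.shape)

# Least-squares fit of signature = a * slope + b.
a, b = np.polyfit(slope, signature, deg=1)
print(f"fitted: signature ~ {a:.3f} * slope + {b:.3f}")
```

With enough examples the fit recovers the underlying coefficients almost exactly; the point is that a relationship found by a learning algorithm can then be written down as an explicit conjecture to be proved.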
The work was not yet finished, because the connection discovered was only a conjecture. Mathematics is a discipline of absolute certainty, and a rigorous mathematical proof was still lacking. Before long, however, the Oxford and DeepMind mathematicians managed to deliver the proof. They published it on the preprint server arXiv. Once the peer-review process is complete (in mathematics this can take months or even years) and no errors have been found, it will be published in a mathematical journal.
Computer as a tool
Mathematical research has always been a combination of intuition and formalism. A good intuition is needed to see the big picture, strict formalism to achieve certainty. Computers have been valuable tools for decades: simulations and visualizations help mathematicians discover patterns and develop their intuition. What makes the new study so special is not that humans used the computer to spot a connection, but that the computer discovered a connection itself.
Machine learning is developing at a breakneck pace. Will it significantly improve our understanding of mathematics in the future? Not everyone is convinced of that yet. Juhász is naturally enthusiastic about his own result, but at the same time acknowledges that machine learning is not (yet) capable of tackling mathematical problems for which big data cannot be generated.
Ernest Davis, a computer scientist at New York University and co-author of the popular science book Rebooting AI, was critical in a review he wrote in response to the Nature publication: “Deep learning left me in the dark about the overall structure of knots. It only grasped two invariants as single numbers, completely outside the context of knots.”
Mathematician Roland van der Veen from the University of Groningen also believes that it is too early to talk about ‘impact’. “It’s great that they found a connection by examining the data, but I wonder whether something similar could have been found without machine learning,” he writes by email. Of course, he cannot rule out that artificial intelligence will contribute more in the future, but according to him this is not yet evident from this result. He adds that this does not change the fact that the role of computers is already large: “There is a large amount of knot software available. Great progress has been made this century in systematizing our knowledge of knots.”
Also read: AlphaFold predicts a 3D image of the structure of human proteins
Also read: AlphaStar reaches professional level in StarCraft II video game
A version of this article also appeared in NRC Handelsblad on 19 February 2022