Deep learning and toposes

Judging from this and that paper, deep learning is the string theory of the 2020s for geometers and representation theorists.

If you want to know quickly what neural networks really are, I can recommend the post demystifying deep learning.

The typical layout of a deep neural network has an input layer $L_0$ allowing you to feed $N_0$ numbers to the system (a vector $\vec{v_0} \in \mathbb{R}^{N_0}$), an output layer $L_p$ spitting $N_p$ numbers back (a vector $\vec{v_p} \in \mathbb{R}^{N_p}$), and $p-1$ hidden layers $L_1,\dots,L_{p-1}$ where all the magic happens. The hidden layer $L_i$ has $N_i$ virtual neurons, their states giving a vector $\vec{v_i} \in \mathbb{R}^{N_i}$.



Picture taken from Logical Information Cells I

For simplicity let’s assume all neurons in layer $L_i$ are wired to every neuron in layer $L_{i+1}$, with the relevance of these connections given by a matrix of weights $W_i \in M_{N_{i+1} \times N_i}(\mathbb{R})$.

If at any given moment the ‘state’ of the neural network is described by the state-vectors $\vec{v_1},\dots,\vec{v_{p-1}}$ and the weight-matrices $W_0,\dots,W_{p-1}$, then an input $\vec{v_0}$ will typically result in new states of the neurons in layer $L_1$ given by

\[
\vec{v_1}' = c_0(W_0.\vec{v_0}+\vec{v_1}) \]

which will then give new states in layer $L_2$

\[
\vec{v_2}' = c_1(W_1.\vec{v_1}'+\vec{v_2}) \]

and so on, rippling through the network, until we get as the output

\[
\vec{v_p} = c_{p-1}(W_{p-1}.\vec{v_{p-1}}') \]

where all the $c_i$ are fixed smooth activation functions $c_i : \mathbb{R}^{N_{i+1}} \rightarrow \mathbb{R}^{N_{i+1}}$.

This is just the dynamics, or forward pass, of the network.
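To make this concrete, here is a minimal numpy sketch of the forward pass (my own toy illustration, not taken from any of the papers mentioned): layer sizes, weights and hidden states are random, and I take all activation functions $c_i$ to be $\tanh$ applied componentwise.

```python
import numpy as np

rng = np.random.default_rng(0)

# layer sizes N_0,...,N_p (here p = 3: input, two hidden layers, output)
N = [4, 5, 3, 2]
p = len(N) - 1

# weight matrices W_i of shape N_{i+1} x N_i, and hidden states v_1,...,v_{p-1}
W = [rng.normal(size=(N[i + 1], N[i])) for i in range(p)]
v = [rng.normal(size=N[i]) for i in range(1, p)]
c = [np.tanh] * p                          # activation functions c_0,...,c_{p-1}

def forward(v0):
    """Ripple an input v0 through the network: v_{i+1}' = c_i(W_i . v_i' + v_{i+1})."""
    state = v0
    for i in range(p):
        bias = v[i] if i < p - 1 else 0.0  # the output layer has no stored state to add
        state = c[i](W[i] @ state + bias)
    return state

v0 = rng.normal(size=N[0])
print(forward(v0))                         # the output vector, an element of R^{N_p}
```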

The learning happens by comparing the computed output with the expected output, and working backwards through the network to alter slightly the state-vectors in all layers, and the weight-matrices between them. This process is called back-propagation, and involves the gradient descent procedure.
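As a toy illustration of one such backward step (again my own sketch, for a single layer with a quadratic loss, $\tanh$ activation and learning rate $0.1$): compare the computed output with the expected one and take one gradient-descent step on the weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))     # weights of a single layer R^3 -> R^2
a = rng.normal(size=3)          # input vector
b = rng.normal(size=2)          # expected output
eps = 0.1                       # learning rate

out = np.tanh(W @ a)            # computed output
err = out - b                   # compare with the expected output
grad_W = np.outer(err * (1 - out**2), a)   # chain rule: gradient of (1/2)|tanh(W.a) - b|^2 in W
W = W - eps * grad_W            # one gradient-descent step on the weights
```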

Even from this (over)simplified picture it seems doubtful that set-valued (!) toposes are suitable to describe deep neural networks, as the Paris-Huawei-topos-team claims in their recent paper Topos and Stacks of Deep Neural Networks.

Still, there is a vast generalisation of neural networks: learners, developed by Brendan Fong, David Spivak and Remy Tuyeras in their paper Backprop as Functor: A compositional perspective on supervised learning (which btw is an excellent introduction for mathematicians to neural networks).

For any two sets $A$ and $B$, a learner $A \rightarrow B$ is a tuple $(P,I,U,R)$ where

  • $P$ is a set, a parameter space of some functions from $A$ to $B$.
  • $I$ is the interpretation map $I : P \times A \rightarrow B$ describing the functions in $P$.
  • $U$ is the update map $U : P \times A \times B \rightarrow P$, part of the learning procedure. The idea is that the function described by the new parameter $U(p,a,b)$ sends $a$ to an element closer to $b$ than the function described by $p$ did.
  • $R$ is the request map $R : P \times A \times B \rightarrow A$, the other part of the learning procedure. The idea is that the new element $R(p,a,b)=a'$ in $A$ is such that $p(a')$ will be closer to $b$ than $p(a)$ was.

The request map is also crucial in defining the composition of two learners $A \rightarrow B$ and $B \rightarrow C$. $\mathbf{Learn}$ is the (symmetric, monoidal) category with objects all sets and morphisms equivalence classes of learners (defined in the natural way).
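In code, a learner is just a record of these four pieces of data, and composition can be spelled out along the lines of the Fong–Spivak–Tuyeras paper: parameters become pairs, interpretations compose, and the request map of the second learner supplies the intermediate target for the update and request of the first. The sketch below is my own Python rendering (the parameter sets are left implicit and the equivalence relation is ignored); read it as an illustration of the idea rather than a faithful transcription of the paper’s definitions.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Learner:
    """A learner A -> B: an interpretation map I, an update map U and a request map R
    (the parameter set P is left implicit: parameters are whatever values we pass in)."""
    I: Callable[[Any, Any], Any]        # I : P x A -> B
    U: Callable[[Any, Any, Any], Any]   # U : P x A x B -> P
    R: Callable[[Any, Any, Any], Any]   # R : P x A x B -> A

def compose(f: Learner, g: Learner) -> Learner:
    """Composite learner A -> C of f : A -> B and g : B -> C.
    Parameters are pairs (p, q); g's request map supplies the intermediate target for f."""
    def I(pq, a):
        p, q = pq
        return g.I(q, f.I(p, a))
    def U(pq, a, c):
        p, q = pq
        b = f.I(p, a)
        return (f.U(p, a, g.R(q, b, c)), g.U(q, b, c))
    def R(pq, a, c):
        p, q = pq
        b = f.I(p, a)
        return f.R(p, a, g.R(q, b, c))
    return Learner(I, U, R)
```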

In this way we can view the deep neural network from before (with layers $L_0,\dots,L_p$) as the composition of $p$ learners
\[
\mathbb{R}^{N_0} \rightarrow \mathbb{R}^{N_1} \rightarrow \mathbb{R}^{N_2} \rightarrow \dots \rightarrow \mathbb{R}^{N_p} \]
where the learner describing the transition from the $i$-th to the $i+1$-th layer is given by the equivalence class of data $(A_i,B_i,P_i,I_i,U_i,R_i)$ with
\[
A_i = \mathbb{R}^{N_i},~B_i = \mathbb{R}^{N_{i+1}},~P_i = M_{N_{i+1} \times N_i}(\mathbb{R}) \times \mathbb{R}^{N_{i+1}} \]
and interpretation map for $p = (W_i,\vec{v}_{i+1}) \in P_i$
\[
I_i(p,\vec{v_i}) = c_i(W_i.\vec{v_i}+\vec{v}_{i+1}) \]
The update and request maps (encoding back-propagation and gradient-descent in this case) are given explicitly in theorem III.2 of the paper, and they behave functorially (whence the title of the paper).
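To give the flavour of those maps for a single layer, here is my own hedged sketch: a quadratic loss, one gradient step with learning rate $\varepsilon$, and $\tanh$ activation. The exact formulas and normalizations of theorem III.2 may differ; this only illustrates the shape of the data.

```python
import numpy as np

eps = 0.1  # learning rate

def I(params, v):
    """Interpretation map: I((W, b), v) = c(W.v + b), with c = tanh componentwise."""
    W, bias = params
    return np.tanh(W @ v + bias)

def U(params, v, target):
    """Update map: one gradient step on the quadratic loss (1/2)|I(params, v) - target|^2."""
    W, bias = params
    out = np.tanh(W @ v + bias)
    delta = (out - target) * (1 - out**2)            # error pushed back through the tanh
    return (W - eps * np.outer(delta, v), bias - eps * delta)

def R(params, v, target):
    """Request map: nudge the input in the direction that decreases the same loss."""
    W, bias = params
    out = np.tanh(W @ v + bias)
    delta = (out - target) * (1 - out**2)
    return v - eps * (W.T @ delta)
```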

More generally, we will now associate objects of a topos (actually just sheaves over a simple topological space) to a network of $p$ learners
\[
A_0 \rightarrow A_1 \rightarrow A_2 \rightarrow \dots \rightarrow A_p \]
inspired by section I.2 of Topos and Stacks of Deep Neural Networks.

The underlying category will be the poset-category (the opposite of the ordering of the layers)
\[
0 \leftarrow 1 \leftarrow 2 \leftarrow \dots \leftarrow p \]
The presheaf topos on a poset is a locale, and in this case it is even the topos of sheaves on the topological space with $p+1$ nested open sets.
\[
X = U_0 \supseteq U_1 \supseteq U_2 \supseteq \dots \supseteq U_p = \emptyset \]
If the learner $A_i \rightarrow A_{i+1}$ is (the equivalence class) of the tuple $(A_i,A_{i+1},P_i,I_i,U_i,R_i)$ we will now describe two sheaves $\mathcal{W}$ and $\mathcal{X}$ on the topological space $X$.

$\mathcal{W}$ has as sections $\Gamma(\mathcal{W},U_i) = \prod_{j=i}^{p-1} P_j$ and the obvious projection maps as the restriction maps.

$\mathcal{X}$ has as sections $\Gamma(\mathcal{X},U_i) = A_i \times \Gamma(\mathcal{W},U_i)$ and restriction map to the next smaller open
\[
\rho^i_{i+1}~:~\Gamma(\mathcal{X},U_i) \rightarrow \Gamma(\mathcal{X},U_{i+1}) \qquad (a_i,(p_i,p')) \mapsto (p_i(a_i),p') \]
and the other restriction maps by composition (here, and in what follows, $p_i(a_i)$ is shorthand for $I_i(p_i,a_i)$).
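Concretely, a section of $\mathcal{X}$ over $U_i$ is a point of $A_i$ together with the parameters of all the remaining layers, and restriction just pushes the point one layer forward using the interpretation maps. Here is a small self-contained Python sketch of this bookkeeping (toy sizes and random data, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# interpretation maps I_i : P_i x A_i -> A_{i+1} for a toy chain of three learners
def interp(p, a):
    return np.tanh(p @ a)
interps = [interp, interp, interp]        # layers A_0 -> A_1 -> A_2 -> A_3

def restrict(i, section):
    """The restriction map rho^i_{i+1} on sections of X:
    (a_i, (p_i, p')) |-> (p_i(a_i), p') = (I_i(p_i, a_i), p')."""
    a, params = section
    return (interps[i](params[0], a), params[1:])

def restrict_to(i, j, section):
    """The other restriction maps Gamma(X, U_i) -> Gamma(X, U_j), for j >= i, by composition."""
    for k in range(i, j):
        section = restrict(k, section)
    return section

# a section of X over U_0: a point of A_0 = R^4 together with all remaining parameters
section0 = (rng.normal(size=4), [rng.normal(size=(5, 4)),   # p_0
                                 rng.normal(size=(3, 5)),   # p_1
                                 rng.normal(size=(2, 3))])  # p_2
print(restrict_to(0, 3, section0))        # a section over U_3: a point of A_3, no parameters left
```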

A major result in Topos and Stacks of Deep Neural Networks is that back-propagation is a natural transformation, that is, a sheaf-morphism $\mathcal{X} \rightarrow \mathcal{X}$.

In this general setting of layered learners we can always define a map on the sections of $\mathcal{X}$ (for every open $U_i$), $\Gamma(\mathcal{X},U_i) \rightarrow \Gamma(\mathcal{X},U_i)$
\[
(a_i,(p_i,p')) \mapsto (R(p_i,a_i,p_i(a_i)),(U(p_i,a_i,p_i(a_i)),p')) \]
But, in order for this to define a sheaf-morphism, compatible with the restrictions, we will in general have to impose conditions on the update and request maps of the learners.
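In the notation of the previous sketch, the compatibility one has to check is a naturality square: restricting a section and then applying the map should agree with applying the map and then restricting. Here is a hedged sketch of that condition, where `updates[i]` and `requests[i]` are my own hypothetical names for per-layer update and request maps, and `restrict` is the function from the snippet above:

```python
def backprop_section(i, section, interps, updates, requests):
    """The candidate map on Gamma(X, U_i):
    (a_i, (p_i, p')) |-> (R_i(p_i, a_i, p_i(a_i)), (U_i(p_i, a_i, p_i(a_i)), p'))."""
    a, params = section
    b = interps[i](params[0], a)
    return (requests[i](params[0], a, b), [updates[i](params[0], a, b)] + params[1:])

def naturality_square(i, section, interps, updates, requests, restrict):
    """Compatibility with restriction: rho o beta_i versus beta_{i+1} o rho on one section.
    A sheaf morphism requires both sides to agree for every section and every i."""
    lhs = restrict(i, backprop_section(i, section, interps, updates, requests))
    rhs = backprop_section(i + 1, restrict(i, section), interps, updates, requests)
    return lhs, rhs
```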

Still, in the special case of deep neural networks, this compatibility follows from the functoriality property of Backprop as Functor: A compositional perspective on supervised learning.

To be continued.

Grothendieck talks

In 2017-18, the seminar Lectures grothendieckiennes took place at the ENS in Paris. Among the speakers were Alain Connes, Pierre Cartier, Laurent Lafforgue and Georges Maltsiniotis.

Olivia Caramello, who also contributed to the seminar, posts on her blog Around Toposes that the proceedings of this lecture series are now available from the SMF.

Olivia’s blogpost links also to the YouTube channel of the seminar. Several of these talks are well worth your time watching.

If you are at all interested in toposes and their history, and if you have 90 minutes to kill, I strongly recommend watching Colin McLarty’s talk Grothendieck’s 1973 topos lectures:

In 1973, Grothendieck gave three lecture series at the Department of Mathematics of SUNY at Buffalo, the first on ‘Algebraic Geometry’, the second on ‘The Theory of Algebraic Groups’ and the third one on ‘Topos Theory’.

All of these Grothendieck talks were audio(!)-taped by John (Jack) Duskin, who kept and preserved them with the help of William Lawvere. They constitute more than 100 hours of rare recordings of Grothendieck.

This MathOverflow (soft) question links to this page stating:

“The copyright of all these recordings is that of the Department of Mathematics of SUNY at Buffalo to whose representatives, in particular Professors Emeritus Jack DUSKIN and Bill LAWVERE exceptional thanks are due both for the preservation and transmission of this historic archive, the only substantial archive of recordings of courses given by one of the greatest mathematicians of all time, whose work and ideas exercised arguably the most profound influence of any individual figure in shaping the mathematics of the second half od the 20th Century. The material which it is proposed to make available here, with their agreement, will form a mirror site to the principal site entitled “Grothendieck at Buffalo” (url: ).”

Sadly, the URL is still missing.

Fortunately, another answer links to the Grothendieck project Thèmes pour une Harmonie by Mateo Carmona. If you scroll down to the 1973-section, you’ll find there all of the recordings of these three Grothendieck series of talks!

To whet your appetite, here’s the first part of his talk on topos theory on April 4th, 1973:

For all subsequent recordings of his talks in the Topos Theory series on May 11th, May 18th, May 25th, May 30th, June 4th, June 6th, June 20th, June 27th, July 2nd, July 10th, July 11th and July 12th, please consult Mateo’s website (under section 1973).

The $\mathbb{F}_1$ World Seminar

For some time I knew it was in the making, and now they are ready to launch it:

The $\mathbb{F}_1$ World Seminar, an online seminar dedicated to the “field with one element”, and its many connections to areas in mathematics such as arithmetic, geometry, representation theory and combinatorics. The organisers are Jaiung Jun, Oliver Lorscheid, Yuri Manin, Matt Szczesny, Koen Thas and Matt Young.

From the announcement:

“While the origins of the “$\mathbb{F}_1$-story” go back to attempts to transfer Weil’s proof of the Riemann Hypothesis from the function field case to that of number fields on one hand, and Tits’s Dream of realizing Weyl groups as the $\mathbb{F}_1$ points of algebraic groups on the other, the “$\mathbb{F}_1$” moniker has come to encompass a wide variety of phenomena and analogies spanning algebraic geometry, algebraic topology, arithmetic, combinatorics, representation theory, non-commutative geometry etc. It is therefore impossible to compile an exhaustive list of topics that might be discussed. The following is but a small sample of topics that may be covered:

  • Algebraic geometry in non-additive contexts – monoid schemes, lambda-schemes, blue schemes, semiring and hyperfield schemes, etc.
  • Arithmetic – connections with motives, non-archimedean and analytic geometry
  • Tropical geometry and geometric matroid theory
  • Algebraic topology – K-theory of monoid and other “non-additive” schemes/categories, higher Segal spaces
  • Representation theory – Hall algebras, degenerations of quantum groups, quivers
  • Combinatorics – finite field and incidence geometry, and various generalizations”

The seminar takes place on alternating Wednesdays from 15:00 to 16:00 Central European Time (GMT+1). There will be room for mathematical discussion after each lecture.

The first meeting takes place Wednesday, January 19th 2022. If you want to receive abstracts of the talks and their Zoom-links, you should sign up for the mailing list.

Perhaps I’ll start posting about $\mathbb{F}_1$ again, either here or on the dormant $\mathbb{F}_1$ mathematics blog (see this post for its history).

Huawei and topos theory

Apart from the initiatives I mentioned last time, Huawei set up a long-term collaboration with the IHES, the Huawei Young Talents Program.

“Every year, the Huawei Young Talents Program will fund on average 7 postdoctoral fellowships that will be awarded by the Institute’s Scientific Council, only on the basis of scientific excellence. The fellows will collaborate with the Institute’s permanent professors and work on topics of their interest.”

Over the next ten years, Huawei will invest 5 million euros in this program, and an additional 1 million euros goes into the creation of the ‘Huawei Chair in Algebraic Geometry’. It comes as no particular surprise that the first chairholder is Laurent Lafforgue.

At the launch of this Young Talents Program in November 2020, Lafforgue gave a talk on The creative power of categories: History and some new perspectives.

The latter part of the talk (starting at 47:50) clarifies somewhat Huawei’s interest in topos theory, and what Lafforgue (and others) hope to get out of their collaboration with the telecom company.

Clearly, Huawei is interested in deep neural networks, and if you can convince them your expertise is useful in that area, perhaps they’ll throw some money at you.

Jean-Claude Belfiore, another mathematician turned Huaweian, is convinced topos theory is the correct tool to study DNNs. Here’s his Huawei-clip from which it is clear he was originally hired to improve Huawei’s polar code.

At the 2018 IHES-Topos conference he gave the talk Toposes for Wireless Networks: An idea whose time has come, and recently he arXived the paper Topos and Stacks of Deep Neural Networks, written jointly with Daniel Bennequin. Probably, I’ll come back to this paper another time, for now, the nForum has this page on it.

Towards the end of his talk, Lafforgue suggests the idea of creating an institute devoted to toposes and their applications, endorsed by IHES and supported by Huawei. Surely he knows that the Topos Institute already exists.

And, if you wonder why Huawei throws money at IHES rather than your university, I leave you with Lafforgue’s parting words:

“IHES professors are able to think and evaluate for themselves, whereas most mathematicians just follow ‘group thinking’.”

Ouch!

Huawei and French mathematics

Huawei, the Chinese telecom giant, appears to support (and divide) the French mathematical community.

I was surprised to see that Laurent Lafforgue’s affiliation recently changed from ‘IHES’ to ‘Huawei’, for example here as one of the organisers of the Lake Como conference on ‘Unifying themes in geometry’.

Judging from this short Huawei-clip (in French) he thoroughly enjoys his new position.

Huawei claims that ‘Three more winners of the highest mathematics award have now joined Huawei’:

  • Maxim Kontsevich (IHES), Fields medal 1998
  • Pierre-Louis Lions (Collège de France), Fields medal 1994
  • Alessio Figalli (ETH), Fields medal 2018

These news-stories seem to have been G-translated from the Chinese, resulting in misspellings and perhaps other inaccuracies. Maxim’s research field is described as ‘kink theory’ (LoL).

Apart from luring away Fields medallists, Huawei last year set up the brand new Huawei Lagrange Research Center in the posh 7th arrondissement of Paris. (This ‘Lagrange Center’ is different from the Lagrange Institute in Paris devoted to astronomy and physics.)



It aims to host about 30 researchers in mathematics and computer science, giving them the opportunity to live in the ‘unique eco-system of Paris, having the largest group of mathematicians in the world, as well as the best universities’.

Last May, Michel Broué authored an open letter to the French mathematical community Dans un hotel particulier du 7eme arrondissement de Paris (in French). A G-translation of the final part of this open letter:

“In the context of a very insufficient research and development effort in France, and bleak prospects for our young researchers, it is tempting to welcome the creation of the Lagrange center. We welcome the rise of Chinese mathematics to the highest level, and we are obviously in favour of scientific cooperation with our Chinese colleagues.

But in view of the role played by Huawei in the repression in Xinjiang and potentially everywhere in China, we call on mathematicians and computer scientists already engaged to withdraw from this project. We ask all researchers not to participate in the activities of this center, as we ourselves are committed to doing.”

Among the mathematicians signing the letter are Pierre Cartier and Pierre Schapira.

To be continued.