
Tag: geometry

Grothendieck’s gribouillis (6)

After the death of Grothendieck in November 2014, about 30.000 pages of his writings were found in Lasserre.



Since then I’ve been trying to follow what happened to them:

So, what’s new?

Well, finally we have closure!

Last Friday, Grothendieck’s children donated the 30.000 Lasserre pages to the Bibliothèque nationale de France.

Via Des manuscrits inédits du génie des maths Grothendieck entrent à la BnF (and Google-translate):

“The singularity of these manuscripts is that they “cover many areas at the same time” to form “a whole, a ‘cathedral work’, with undeniable literary qualities”, analyzes Jocelyn Monchamp, curator in the manuscripts department of the BnF.

More than in “Récoltes et semailles”, very autobiographical, the author is “in a metaphysical retreat”, explains the curator, who has been going through the texts with passion for a month. A long-term task as the writing, in fountain pen, is dense and difficult to decipher. “I got used to it… And the advantage for us was that the author had methodically paginated and dated the texts.” One of the parts, entitled “Structures of the psyche”, is a book of enigmatic diagrams translating psychology into algebraic language. In another, “The Problem of Evil”, he unfolds, over 15,000 pages, metaphysical meditations and thoughts on Satan. We sense a man “caught up by the ghosts of his past”, with an adolescence marked by the Shoah, underlines Johanna Grothendieck, whose grandfather, a Russian Jew who fled Germany during the war, died at Auschwitz.

The deciphering work will take a long time to understand everything this genius wanted to say.

On Friday, the collection joined the manuscripts department of the Richelieu site, the historic cradle of the BnF, alongside the writings of Pierre and Marie Curie and Louis Pasteur. It will only be viewable by researchers. “This is a unique testimony in the history of science in the 20th century, of major importance for research,” believes Jocelyn Monchamp.

During the ceremony, one of the volumes was placed in a glass case next to a manuscript by the ancient Greek mathematician Euclid.”

Probably, the recent publication of Récoltes et Semailles clinched the deal.

Also, it is unclear at this moment whether the Istituto Grothendieck, which harbours the Centre for Grothendieck Studies coordinated by Mateo Carmona (see this post), played a role in the decision making, or what role the Centre will play in the further studies of Grothendieck’s gribouillis.

For other coverage on this, see Hermit ‘scribblings’ of eccentric French math genius unveiled.


A newish toy in town

In a recent post I recalled Claude Lévi-Strauss’ observation “In Paris, intellectuals need a new toy every 15 years”, and gave a couple of links showing that the most recent IHES-toy has been spreading to other Parisian intellectual circles in recent years.

At the time (late sixties), Lévi-Strauss was criticising the ongoing Foucault-hype. It appears that, since then, these hype cycles have become substantially shorter.

Ten days ago, the IHES announced that Dustin Clausen (of condensed math fame) is now joining the IHES as a permanent professor.

To me, this seems like a sensible decision, moving away from (too?) general topos theory towards explicit examples having potential applications to arithmetic geometry.

On the relation between condensed sets and toposes, here’s Dustin Clausen talking about “Toposes generated by compact projectives, and the example of condensed sets”, at the “Toposes online” conference, organised by Alain Connes, Olivia Caramello and Laurent Lafforgue in 2021.

Two days ago, Clausen gave another interesting (inaugural?) talk at the IHES on “A Conjectural Reciprocity Law for Realizations of Motives”.


The forests of the unconscious

We start from a large data-set $V=\{ k,l,m,n,\dots \}$ (texts, events, DNA-samples, …) with a suitable distance-function ($d(m,n) \geq 0$ and $d(k,l)+d(l,m) \geq d(k,m)$) which measures the (dis)similarity between individual samples.

We’re after a set of unknown events $\{ p,q,r,s,\dots \}$ to explain the distances between the observed data. An example: let’s assume we’ve sequenced the DNA of a set of species, and computed a Hamming-like distance to measure the differences between these sequences.



(From Geometry of the space of phylogenetic trees by Billera, Holmes and Vogtmann)

Biology explains these differences from the fact that certain species may have had more recent common ancestors than others. Ideally, the measured distances between DNA-samples are a tree metric. That is, if we can determine the full ancestor-tree of these species, there should be numbers between ancestor-nodes (measuring their difference in DNA) such that the distance between two existing species is the sum of distances over the edges of the unique path in this phylogenetic tree connecting the two species.

Last time we saw that a necessary and sufficient condition for a tree-metric is that for every quadruple $k,l,m,n \in V$ we have that the maximum of the sum-distances

$$\{ d(k,l)+d(m,n),~d(k,m)+d(l,n),~d(k,n)+d(l,m) \}$$

is attained at least twice.
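
To make this condition concrete, here is a minimal Python sketch (entirely my own toy code; the distance matrix is an invented tree metric) that checks the four-point condition for a finite symmetric distance function:

from itertools import combinations

def is_tree_metric(d):
    # four-point condition: for every quadruple k,l,m,n the maximum of the three
    # sums d(k,l)+d(m,n), d(k,m)+d(l,n), d(k,n)+d(l,m) is attained at least twice
    for k, l, m, n in combinations(list(d), 4):
        sums = sorted([d[k][l] + d[m][n], d[k][m] + d[l][n], d[k][n] + d[l][m]])
        if sums[2] != sums[1]:
            return False
    return True

# toy tree metric: a,b hang off one internal node, c,d off another, 3 apart
d = {'a': {'a': 0, 'b': 2, 'c': 5, 'd': 5},
     'b': {'a': 2, 'b': 0, 'c': 5, 'd': 5},
     'c': {'a': 5, 'b': 5, 'c': 0, 'd': 2},
     'd': {'a': 5, 'b': 5, 'c': 2, 'd': 0}}
print(is_tree_metric(d))   # True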

In practice, it rarely happens that the measured distances between DNA-samples are a perfect fit to this condition, but still we would like to compute the most probable phylogenetic tree. In the above example, there will be two such likely trees:



(From Geometry of the space of phylogenetic trees by Billera, Holmes and Vogtmann)

How can we find them? And, if the distances in our data-set do not have such a direct biological explanation, is it still possible to find such trees of events (or perhaps, a forest of event-trees) explaining our distance function?

Well, tracking back these ancestor nodes looks a lot like trying to construct colimits.

By now, every child knows that if their toy category $T$ does not allow them to construct all colimits, they can always beg for an upgrade to the presheaf topos $\widehat{T}$ of all contravariant functors from $T$ to $Sets$.

But then, the child can cobble together too many crazy constructions, and the parents have to call in the Grothendieck police who will impose one of their topologies to keep things under control.

Can we fall back on this standard topos philosophy in order to find these forests of the unconscious?



(Image credit)

We have a data-set $V$ with a distance function $d$, and it is fashionable to call this setting a $[0,\infty]$-‘enriched’ category. This is a misnomer: there’s not much ‘category’ in a $[0,\infty]$-enriched category. The only way to define an underlying category from it is to turn $V$ into a poset via $n \geq m$ iff $d(n,m)=0$.

Still, we can define the set $\widehat{V}$ of $[0,\infty]$-enriched presheaves, consisting of all maps
$$p : V \rightarrow [0,\infty] \quad \text{satisfying} \quad \forall m,n \in V : d(m,n)+p(n) \geq p(m)$$
which is again a $[0,\infty]$-enriched category with distance function
$$\hat{d}(p,q) = \underset{m \in V}{max} (q(m) \overset{.}{-} p(m)) \quad \text{with} \quad a \overset{.}{-} b = max(a-b,0)$$
so $\widehat{V}$ is a poset via $p \geq q$ iff $\forall m \in V : p(m) \geq q(m)$.

The good news is that $\widehat{V}$ contains all limits and colimits (because $[0,\infty]$ has sup’s and inf’s) and that $V$ embeds isometrically in $\widehat{V}$ via the Yoneda-map
$$m \mapsto y_m \quad \text{with} \quad y_m(n)=d(n,m)$$
The mental picture of a $[0,\infty]$-enriched presheaf $p$ is that of an additional ‘point’ with $p(m)$ the distance from $y_m$ to $p$.
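
To make these definitions concrete, here is a small Python sketch (my own toy code, with an invented three-point metric) of the Yoneda presheaves and the distance $\hat{d}$, illustrating that the Yoneda map is an isometric embedding:

def yoneda(d, m):
    # the Yoneda presheaf y_m, with y_m(n) = d(n, m)
    return {n: d[n][m] for n in d}

def dhat(p, q):
    # \hat{d}(p, q) = max over m of (q(m) - p(m)), truncated at 0
    return max(max(q[m] - p[m], 0) for m in p)

# a toy symmetric metric on three samples
d = {'k': {'k': 0, 'l': 1, 'm': 3},
     'l': {'k': 1, 'l': 0, 'm': 2},
     'm': {'k': 3, 'l': 2, 'm': 0}}
y = {n: yoneda(d, n) for n in d}
print(dhat(y['k'], y['m']), d['k']['m'])   # isometry: both equal 3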

But there’s hardly a subobject classifier to speak of, and so no Grothendieck topologies nor internal logic. So, how can we select from the abundance of enriched presheaves, the nodes of our event-forest?

We can look for special properties of the ancestor-nodes in a phylogenetic tree.



For any ancestor node $p$ and any $m \in V$ there is a unique branch from $p$ having $m$ as a leaf (picture above,left). Take another branch in $p$ and a leaf vertex $n$ of it, then the combination of these two paths gives the unique path from $m$ to $n$ in the phylogenetic tree, and therefore
$$\hat{d}(y_m,y_n) = d(m,n) = p(m)+p(n) = \hat{d}(p,y_m) + \hat{d}(p,y_n)$$
In other words, for every $m \in V$ there is another $n \in V$ such that $p$ lies on the geodesic from $m$ to $n$ (identifying elements of $V$ with their Yoneda images in $\widehat{V}$).

Compare this to Stephen Wolfram’s belief that if we looked properly at “what ChatGPT is doing inside, we’d immediately see that ChatGPT is doing something “mathematical-physics-simple” like following geodesics”.

Even if the distance on $V$ is symmetric, the extended distance function on $\widehat{V}$ is usually far from symmetric. But here, as we’re dealing with a tree-distance, we have for all ancestor-nodes $p$ and $q$ that $\hat{d}(p,q)=\hat{d}(q,p)$ as this is just the sum of the weights of the edges on the unique path from $p$ to $q$ (picture above, on the right).

Right, now let’s look at a non-tree distance function on $V$, and let’s look at those elements in $\widehat{V}$ having similar properties as the ancestor-nodes:

$$T_V = \{ p \in \widehat{V}~:~\forall n \in V~:~p(n) = \underset{m \in V}{max} (d(m,n) \overset{.}{-} p(m)) \}$$

Then again, for every $p \in T_V$ and every $n \in V$ there is an $m \in V$ such that $p$ lies on a geodesic from $n$ to $m$.
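
As a sanity check, here is a short Python test (my own sketch, with a toy tree metric and a hand-picked ancestor node) of membership in $T_V$:

def in_TV(d, p, tol=1e-9):
    # p lies in T_V iff for every n: p(n) = max over m of (d(m,n) - p(m)), truncated at 0
    return all(abs(p[n] - max(max(d[m][n] - p[m], 0) for m in d)) < tol for n in d)

# a toy tree metric: a,b hang off one ancestor, c,d off another, ancestors 3 apart
d = {'a': {'a': 0, 'b': 2, 'c': 5, 'd': 5},
     'b': {'a': 2, 'b': 0, 'c': 5, 'd': 5},
     'c': {'a': 5, 'b': 5, 'c': 0, 'd': 2},
     'd': {'a': 5, 'b': 5, 'c': 2, 'd': 0}}
print(in_TV(d, {'a': 1, 'b': 1, 'c': 4, 'd': 4}))    # True: the ancestor of a and b
print(in_TV(d, {'a': 0.5, 'b': 1, 'c': 4, 'd': 4}))  # False: not 'tight'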

The simplest non-tree example is $V = \{ a,b,c,d \}$ with say

$$d(a,c)+d(b,d) > max(d(a,b)+d(c,d),d(a,d)+d(b,c))$$

In this case, $T_V$ was calculated by Andreas Dress in Trees, Tight Extensions of Metric Spaces, and the Cohomological Dimension of Certain Groups: A Note on Combinatorial Properties of Metric Spaces. Note that Dress writes $mn$ for $d(m,n)$.



If this were a tree-metric, $T_V$ would be the tree, but now we have a $2$-dimensional cell $T_0$ consisting of those presheaves lying on a geodesic between $a$ and $c$, and on the one between $b$ and $d$. Let’s denote this by $T_0 = \{ a—c,b—d \}$.

$T_V$ has eight $1$-dimensional cells, and with the same notation we have



Let’s say that $V= \{ a,b,c,d \}$ consists of four DNA-samples of species which fail to satisfy the tree-metric condition because of an error in the measurements; how can we then determine likely phylogenetic trees for them? Well, given the shape of the cell-complex $T_V$ there are four spanning trees (with root in $f_a,f_b,f_c$ or $f_d$) having the elements of $V$ as their only leaf-nodes. Which of these is most likely the ancestor-tree will depend on the precise distances.

For an arbitrary data-set $V$, the structure of $T_V$ has been studied extensively, under a variety of names such as ‘Isbell’s injective hull’, ‘tight span’ or ‘tropical convex hull’, in slightly different settings. So, in order to use existing results one sometimes has to intersect with some (un)bounded polyhedron.

It is known that $T_V$ is always a cell-complex with dimension of the largest cell bounded by half the number of elements of $V$. In this generality it will no longer be the case that there is a rooted spanning tree of the complex having the elements of $V$ as its only leaves, but we can opt for the best forest of rooted trees in the $1$-skeleton having all of $V$ as their leaf-nodes. These are the ‘forests of the unconscious’ explaining the distance function on the data-set $V$.

Apart from the Dress-paper mentioned above, I’ve found these papers informative:

So far, we started from a data-set $V$ with a symmetric distance function, but for applications in LLMs one might want to drop that condition. In that case, Willerton proved that there is a suitable replacement of $T_V$, which is now called the ‘directed tight span’ and which coincides with the Isbell completion.

Recently, Simon Willerton gave a talk at the African Mathematical Seminar called ‘Looking at metric spaces as enriched categories’:

Willerton also posts a series(?) on this at the n-category cafe, starting with Metric spaces as enriched categories I.

(tbc?)

Previously in this series:


The tropical brain-forest

If machine learning, AI, and large language models are here to stay, there’s this inevitable conclusion:


At the start of this series, the hope was to find the topos of the unconscious. Pretty soon, attention turned to the shape of languages and LLMs.

In large language models all syntactic and semantic information is encoded in huge arrays of numbers and weights. It seems unlikely that $\mathbf{Set}$-valued presheaves will be useful in machine learning, but surely Huawei will prove me wrong.

$[0,\infty]$-enriched categories (aka generalised metric spaces) and associated $[0,\infty]$-enriched presheaves may be better suited to understand existing models.

But, as with ordinary presheaves, there are just too many $[0,\infty]$-enriched ones. So, how can we weed out the irrelevant ones?

For inspiration, let’s turn to evolutionary biology and their theory of phylogenetic trees. They want to trace back common (extinct) ancestors of existing species by studying overlaps in the DNA.



(A tree of life, based on completely sequenced genomes, from Wikipedia)

The connection between phylogenetic trees and tropical geometry is nicely explained in the paper Tropical mathematics by David Speyer and Bernd Sturmfels.

The tropical semi-ring is the set $(-\infty,\infty]$, equipped with a new addition $\oplus$ and multiplication $\odot$

$$a \oplus b = min(a,b), \quad \text{and} \quad a \odot b = a+b$$

Because tropical multiplication is ordinary addition, a tropical monomial in $n$ variables

$$\underbrace{x_1 \odot \dots \odot x_1}_{j_1} \odot \underbrace{x_2 \odot \dots \odot x_2}_{j_2} \odot \dots$$

corresponds to the linear polynomial $j_1 x_1 + j_2 x_2 + \dots \in \mathbb{Z}[x_1,\dots,x_n]$. But then, a tropical polynomial in $n$ variables

$$p(x_1,\dots,x_n)=a \odot x_1^{i_1}\dots x_n^{i_n} \oplus b \odot x_1^{j_1} \dots x_n^{j_n} \oplus \dots$$

gives the piece-wise linear function $p : \mathbb{R}^n \rightarrow \mathbb{R}$

$$p(x_1,\dots,x_n)=min(a+i_1 x_1 + \dots + i_n x_n,b+j_1 x_1 + \dots + j_n x_n, \dots)$$

The tropical hypersurface $\mathcal{H}(p)$ then consists of all points of $v \in \mathbb{R}^n$ where $p$ is not linear, that is, the value of $p(v)$ is attained in at least two linear terms in the description of $p$.
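
Here is a small Python sketch (my own, with an invented example) that evaluates a tropical polynomial, given as a list of (constant, exponent-vector) pairs, and tests whether a point lies on its tropical hypersurface:

def tropical_eval(terms, v, tol=1e-9):
    # value = minimum of the affine-linear terms c + i_1*v_1 + ... + i_n*v_n;
    # v lies on the tropical hypersurface iff this minimum is attained at least twice
    values = sorted(c + sum(i * x for i, x in zip(exps, v)) for c, exps in terms)
    return values[0], values[1] - values[0] < tol

# the 'tropical line' p(x,y) = 0 ⊙ x ⊕ 0 ⊙ y ⊕ 0, i.e. min(x, y, 0)
p = [(0, (1, 0)), (0, (0, 1)), (0, (0, 0))]
print(tropical_eval(p, (2, 3)))   # (0, False): only the constant term is minimal
print(tropical_eval(p, (0, 5)))   # (0, True): minimum attained by the x-term and the constant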

Now, for the relation to phylogenetic trees: let’s sequence the genomes of human, mouse, rat and chicken and compute the values of a suitable (necessarily symmetric) distance function between them:




From these distances we want to trace back common ancestors and their difference in DNA-profile in a consistent manner, that is, such that the distance between two nodes in the tree is the sum of the distances of the edges connecting them.

In this example, such a tree is easily found (only the weights of the two edges leaving the root can be different, with sum $0.8$):



In general, let’s sequence the genomes of $n$ species and determine their distance matrix $D=(d_{ij})_{i,j}$. Biology asserts that this distance must be a tree-distance, and those can be characterised by the condition that for all $1 \leq i,j,k,l \leq n$, among the three numbers

$$d_{ij}+d_{kl},~d_{ik}+d_{jl},~d_{il}+d_{jk}$$

the maximum is attained at least twice.

What has this to do with tropical geometry? Well, $D$ is a tree distance if and only if $-D$ is a point in the tropical Grassmannian $Gr(2,n)$.

Here’s why: let $e_{ij}=-d_{ij}$ then the above condition is that the minimum of

$$e_{ij}+e_{kl},~e_{ik}+e_{jl},~e_{il}+e_{jk}$$

is attained at least twice, or that $(e_{ij})_{i,j}$ is a point of the tropical hypersurface

$$\mathcal{H}(x_{ij} \odot x_{kl} \oplus x_{ik} \odot x_{jl} \oplus x_{il} \odot x_{jk})$$

and we recognise this as one of the defining quadratic Plucker relations of the Grassmannian $Gr(2,n)$.

More on this can be found in another paper by Speyer and Sturmfels The tropical Grassmannian, and the paper Geometry of the space of phylogenetic trees by Louis Billera, Susan Holmes and Karen Vogtmann.

What’s the connection with $[0,\infty]$-enriched presheaves?

The set of all species $V=\{ m,n,\dots \}$, together with the distance function $d(m,n)$ between their DNA-sequences is a $[0,\infty]$-category. Recall that a $[0,\infty]$-enriched presheaf on $V$ is a function $p : V \rightarrow [0,\infty]$ satisfying for all $m,n \in V$

$$d(m,n)+p(n) \geq p(m)$$

For an ancestor node $p$ we can take for every $m \in V$ as $p(m)$ the tree distance from $p$ to $m$, so every ancestor is a $[0,\infty]$-enriched presheaf.

We also defined the distance between such $[0,\infty]$-enriched presheaves $p$ and $q$ to be

$$\hat{d}(p,q) = sup_{m \in V}~max(q(m)-p(m),0)$$

and this distance coincides with the tree distance between the nodes.

So, all ancestor nodes in a phylogenetic tree are very special $[0,\infty]$-enriched presheaves, optimal for the connection with the underlying $[0,\infty]$-enriched category (the species and their differences in genome).

We would like to weed out such exceptional $[0,\infty]$-enriched presheaves in general, but clearly the underlying distance of a generalised metric space, even when it is symmetric, is not a tree metric.

Still, there might be regions in the space where we can do the above. So, in general we might expect not one tree, but a forest of trees formed by the $[0,\infty]$-enriched presheaves, optimal for the metric we’re exploring.

If we think of the underlying $[0,\infty]$-category as the conscious manifestations, then this forest of presheaves represents the underlying brain-states (or, if you want, the unconscious) leading up to these.

That’s why I like to call this mental picture the tropical brain-forest.



(Image credit)

Where’s the tropical coming from?

Well, I think that in order to pinpoint these ‘optimal’ $[0,\infty]$-enriched presheaves a tropical-like structure on these, already mentioned by Simon Willerton in Tight spans, Isbell completions and semi-tropical modules, will be relevant.

For any two $[0,\infty]$-enriched presheaves we can take $p \oplus q = p \wedge q$, and for every $s \in [0,\infty]$ we can define

$$s \odot p : V \rightarrow [0,\infty] \qquad m \mapsto max(p(m)-s,0)$$

and check that this is again a $[0,\infty]$-presheaf. The mental picture of $s \odot p$ is that of a fat point centered at $p$ with size $s$.
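
A minimal Python sketch of this semi-tropical structure (my own toy code and metric; nothing here is taken from Willerton’s paper), checking that $p \oplus q$ and $s \odot p$ are again enriched presheaves:

def is_presheaf(d, p):
    # enriched presheaf condition: d(m,n) + p(n) >= p(m) for all m, n
    return all(d[m][n] + p[n] >= p[m] for m in d for n in d)

def oplus(p, q):
    # semi-tropical addition: the pointwise minimum p ∧ q
    return {m: min(p[m], q[m]) for m in p}

def odot(s, p):
    # scalar action s ⊙ p: a 'fat point' of size s centered at p
    return {m: max(p[m] - s, 0) for m in p}

# toy symmetric metric on three points, and two Yoneda presheaves
d = {'a': {'a': 0, 'b': 1, 'c': 3},
     'b': {'a': 1, 'b': 0, 'c': 2},
     'c': {'a': 3, 'b': 2, 'c': 0}}
p = {m: d[m]['a'] for m in d}   # y_a
q = {m: d[m]['c'] for m in d}   # y_c
print(is_presheaf(d, oplus(p, q)), is_presheaf(d, odot(1, p)))   # True True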

(tbc)

Previously in this series:


Stephen Wolfram on ChatGPT

A month ago, Stephen Wolfram put out a little booklet (140 pages) What Is ChatGPT Doing … and Why Does It Work?.



It gives a gentle introduction to large language models and the architecture and training of neural networks.

The entire book is freely available:

The advantage of these online texts is that you can click on any of the images, copy their content into a Mathematica notebook, and play with the code.

This really gives a good idea of how an extremely simplified version of ChatGPT (based on GPT-2) works.

Downloading the model (within Mathematica) uses about 500Mb, but afterwards you can complete any prompt quickly, and see how the results change if you turn up the ‘temperature’.

You shouldn’t expect too much from this model. Here’s what it came up with from the prompt “The major results obtained by non-commutative geometry include …” after 20 steps, at temperature 0.8:


(* assuming the GPT-2 model from the Wolfram Neural Net Repository has been loaded first, something like: *)
(* model = NetModel[{"GPT-2 Transformer Trained on WebText Data", "Task" -> "LanguageModeling"}] *)
NestList[StringJoin[#, model[#, {"RandomSample", "Temperature" -> 0.8}]] &,
"The major results obtained by non-commutative geometry include ", 20]

The major results obtained by non-commutative geometry include vernacular accuracy of math and arithmetic, a stable balance between simplicity and complexity and a relatively low level of violence.

Lol.

In the more philosophical sections of the book, Wolfram speculates about the secret rules of language that ChatGPT must have found if we want to explain its apparent success. One of these rules, he argues, must be the ‘logic’ of languages:

But is there a general way to tell if a sentence is meaningful? There’s no traditional overall theory for that. But it’s something that one can think of ChatGPT as having implicitly “developed a theory for” after being trained with billions of (presumably meaningful) sentences from the web, etc.

What might this theory be like? Well, there’s one tiny corner that’s basically been known for two millennia, and that’s logic. And certainly in the syllogistic form in which Aristotle discovered it, logic is basically a way of saying that sentences that follow certain patterns are reasonable, while others are not.

Something else ChatGPT may have discovered are language’s ‘semantic laws of motion’, being able to complete sentences by following ‘geodesics’:

And, yes, this seems like a mess—and doesn’t do anything to particularly encourage the idea that one can expect to identify “mathematical-physics-like” “semantic laws of motion” by empirically studying “what ChatGPT is doing inside”. But perhaps we’re just looking at the “wrong variables” (or wrong coordinate system) and if only we looked at the right one, we’d immediately see that ChatGPT is doing something “mathematical-physics-simple” like following geodesics. But as of now, we’re not ready to “empirically decode” from its “internal behavior” what ChatGPT has “discovered” about how human language is “put together”.

So, the ‘hidden secret’ of successful large language models may very well be a combination of logic and geometry. Does this sound familiar?

If you prefer watching YouTube over reading a book, or if you want to see the examples in action, here’s a video by Stephen Wolfram. The stream starts about 10 minutes into the clip, and the whole lecture is pretty long, well over 3 hours (about as long as it takes to read What Is ChatGPT Doing … and Why Does It Work?).


an einStein

On March 20th, David Smith, Joseph Myers, Craig Kaplan and Chaim Goodman-Strauss announced on the arXiv that they’d found an ein-Stein (‘one stone’), that is, one piece to tile the entire plane, in uncountably many different ways, all of them non-periodic (that is, the pattern does not even allow a translation symmetry).

This einStein, called the ‘hat’ (some prefer ‘t-shirt’), has a very simple form : you take the most symmetric of all plane tessellations, $\ast 632$ in Conway’s notation, and glue sixteen copies of its orbifold (or if you so prefer, eight ‘kites’) to form the gray region below:



(all images copied from the aperiodic monotile paper)

Surprisingly, you do not even need to impose gluing conditions (unlike in the two-piece aperiodic kite and dart Penrose tilings), but you’ll need flipped hats to fill up the gaps left.

A few years ago, I wrote some posts on Penrose tilings, including details on inflation and deflation, aperiodicity, uncountability, Conway worms, and more:

To prove that hats tile the plane, and do so aperiodically, the authors do not apply inflation and deflation directly on the hats, but rather on associated tilings by ‘meta-tiles’ (rough outlines of blocks of hats). To understand these meta-tiles it is best to look at a large patch of hats:



Here, the dark-blue hats are the ‘flipped’ ones, and the thickened outline around the central one gives the boundary of the ’empire’ of a flipped hat, that is, the collection of all forced tiles around it. So, around each flipped hat we find such an empire, possibly with different orientation. Also note that most of the white hats (there are also isolated white hats at the centers of triangles of dark-blue hats) make up ‘lines’ similar to the Conway worms in the case of the Penrose tilings. We can break up these ‘worms’ into ‘propeller-blades’ (gray) and ‘parallelograms’ (white). This gives us four types of blocks, the ‘meta-tiles’:



The empire of a flipped hat consists of an H-block (for Hexagon) made of one dark-blue (flipped) and three light-blue (ordinary) hats, one P-block (for Parallelogram), one F-block (for Fylfot, a propeller blade), and one T-block (for Triangle) for the remaining hat.



The H,T and P blocks have rotational symmetries, whereas the underlying block of hats does not. So we mark the intended orientation of the hats by an arrow, pointing to the side having two or three hat-pieces sticking out.

Any hat-tiling gives us a tiling with the meta-tile pieces H,T,P and F. Conversely, not every tiling by meta-tiles has an underlying hat-tiling, so we have to impose gluing conditions on the H,T,P and F-pieces. We can do this by using the boundary of the underlying hat-block, cutting away and adding hat-parts. Then, any H,T,P and F-tiling satisfying these gluing conditions will come from an underlying hat-tiling.

The idea is now to devise ‘inflation’- and ‘deflation’-rules for the H,T,P and F-pieces. For ‘inflation’ start from a tiling satisfying the gluing (and orientation) conditions, and look for the central points of the propellers (the thick red points in the middle picture).



These points will determine the shape of the larger H,T,P and F-pieces, together with their orientations. The authors provide an applet to see these inflations in action.

Choose your meta-tile (H,T,P or F), then click on ‘Build Supertiles’ a number of times to get larger and larger tilings, and finally unmark the ‘Draw Supertiles’ button to get a hat-tiling.

For ‘deflation’ we can cut up H,T,P and F-pieces into smaller ones as in the pictures below:



Clearly, the hard part is to verify that these ‘inflated’ and ‘deflated’ tilings still satisfy the gluing conditions, so that they will have an underlying hat-tiling with larger (resp. smaller) hats.

This calls for a lengthy case-by-case analysis which is the core-part of the paper and depends on computer-verification.

Once this is verified, aperiodicity follows as in the case of Penrose tilings. Suppose a tiling is preserved under translation by a vector $\vec{v}$. As ‘inflation’ and ‘deflation’ only depend on the direct vicinity of a tile, translation by $\vec{v}$ is also a symmetry of the inflated tiling. Now, iterate this process until the diameter of the large tiles becomes larger than the length of $\vec{v}$ to obtain a contradiction.

Siobhan Roberts wrote a fine article Elusive ‘Einstein’ Solves a Longstanding Math Problem for the NY-times on this einStein.

It would be nice to try this strategy on other symmetric tilings: break the symmetry by gluing together a small number of its orbifolds in such a way that this extended tile (possibly with its reversed image) tiles the plane, and find out whether you discovered a new einStein!


The super-vault of missing notes

Last time we constructed a wide variety of Jaccard-like distance functions $d(m,n)$ on the set of all notes in our vault $V = \{ k,l,m,n,\dots \}$. That is, $d(m,n) \geq 0$ and for each triple of notes we have a triangle inequality

$$d(k,l)+d(l,m) \geq d(k,m)$$

By construction we had $d(m,n)=d(n,m)$, but we can modify any of these distances by setting $d'(m,n)= \infty$ if there is no path of internal links from note $m$ to note $n$, and $d'(m,n)=d(m,n)$ otherwise. This new generalised distance is no longer symmetric, but still satisfies the triangle inequality, and turns $V$ into a Lawvere space.

$V$ becomes an enriched category over the monoidal category $[0,\infty]=\mathbb{R}_+ \cup \{ \infty \}$ (the poset-category for the reverse ordering ($a \rightarrow b$ iff $a \geq b$) with $+$ as ‘tensor product’ and $0$ as unit). The ‘enrichment’ is the map

$$V \times V \rightarrow [0,\infty] \qquad (m,n) \mapsto d(m,n)$$
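
Here is a small Python sketch of this modification (the dictionary encoding of the vault, its distances and its links is my own choice, just for illustration):

import math
from collections import deque

def directed_distance(d, links):
    # d'(m,n) = d(m,n) if there is a directed path of internal links from m to n,
    # and infinity otherwise; the triangle inequality survives this modification
    def reachable(start):
        seen, todo = {start}, deque([start])
        while todo:
            x = todo.popleft()
            for y in links.get(x, []):
                if y not in seen:
                    seen.add(y)
                    todo.append(y)
        return seen
    return {m: {n: d[m][n] if n in reachable(m) else math.inf for n in d} for m in d}

# two notes at distance 0.5, with a link from m to n but none back
d = {'m': {'m': 0, 'n': 0.5}, 'n': {'m': 0.5, 'n': 0}}
print(directed_distance(d, {'m': ['n']}))
# {'m': {'m': 0, 'n': 0.5}, 'n': {'m': inf, 'n': 0}}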

Writers (just like children) have always loved colimits. They want to morph their notes into a compelling story. Sadly, such colimits do not always exist yet in our vault category. They are among too many notes still missing from it.



(Image credit)

For ordinary categories, the way forward is to ‘upgrade’ your category to the presheaf category. In it, ‘the child can cobble together crazy constructions to his heart’s content’. For our ‘enriched’ vault $V_d$ we should look at the (enriched) category of enriched presheaves $\widehat{V_d}$. In it, the writer will find inspiration on how to cobble together her texts.

An enriched presheaf is a map $p : V \rightarrow [0,\infty]$ such that for all notes $m,n \in V$ we have

$$d(m,n) + p(n) \geq p(m)$$

Think of $p(n)$ as the distance (or similarity) of the virtual note $p$ to the existing note $n$, then this condition is just an extension of the triangle inequality. The lower the value of $p(n)$ the closer $p$ resembles $n$.

Each note $n \in V$ determines its Yoneda presheaf $y_n : V \rightarrow [0,\infty]$ by $m \mapsto d(m,n)$. By the triangle inequality this is indeed an enriched presheaf in $\widehat{V_d}$.

The set of all enriched presheaves $\widehat{V_d}$ has a lot of extra structure. It is a poset

$$p \leq q \qquad \text{iff} \qquad \forall n \in V : p(n) \leq q(n)$$

with minimal element $0 : \forall n \in V, 0(n)=0$, and maximal element $1 : \forall n \in V, 1(n)=\infty$.

It is even a lattice with $p \vee q(n) = max(p(n),q(n))$ and $p \wedge q(n)=min(p(n),q(n))$. It is easy to check that $p \wedge q$ and $p \vee q$ are again enriched presheaves.

Here’s $\widehat{V_d}$ when the vault consists of just two notes $V=\{ m,n \}$ of non-zero distance to each other (whether symmetric or not) as a subset of $[0,\infty] \times [0,\infty]$.



This vault $\widehat{V_d}$ of all missing (and existing) notes is again enriched over $[0,\infty]$ via

$$\widehat{d} : \widehat{V_d} \times \widehat{V_d} \rightarrow [0,\infty] \qquad \widehat{d}(p,q) = max(0,\underset{n \in V}{sup} (q(n)-p(n)))$$

The triangle inequality follows because the definition of $\widehat{d}(p,q)$ is equivalent to $\forall m \in V : \widehat{d}(p,q)+p(m) \geq q(m)$. Even if we start from a symmetric distance function $d$ on $V$, it is clear that this extended distance $\widehat{d}$ on $\widehat{V_d}$ is far from symmetric. The Yoneda map

$$y : V_d \rightarrow \widehat{V_d} \qquad n \mapsto y_n$$

is an isometry and the enriched version of the Yoneda lemma says that for all $p \in \widehat{V_d}$

$$p(n) = \widehat{d}(y_n,p)$$

Indeed, taking $m=n$ in $\widehat{d}(y_n,p)+y_n(m) \geq p(m)$ gives $\widehat{d}(y_n,p) \geq p(n)$. Conversely,
from the presheaf condition $d(m,n)+p(n) \geq p(m)$ for all $m,n$ follows

$$p(n) \geq max(0,\underset{m \in V}{sup}(p(m)-d(m,n))) = \widehat{d}(y_n,p)$$

In his paper Taking categories seriously, Bill Lawvere suggested considering enriched presheaves $p \in \widehat{V_d}$ as ‘refined’ closed sets of the vault-space $V_d$.

For every subset of notes $X \subset V$ we can consider the presheaf (use triangle inequality)

$$p_X : V \rightarrow [0,\infty] \qquad m \mapsto \underset{n \in X}{inf}~d(m,n)$$

then its zero set $Z(p_X) = \{ m \in V~:~p_X(m)=0 \}$ can be thought of as the closure of $X$, and the collection of all such closed subsets define a topology on $V$.

In our simple example of the two note vault $V=\{ m,n \}$ this is just the discrete topology, but we can get more interesting spaces. If $d(n,m)=0$ but $d(m,n) > 0$



we get the Sierpinski space: $n$ is the only closed point, and lies in the closure of $m$. Of course, if your vault contains thousands of notes, you might get more interesting topologies.
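
A tiny Python illustration (my own sketch) of Lawvere’s closure: compute $p_X$ and its zero set for the two-note Sierpinski example above:

def closure(d, X):
    # closure of X = zero set of p_X, where p_X(m) = inf over n in X of d(m,n)
    p_X = {m: min(d[m][n] for n in X) for m in d}
    return {m for m in d if p_X[m] == 0}

# the Sierpinski example: d(n,m) = 0 but d(m,n) > 0
d = {'m': {'m': 0, 'n': 2}, 'n': {'m': 0, 'n': 0}}
print(closure(d, {'m'}))   # {'m', 'n'}: n lies in the closure of m
print(closure(d, {'n'}))   # {'n'}: n is a closed point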

In the special case when $V_d$ is a poset-category, as was the case in the shape of languages post, this topology is the down-set (or up-set) topology.

Now, what is this topology when you start with the Lawvere-space $\widehat{V_d}$? From the definitions we see that

$$\widehat{d}(p,q) = 0 \quad \text{iff} \quad \forall n \in V~:~p(n) \geq q(n) \quad \text{iff} \quad p \geq q$$

So, all presheaves in the up-set $\uparrow_p$ lie in the closure of $p$, and $p$ lies in the closure of everything in the down-set $\downarrow_p$ of $p$. So, this time the topology has as its closed sets all down-sets of the poset $\widehat{V_d}$.



What’s missing is a good definition for the implication $p \Rightarrow q$ between two enriched presheaves $p,q \in \widehat{V_d}$. In An enriched category theory of language: from syntax to semantics it is suggested that this should be (perhaps only in their special poset situation, and with adapted notations)

$$p \Rightarrow q : V \rightarrow [0,\infty] \qquad \text{where} \quad (p \Rightarrow q)(n) = \widehat{d}(y_n \wedge p,q)$$

but I can’t even show that this is a presheaf. I may be horribly wrong, but in their proof of this (lemma 5) they seem to use their lemma 4, but with the two factors swapped.

If you have suggestions, please let me know. And if you throw Kelly’s Basic concepts of enriched category theory at me, please add some guidelines on how to use it. I’m just a passer-by.

Probably, I should also read up on Isbell duality, as suggested by Lawvere in his paper Taking categories seriously, and worked out by Simon Willerton in Tight spans, Isbell completions and semi-tropical modules.

(tbc)

Previously in this series:

Next

The tropical brain forest


Stella Maris (Cormac McCarthy)

This week, I was hit hard by synchronicity.

Lately, I’ve been reading up a bit on psycho-analysis, tried to get through Grothendieck’s La clef des songes (the key to dreams) and I’m in the process of writing a series of blogposts on how to construct a topos of the unconscious.

And then I read Cormac McCarthy‘s novels The passenger and Stella Maris, and got hit.



Stella Maris is set in 1972, when the math-prodigy Alicia Western, suffering from hallucinations, admits herself to a psychiatric hospital, carrying a plastic bag containing forty thousand dollars. The book consists entirely of dialogues, the transcripts of seven sessions with her psychiatrist Dr. Cohen (nomen est omen).

Alicia is a doctoral candidate at the University Of Chicago who got a scholarship to visit the IHES to work with Grothendieck on toposes.

During the psychiatric sessions, they talk on a wide variety of topics, including the nature of mathematics, quantum mechanics, music theory, dreams, and the unconscious (and its role in doing mathematics).

The core question is not how you do math but how does the unconscious do it. How it is that it’s demonstrably better at it than you are? You work on a problem and then you put it away for a while. But it doesnt go away. It reappears at lunch. Or while you’re taking a shower. It says: Take a look at this. What do you think? Then you wonder why the shower is cold. Or the soup. Is this doing math? I’m afraid it is. How is it doing it? We dont know. How does the unconscious do math? (page 99)

Before going to the IHES she had to send Grothendieck a paper (‘It was an explication of topos theory that I thought he probably hadn’t considered.’ page 136, and ‘while it proved three problems in topos theory it then set about dismantling the mechanism of the proofs.’ page 151). At the IHES ‘I met three men that I could talk to: Grothendieck, Deligne, and Oscar Zariski.’ (page 136).

I don’t know whether Zariski visited the IHES in the early 1970s, and while most historical allusions (to Grothendieck’s life, his role in Bourbaki etc.) are correct, Alicia mentions the ‘Langlands project’ (page 66), which may very well have been the talk of the town at the IHES in 1972, but the mention of Witten ‘Grothendieck writes everything down. Witten nothing.’ (page 100) raised an eyebrow.

The book also contains these two nice attempts to capture some of the essence of topos theory:

When you get to topos theory you are at the edge of another universe.
You have found a place to stand where you can look back at the world from nowhere. It’s not just some gestalt. It’s fundamental. (page 13)

You asked me about Grothendieck. The topos theory he came up with is a witches’ brew of topology and algebra and mathematical logic.
It doesnt even have a clear identity. The power of the theory is still speculative. But it’s there.
You have a sense that it is waiting quietly with answers to questions that nobody has asked yet. (page 68)

I did read ‘The passenger’ first, which is probably better as then you’d know already some of the ghosts haunting Alicia, but it’s not a must if you are only interested in their discussions about the nature of mathematics. Be warned that it is a pretty dark book, better not read when you’re already feeling low, and it should come with a link to a suicide prevention line.

Here’s a more considered take on Stella Maris:


The enriched vault

In the shape of languages we started from a collection of notes, made a poset of text-snippets from them, and turned this into an enriched category over the unit interval $[0,1]$, following the paper An enriched category theory of language: from syntax to semantics by Tai-Danae Bradley, John Terilla and Yiannis Vlassopoulos.

This allowed us to view the text-snippets as points in a Lawvere pseudoquasi metric space, and to define a ‘topos’ of enriched presheaves on it, including the Yoneda-presheaves containing semantic information of the snippets.

In the previous post we looked at ‘building a second brain’ apps, such as LogSeq and Obsidian, and hoped to use them to test the conjectured ‘topos of the unconscious’.

In Obsidian, a vault is a collection of notes (with their tags and other meta-data), together with all links between them.

The vault of the language-poset will have one note for every text-snippet, and have a link from note $n$ to note $m$ if $m$ is a text-fragment in $n$.

In their paper, Bradley, Terilla and Vlassopoulos use the enrichment structure where $\mu(n,m) \in [0,1]$ is the conditional probability that the fragment $m$ extends to the larger text $n$.

Most Obsidian vaults are a lot more complicated, possibly having oriented cycles in their internal link structure.



Still, it is always possible to turn the notes of the vault into a category enriched over $[0,1]$, in multiple ways, depending on whether we want to focus on the internal link-structure or rather on the semantic similarity between notes, or any combination of these.

Let $X$ be a set of searchable data from your vault. Elements of $X$ may be

  • words contained in notes
  • in- or out-going links between notes
  • tags used
  • YAML-frontmatter

Assign a non-negative real number $r_x \geq 0$ to every $x \in X$. We see $r_x$ as the ‘relevance’ we attach to the search term $x$. So, it is possible to emphasise certain key-words or tags, find certain links more important than others, and so on.

For this relevance function $r : X \rightarrow \mathbb{R}_+$, we have a function defined on all subsets $Y$ of $X$

$$f_r~:~\mathcal{P}(X) \rightarrow \mathbb{R}_+ \qquad Y \mapsto f_r(Y) = \sum_{x \in Y} r_x$$

Take a note $n$ from the vault $V$ and let $X_n$ be the set of search terms from $X$ contained in $n$.

We can then define a (generalised) Jaccard distance for any pair of notes $n$ and $m$ in $V$:

$$ d_r(n,m) = \begin{cases}
0~\text{if $f_r(X_n \cup X_m)=0$} \\ 1-\frac{f_r(X_n \cap X_m)}{f_r(X_n \cup X_m)}~\text{otherwise} \end{cases}$$

This distance is symmetric, $d_r(n,n)=0$ for all notes $n$, and the crucial property is that it satisfies the triangle inequality, that is, for all triples of notes $l$, $m$ and $n$ we have

$$d_r(l,n) \leq d_r(l,m)+d_r(m,n)$$

For a proof in this generality see the paper A note on the triangle inequality for the Jaccard distance by Sven Kosub.

How does this help to make the vault $V$ into a category enriched over $[0,1]$?

The poset $([0,1],\leq)$ is the category with objects all numbers $a \in [0,1]$, and a unique morphism $a \rightarrow b$ between two numbers iff $a \leq b$. This category has limits (infs) and colimits (sups), has a monoidal structure $a \otimes b = a \times b$ with unit object $1$, and an internal hom

$$Hom_{[0,1]}(a,b) = (a,b) = \begin{cases} \frac{b}{a}~\text{if $b \leq a$} \\ 1~\text{otherwise} \end{cases}$$



We say that the vault is an enriched category over $[0,1]$ if for every pair of notes $n$ and $m$ we have a number $\mu(n,m) \in [0,1]$ satisfying for all notes $n$

$$\mu(n,n)=1~\quad~\text{and}~\quad~\mu(m,l) \times \mu(n,m) \leq \mu(n,l)$$

for all triples of notes $l,m$ and $n$.

Starting from any relevance function $r : X \rightarrow \mathbb{R}_+$ we define for every pair $n$ and $m$ of notes the distance function $d_r(m,n)$ satisfying the triangle inequality. If we now take

$$\mu_r(m,n) = e^{-d_r(m,n)}$$

then the triangle inequality translates for every triple of notes $l,m$ and $n$ into

$$\mu_r(m,l) \times \mu_r(n,m) \leq \mu_r(n,l)$$

That is, every relevance function makes $V$ into a category enriched over $[0,1]$.
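
To make this concrete, here is a small Python sketch (my own toy code; the notes, search terms and relevance values are invented) computing the generalised Jaccard distance and the corresponding enrichment value for a pair of notes:

import math

def jaccard_distance(r, X_n, X_m):
    # generalised Jaccard distance d_r(n,m), given the sets of search terms of the
    # two notes and the relevance function r (a dict: search term -> weight)
    union = sum(r[x] for x in X_n | X_m)
    if union == 0:
        return 0.0
    return 1 - sum(r[x] for x in X_n & X_m) / union

def enrichment(r, X_n, X_m):
    # mu_r(n,m) = exp(-d_r(n,m)); the triangle inequality for d_r becomes
    # mu_r(m,l) * mu_r(n,m) <= mu_r(n,l)
    return math.exp(-jaccard_distance(r, X_n, X_m))

# three hypothetical notes and their search terms, all with relevance 1
r = {'topos': 1, 'metric': 1, 'enriched': 1, 'obsidian': 1}
notes = {'n1': {'topos', 'enriched'},
         'n2': {'topos', 'metric', 'enriched'},
         'n3': {'obsidian', 'metric'}}
print(enrichment(r, notes['n1'], notes['n2']))   # about 0.72: close notes
print(enrichment(r, notes['n1'], notes['n3']))   # about 0.37: distant notes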

Two simple relevance functions, and their corresponding distance and enrichment functions are available from Obsidian’s Graph Analysis community plugin.

To get structural information on the link-structure take as $X$ the set of all incoming and outgoing links in your vault, with relevance function the constant function $1$.

‘Jaccard’ in Graph Analysis computes for the current note $n$ the value of $1-d_r(n,m)$ for all notes $m$, so if this value is $a \in [0,1]$, then the corresponding enrichment value is $\mu_r(m,n)=e^{a-1}$.



To get semantic information on the similarity between notes, let $X$ be the set of all words in all notes and take again as relevance function the constant function $1$.

To access ‘BoW’ (Bags of Words) in Graph Analysis, you must first install the (non-community) NLP plugin which enables various types of natural language processing in the vault. The install is best done via the BRAT plugin (perhaps I’ll do a couple of posts on Obsidian someday).

If it gives for the current note $n$ the value $a$ for a note $m$, then again we can take as the enrichment structure $\mu_r(n,m)=e^{a-1}$.



Graph Analysis offers more functionality, and a good introduction is given in this clip:

Calculating the enrichment data for custom designed relevance functions takes a lot more work, but is doable. Perhaps I’ll return to this later.

Mathematically, it is probably more interesting to start with a given enrichment structure $\mu$ on the vault $V$, describe the category of all enriched presheaves $\widehat{V_{\mu}}$ and find out what we can do with it.

(tbc)

Previously in this series:

Next:

The super-vault of missing notes


The shape of languages

In the topology of dreams we looked at Sibony’s idea to view dream-interpretations as sections in a fibered space.

The ‘points’ in the base-space and fibers consist of chunks of text, perhaps connected by links. The topology and shape of this fibered space is still shrouded in mystery.

Let’s look at a simple approach to turn a large number of texts into a topos, and define a loose metric on it.

There’s this paper An enriched category theory of language: from syntax to semantics by Tai-Danae Bradley, John Terilla and Yiannis Vlassopoulos.

Tai-Danae Bradley is an excellent communicator of everything category related, so probably it is more fun to read her own blogposts on this paper:

or to watch her Categories for AI talk: ‘Category Theory Inspired by LLMs’:

Let’s start with a collection of notes. In the paper, they consider all possible texts written in some language, but it may be a set of webpages to train a language model, or a set of recollections by someone.

Next, shred these notes into chunks of text, and let each of them point to all the texts obtained by deleting some words at the start and/or end of it. For example, the note ‘a red rose’ will point to ‘a red’, ‘red rose’, ‘a’, ‘red’ and ‘rose’ (but not to ‘a rose’).

You may call this a category, but to me it is just a poset $(\mathcal{L},\leq)$. The maximal elements are the individual words, the minimal elements are the notes, or websites, we started from.
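
Here is a tiny Python sketch (my own) that shreds a note into its contiguous word-fragments and implements the resulting poset order:

def fragments(note):
    # all contiguous word-fragments of a note (the note itself included)
    words = note.split()
    return {' '.join(words[i:j])
            for i in range(len(words)) for j in range(i + 1, len(words) + 1)}

def leq(q, p):
    # q <= p iff p is a contiguous fragment of q (notes are minimal, words are maximal)
    return p in fragments(q)

print(sorted(fragments('a red rose')))
# ['a', 'a red', 'a red rose', 'red', 'red rose', 'rose']
print(leq('a red rose', 'red rose'), leq('a red rose', 'a rose'))   # True False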



A down-set $A$ of this poset $(\mathcal{L},\leq)$ is a subset of $\mathcal{L}$ closed under taking smaller elements, that is, if $a \in A$ and $b \leq a$, then $b \in A$.

The intersection of two down-sets is again a down-set (or empty), and the union of down-sets is again a downset. That is, down-sets define a topology on our collection of text-snippets, or if you want, on language-fragments.

For example, the open determined by the word ‘red’ is the collection of all text-fragments containing this word.

The corresponding presheaf topos $\widehat{\mathcal{L}}$ is then just the category of all (set-valued) presheaves on this topological space.
As an example, the Yoneda-presheaf $\mathcal{Y}(p)$ of a text-snippet $p$ is the contra-variant functor

$$(\mathcal{L},\leq) \rightarrow \mathbf{Sets}$$

sending any $q \leq p$ to the one-element set $\{ \ast \}$ (there is a unique map from $q$ to $p$), and any $q \not\leq p$ to $\emptyset$. If $A$ is a down-set (an open set of our topological space) then the sections of $\mathcal{Y}(p)$ over $A$ are $\{ \ast \}$ if for all $a \in A$ we have $a \leq p$, and $\emptyset$ otherwise.

The presheaf $\mathcal{Y}(p)$ already contains some semantic information about the snippet $p$ as it gives all contexts in which $p$ appears.

Perhaps interesting is that the ‘points’ of the topos $\widehat{\mathcal{L}}$ are the notes we started from.

Recall that Connes and Gauthier-Lafaye want to construct a topos describing someone’s unconscious, and points of that topos should be the connection with that person’s consciousness.

Suppose you want to unravel your unconscious. You start by writing down a large set of notes containing all relevant facts of your life. Then you construct from these notes the above collection of snippets and its corresponding pre-sheaf topos. Clearly, you wrote your notes consciously, but probably the exact phrasing of these notes, or recurrent themes in them, or some text-combinations are ruled by your unconscious.

Ok, it’s not much, but perhaps it’s a germ of a potential approach…



(Image credit)

Now we come to the interesting part of the paper, the ‘enrichment’ of this poset.

Surely, some of these text-snippets will occur more frequently than others. For example, in your starting notes the snippet ‘red rose’ may appear ten times more often than the snippet ‘red dwarf’, but this is not visible in the poset-structure. So how can we bring in this extra information?

If we have two text-snippets $p$ and $q$ with $q \leq p$, that is, $p$ is a connected sub-string of $q$, we can compute the conditional probability $\pi(q|p)$ which tells us how likely it is that if we spot an occurrence of $p$ in our starting notes, it is part of the larger sentence $q$. These numbers can be easily computed and from the rules of probability we get that for snippets $r \leq q \leq p$ we have that

$$\pi(r|p) = \pi(r|q) \times \pi(q|p)$$

so these numbers (all between $0$ and $1$) behave multiplicatively along paths in the poset.

Nice in theory, but it requires an awful lot of computation. From the paper:

The reader might think of these probabilities $\pi(q|p)$ as being most well defined when $q$ is a short extension of $p$. While one may be skeptical about assigning a probability distribution on the set of all possible texts, it’s reasonable to say there is a nonzero probability that cat food will follow I am going to the store to buy a can of and, practically speaking, that probability can be estimated.

Indeed, existing LLMs successfully learn these conditional probabilities $\pi(q|p)$ using standard machine learning tools trained on large corpora of texts, which may be viewed as providing a wealth of samples drawn from these conditional probability distributions.

It may be easier to have an estimate $\mu(q|p)$ of this conditional probability for immediate successors (that is, if $q$ is obtained from $p$ by adding one word at the beginning or end of it), and then extend this measure to all arrows in the poset by taking the maximum of products along paths. In this way we have for all $r \leq q \leq p$ that

$$\mu(r|p) \geq \mu(r|q) \times \mu(q|p)$$
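
Here is a rough Python sketch of this extension procedure (entirely my own toy estimator, not the construction of the paper): estimate $\mu$ on one-word extensions from snippet counts in a small corpus, and extend it by taking the maximum of products along paths in the poset:

from collections import Counter

def snippet_counts(corpus):
    # count every contiguous word-fragment occurring in the corpus
    counts = Counter()
    for note in corpus:
        words = note.split()
        for i in range(len(words)):
            for j in range(i + 1, len(words) + 1):
                counts[' '.join(words[i:j])] += 1
    return counts

def extension_measure(counts):
    def one_word_extensions(p, q):
        # immediate successors of p that are still fragments of q
        pw, qw = p.split(), q.split()
        cands = set()
        for i in range(len(qw) - len(pw) + 1):
            if qw[i:i + len(pw)] == pw:            # an occurrence of p inside q
                if i > 0:
                    cands.add(' '.join(qw[i - 1:i + len(pw)]))
                if i + len(pw) < len(qw):
                    cands.add(' '.join(qw[i:i + len(pw) + 1]))
        return cands

    def mu(q, p):
        # mu(q|p): count ratio for one-word extensions, maximum of products of
        # one-word steps along paths otherwise, so mu(r|p) >= mu(r|q) * mu(q|p)
        if p == q:
            return 1.0
        if counts[p] == 0:
            return 0.0
        if len(q.split()) == len(p.split()) + 1:
            return counts[q] / counts[p]
        return max((mu(q, r) * mu(r, p) for r in one_word_extensions(p, q)),
                   default=0.0)

    return mu

corpus = ['a red rose', 'a red dwarf', 'a red rose garden']
mu = extension_measure(snippet_counts(corpus))
print(mu('a red rose', 'red'))   # 0.666...: how likely 'red' extends to 'a red rose'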

The upshot is that this measure $\mu$ turns our poset (or category) $(\mathcal{L},\leq)$ into a category ‘enriched’ over the unit interval $[ 0,1 ]$ (suitably made into a monoidal category).

I’ll spare you the details, just want to flash out the corresponding notion of ‘enriched presheaves’ which are the objects of the semantic category $\widehat{\mathcal{L}}^s$ in the paper, which is the enriched version of the presheaf category $\widehat{\mathcal{L}}$.

An enriched presheaf is a function (not functor)

$$F~:~\mathcal{L} \rightarrow [0,1]$$

satisfying the condition that for all text-snippets $r,q \in \mathcal{L}$ we have that

$$\mu(r|q) \leq [F(q),F(r)] = \begin{cases} \frac{F(r)}{F(q)}~\text{if $F(r) \leq F(q)$} \\ 1~\text{otherwise} \end{cases}$$

Note that the enriched (or semantic) Yoneda presheaf $\mathcal{Y}^s(p)(q) = \mu(q|p)$ satisfies this condition, and now this data not only records the contexts in which $p$ appears, but also measures how likely it is for $p$ to appear in a certain context.

Another cute application of the condition on the measure $\mu$ is that it allows us to define a ‘distance function’ (satisfying the triangle inequality) on all text-snippets in $\mathcal{L}$ by

$$d(q,p) = \begin{cases} -ln(\mu(q|p))~\text{if $q \leq p$} \\
\infty~\text{otherwise} \end{cases}$$

So, the higher $\mu(q|p)$ the closer $q$ lies to $p$, and now the snippet $p$ (example ‘red’) not only defines the open set in $\mathcal{L}$ of all texts containing $p$, but now we can structure the snippets in this open set with respect to this ‘distance’.



In this way we can turn any language, or a collection of texts in a given language, into what Lawvere called a ‘generalized metric space’.

It looks as if we are progressing slowly in our, probably futile, attempt to understand Alain Connes’ and Patrick Gauthier-Lafaye’s claim that ‘the unconscious is structured like a topos’.

Even if we accept the fact that we can start from a collection of notes, there are a number of changes we need to make to the above approach:

  • there will be contextual links between these notes
  • we only want to retain the relevant snippets, not all of them
  • between these ‘highlights’ there may also be contextual links
  • texts can be related without having to be concatenations
  • we need to implement changes when new notes are added
  • … (much more)

Perhaps, we should try to work on a specific ‘case’, and explore all technical tools that may help us to make progress.

(tbc)

Previously in this series:

Next:

Loading a second brain
