# neverendingbooks Posts

On March 20th, 2023, David Smith, Joseph Myers, Craig Kaplan and Chaim Goodman-Strauss announced on the arXiv that they’d found an einStein (‘ein Stein’, one stone), that is, a single piece that tiles the entire plane, in uncountably many different ways, all of them non-periodic (that is, the pattern does not even allow a translation symmetry).

This einStein, called the ‘hat’ (some prefer ‘t-shirt’), has a very simple form: you take the most symmetric of all plane tessellations, $\ast 632$ in Conway’s notation, and glue sixteen copies of its orbifold (or, if you so prefer, eight ‘kites’) together to form the gray region below:

(all images copied from the aperiodic monotile paper)

Surprisingly, you do not even need to impose gluing conditions (unlike the two-piece aperiodic kite-and-dart Penrose tilings), but you’ll need flipped hats to fill up the gaps left.

A few years ago, I wrote some posts on Penrose tilings, including details on inflation and deflation, aperiodicity, uncountability, Conway worms, and more:

To prove that hats tile the plane, and do so aperiodically, the authors do not apply inflation and deflation directly on the hats, but rather on associated tilings by ‘meta-tiles’ (rough outlines of blocks of hats). To understand these meta-tiles it is best to look at a large patch of hats:

Here, the dark-blue hats are the ‘flipped’ ones, and the thickened outline around the central one gives the boundary of the ‘empire’ of a flipped hat, that is, the collection of all forced tiles around it. So, around each flipped hat we find such an empire, possibly with different orientation. Also note that most of the white hats (there are also isolated white hats at the centers of triangles of dark-blue hats) make up ‘lines’ similar to the Conway worms in the case of the Penrose tilings. We can break up these ‘worms’ into ‘propeller-blades’ (gray) and ‘parallelograms’ (white). This gives us four types of blocks, the ‘meta-tiles’:

The empire of a flipped hat consists of an H-block (for Hexagon) made of one dark-blue (flipped) and three light-blue (ordinary) hats, one P-block (for Parallelogram), one F-block (for Fylfot, a propeller blade), and one T-block (for Triangle) for the remaining hat.

The H,T and P blocks have rotational symmetries, whereas the underlying block of hats does not. So we mark the intended orientation of the hats by an arrow, pointing to the side having two or three hat-pieces sticking out.

Any hat-tiling gives us a tiling with the meta-tile pieces H,T,P and F. Conversely, not every tiling by meta-tiles has an underlying hat-tiling, so we have to impose gluing conditions on the H,T,P and F-pieces. We can do this by using the boundary of the underlying hat-block, cutting away and adding hat-parts. Then, any H,T,P and F-tiling satisfying these gluing conditions will come from an underlying hat-tiling.

The idea is now to devise ‘inflation’- and ‘deflation’-rules for the H,T,P and F-pieces. For ‘inflation’ start from a tiling satisfying the gluing (and orientation) conditions, and look for the central points of the propellers (the thick red points in the middle picture).

These points will determine the shape of the larger H,T,P and F-pieces, together with their orientations. The authors provide an applet to see these inflations in action.

Choose your meta-tile (H,T,P or F), then click on ‘Build Supertiles’ a number of times to get larger and larger tilings, and finally unmark the ‘Draw Supertiles’ button to get a hat-tiling.

For ‘deflation’ we can cut up H,T,P and F-pieces into smaller ones as in the pictures below:

Clearly, the hard part is to verify that these ‘inflated’ and ‘deflated’ tilings still satisfy the gluing conditions, so that they will have an underlying hat-tiling with larger (resp. smaller) hats.

This calls for a lengthy case-by-case analysis, which is the core part of the paper and depends on computer verification.

Once this is verified, aperiodicity follows as in the case of Penrose tilings. Suppose a tiling is preserved under translation by a vector $\vec{v}$. As ‘inflation’ and ‘deflation’ only depend on the direct vicinity of a tile, translation by $\vec{v}$ is also a symmetry of the inflated tiling. Now, iterate this process until the diameter of the large tiles becomes larger than the length of $\vec{v}$ to obtain a contradiction.

Siobhan Roberts wrote a fine article, Elusive ‘Einstein’ Solves a Longstanding Math Problem, for the New York Times on this einStein.

It would be nice to try this strategy on other symmetric tilings: break the symmetry by gluing together a small number of its orbifolds in such a way that this extended tile (possibly together with its mirror image) tiles the plane, and find out whether you’ve discovered a new einStein!

Last time we constructed a wide variety of Jaccard-like distance functions $d(m,n)$ on the set of all notes in our vault $V = \{ k,l,m,n,\dots \}$. That is, $d(m,n) \geq 0$ and for each triple of notes we have the triangle inequality

$$d(k,l)+d(l,m) \geq d(k,m)$$

By construction we had $d(m,n)=d(n,m)$, but we can modify any of these distances by setting $d'(m,n)= \infty$ if there is no path of internal links from note $m$ to note $n$, and $d'(m,n)=d(m,n)$ otherwise. This new generalised distance is no longer symmetric, but still satisfies the triangle inequality, and turns $V$ into a Lawvere space.
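To make this concrete, here is a toy Python sketch of the modified distance $d'$. The vault, the base distances and the link graph are all made up for illustration; `math.inf` plays the role of $\infty$.

```python
import math

# Hypothetical three-note vault with a symmetric base distance.
notes = ["k", "l", "m"]
d_sym = {("k", "l"): 0.4, ("l", "m"): 0.5, ("k", "m"): 0.7}

def d(a, b):
    """Symmetric base distance."""
    if a == b:
        return 0.0
    return d_sym.get((a, b), d_sym.get((b, a)))

links = {"k": {"l"}, "l": {"m"}, "m": set()}  # directed internal links

def reachable(a, b):
    """Is there a path of internal links from note a to note b?"""
    seen, todo = set(), [a]
    while todo:
        x = todo.pop()
        if x == b:
            return True
        if x not in seen:
            seen.add(x)
            todo.extend(links[x])
    return False

def d_prime(a, b):
    """Asymmetric Lawvere distance: infinite if b is unreachable from a."""
    return d(a, b) if reachable(a, b) else math.inf

# The triangle inequality survives: if both legs are finite, then so is
# the direct path, and it is bounded by the base triangle inequality.
for a in notes:
    for b in notes:
        for c in notes:
            assert d_prime(a, c) <= d_prime(a, b) + d_prime(b, c) + 1e-9
```

Note how the asymmetry appears: `d_prime("k", "m")` is finite while `d_prime("m", "k")` is infinite, since no link-path leads back.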

$V$ becomes an enriched category over the monoidal category $[0,\infty]=\mathbb{R}_+ \cup \{ \infty \}$ (the poset-category for the reverse ordering ($a \rightarrow b$ iff $a \geq b$) with $+$ as ‘tensor product’ and $0$ as unit). The ‘enrichment’ is the map

$$V \times V \rightarrow [0,\infty] \qquad (m,n) \mapsto d(m,n)$$

Writers (just like children) have always loved colimits. They want to morph their notes into a compelling story. Sadly, such colimits do not always exist yet in our vault category: they are among the many notes still missing from it.

(Image credit)

For ordinary categories, the way forward is to ‘upgrade’ your category to the presheaf category. In it, ‘the child can cobble together crazy constructions to his heart’s content’. For our ‘enriched’ vault $V_d$ we should look at the (enriched) category of enriched presheaves $\widehat{V_d}$. In it, the writer will find inspiration on how to cobble together her texts.

An enriched presheaf is a map $p : V \rightarrow [0,\infty]$ such that for all notes $m,n \in V$ we have

$$d(m,n) + p(n) \geq p(m)$$

Think of $p(n)$ as the distance (or similarity) of the virtual note $p$ to the existing note $n$, then this condition is just an extension of the triangle inequality. The lower the value of $p(n)$ the closer $p$ resembles $n$.

Each note $n \in V$ determines its Yoneda presheaf $y_n : V \rightarrow [0,\infty]$ by $m \mapsto d(m,n)$. By the triangle inequality this is indeed an enriched presheaf in $\widehat{V_d}$.
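Here is a minimal numeric check, with a made-up three-note vault, that the Yoneda presheaf $y_n(m) = d(m,n)$ does satisfy the enriched presheaf condition $d(m,n) + p(n) \geq p(m)$:

```python
import itertools

# Toy vault with hand-picked distances satisfying the triangle inequality.
V = ["k", "l", "m"]
d = {("k","k"): 0, ("l","l"): 0, ("m","m"): 0,
     ("k","l"): 0.4, ("l","k"): 0.4,
     ("l","m"): 0.5, ("m","l"): 0.5,
     ("k","m"): 0.7, ("m","k"): 0.7}

def is_presheaf(p):
    """Check d(m,n) + p(n) >= p(m) for all pairs of notes."""
    return all(d[(m, n)] + p[n] >= p[m] - 1e-9
               for m, n in itertools.product(V, V))

def yoneda(n):
    """The Yoneda presheaf of note n: m |-> d(m, n)."""
    return {m: d[(m, n)] for m in V}

# The triangle inequality is exactly what makes these presheaves.
assert all(is_presheaf(yoneda(n)) for n in V)
```

By contrast, an arbitrary function like `{"k": 1.0, "l": 0.0, "m": 1.0}` fails the condition, so not every map $V \rightarrow [0,\infty]$ qualifies.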

The set of all enriched presheaves $\widehat{V_d}$ has a lot of extra structure. It is a poset (note the reversal of ordering due to the poset structure on $[0,\infty]$)

$$p \leq q \qquad \text{iff} \qquad \forall n \in V : q(n) \leq p(n)$$

with minimal element $0 : \forall n \in V, 0(n)=\infty$, and maximal element $1 : \forall n \in V, 1(n)=0$.

It is even a lattice with $(p \wedge q)(n) = \max(p(n),q(n))$ and $(p \vee q)(n)=\min(p(n),q(n))$. It is easy to check that $p \wedge q$ and $p \vee q$ are again enriched presheaves.

Here’s $\widehat{V_d}$ when the vault consists of just two notes $V=\{ m,n \}$ of non-zero distance to each other (whether symmetric or not) as a subset of $[0,\infty] \times [0,\infty]$.

This vault $\widehat{V_d}$ of all missing (and existing) notes is again enriched over $[0,\infty]$ via

$$\widehat{d} : \widehat{V_d} \times \widehat{V_d} \rightarrow [0,\infty] \qquad \widehat{d}(p,q) = \max(0,\underset{n \in V}{\sup}~(q(n)-p(n)))$$

The triangle inequality follows because the definition of $\widehat{d}(p,q)$ is equivalent to $\forall m \in V : \widehat{d}(p,q)+p(m) \geq q(m)$. Even if we start from a symmetric distance function $d$ on $V$, it is clear that this extended distance $\widehat{d}$ on $\widehat{V_d}$ is far from symmetric. The Yoneda map

$$y : V_d \rightarrow \widehat{V_d} \qquad n \mapsto y_n$$

is an isometry and the enriched version of the Yoneda lemma says that for all $p \in \widehat{V_d}$

$$p(n) = \widehat{d}(y_n,p)$$

Indeed, taking $m=n$ in $\widehat{d}(y_n,p)+y_n(m) \geq p(m)$ gives $\widehat{d}(y_n,p) \geq p(n)$. Conversely,
from the presheaf condition $d(m,n)+p(n) \geq p(m)$ for all $m,n$ follows

$$p(n) \geq \max(0,\underset{m \in V}{\sup}~(p(m)-d(m,n))) = \widehat{d}(y_n,p)$$
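A quick numeric illustration of this enriched Yoneda lemma, on a toy vault with a hand-picked ‘virtual note’ $p$ (all values invented):

```python
# Toy check of the enriched Yoneda lemma p(n) = d̂(y_n, p), where
# d̂(p, q) = max(0, sup_n (q(n) - p(n))).
V = ["k", "l", "m"]
d = {("k","k"): 0, ("l","l"): 0, ("m","m"): 0,
     ("k","l"): 0.4, ("l","k"): 0.4,
     ("l","m"): 0.5, ("m","l"): 0.5,
     ("k","m"): 0.7, ("m","k"): 0.7}

def d_hat(p, q):
    """The extended (asymmetric) distance on presheaves."""
    return max(0.0, max(q[n] - p[n] for n in V))

def yoneda(n):
    return {m: d[(m, n)] for m in V}

# A 'virtual note' p, chosen so that d(m,n) + p(n) >= p(m) holds.
p = {"k": 0.3, "l": 0.2, "m": 0.6}

# The Yoneda lemma: the distance from y_n to p recovers p(n) exactly.
for n in V:
    assert abs(d_hat(yoneda(n), p) - p[n]) < 1e-9
```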

In his paper Taking categories seriously, Bill Lawvere suggested considering enriched presheaves $p \in \widehat{V_d}$ as ‘refined’ closed sets of the vault-space $V_d$.

For every subset of notes $X \subset V$ we can consider the presheaf (the triangle inequality gives the presheaf condition)

$$p_X : V \rightarrow [0,\infty] \qquad m \mapsto \underset{n \in X}{\inf}~d(m,n)$$

then its zero set $Z(p_X) = \{ m \in V~:~p_X(m)=0 \}$ can be thought of as the closure of $X$, and the collection of all such closed subsets define a topology on $V$.
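This closure operator is easy to play with. A sketch on an invented three-note vault with one asymmetric zero-distance (so the closure is genuinely bigger than the set itself):

```python
# Lawvere's 'refined closed sets': for X ⊂ V take
#   p_X(m) = inf_{n in X} d(m, n),   closure(X) = Z(p_X) = { m : p_X(m) = 0 }.
V = ["k", "l", "m"]
# Asymmetric toy distance (satisfies the triangle inequality):
# d(l,k) = 0 but d(k,l) > 0, so l will lie in the closure of {k}.
d = {("k","k"): 0, ("l","l"): 0, ("m","m"): 0,
     ("k","l"): 0.4, ("l","k"): 0.0,
     ("l","m"): 0.5, ("m","l"): 0.5,
     ("k","m"): 0.7, ("m","k"): 0.5}

def p_X(X):
    """The presheaf attached to a subset X of notes."""
    return {m: min(d[(m, n)] for n in X) for m in V}

def closure(X):
    """The zero set of p_X: all notes at distance zero from X."""
    return {m for m, v in p_X(X).items() if v == 0}

assert closure({"k"}) == {"k", "l"}   # l is 'infinitely close' to k
assert closure({"l"}) == {"l"}        # but not the other way round
```

This is exactly the Sierpinski-type phenomenon described below: a point can lie in the closure of another without the converse holding.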

In our simple example of the two note vault $V=\{ m,n \}$ this is just the discrete topology, but we can get more interesting spaces. If $d(n,m)=0$ but $d(m,n) > 0$

we get the Sierpinski space: $n$ is the only closed point, and lies in the closure of $m$. Of course, if your vault contains thousands of notes, you might get more interesting topologies.

In the special case when $V_d$ is a poset-category, as was the case in the shape of languages post, this topology is the down-set (or up-set) topology.

Now, what is this topology when you start with the Lawvere-space $\widehat{V_d}$? From the definitions we see that

$$\widehat{d}(p,q) = 0 \quad \text{iff} \quad \forall n \in V~:~p(n) \geq q(n) \quad \text{iff} \quad p \leq q$$

So, all presheaves in the down-set $\downarrow_p$ lie in the closure of $p$, and $p$ lies in the closure of everything in the up-set $\uparrow_p$ of $p$. So, this time the topology has as its closed sets all down-sets of the poset $\widehat{V_d}$.

What’s missing is a good definition of the implication $p \Rightarrow q$ between two enriched presheaves $p,q \in \widehat{V_d}$. In An enriched category theory of language: from syntax to semantics it is suggested that this should be (perhaps only to be used in their special poset situation, and with adapted notations)

$$p \Rightarrow q : V \rightarrow [0,\infty] \qquad \text{where} \quad (p \Rightarrow q)(n) = \widehat{d}(y_n \wedge p,q)$$

but I can’t even show that this is a presheaf. I may be horribly wrong, but in their proof of this (lemma 5) they seem to use their lemma 4, but with the two factors swapped.

If you have suggestions, please let me know. And if you throw Kelly’s Basic concepts of enriched category theory at me, please add some guidelines on how to use it. I’m just a passer-by.

Probably, I should also read up on Isbell duality, as suggested by Lawvere in his paper Taking categories seriously, and worked out by Simon Willerton in Tight spans, Isbell completions and semi-tropical modules

(tbc)

Previously in this series:

This week, I was hit hard by synchronicity.

Lately, I’ve been reading up a bit on psycho-analysis, tried to get through Grothendieck’s La clef des songes (the key to dreams) and I’m in the process of writing a series of blogposts on how to construct a topos of the unconscious.

And then I read Cormac McCarthy’s novels The passenger and Stella Maris, and got hit.

Stella Maris is set in 1972, when the math-prodigy Alicia Western, suffering from hallucinations, admits herself to a psychiatric hospital, carrying a plastic bag containing forty thousand dollars. The book consists entirely of dialogues, the transcripts of seven sessions with her psychiatrist Dr. Cohen (nomen est omen).

Alicia is a doctoral candidate at the University of Chicago who got a scholarship to visit the IHES to work with Grothendieck on toposes.

During the psychiatric sessions, they talk on a wide variety of topics, including the nature of mathematics, quantum mechanics, music theory, dreams, and the unconscious (and its role in doing mathematics).

The core question is not how you do math but how does the unconscious do it. How it is that it’s demonstrably better at it than you are? You work on a problem and then you put it away for a while. But it doesnt go away. It reappears at lunch. Or while you’re taking a shower. It says: Take a look at this. What do you think? Then you wonder why the shower is cold. Or the soup. Is this doing math? I’m afraid it is. How is it doing it? We dont know. How does the unconscious do math? (page 99)

Before going to the IHES she had to send Grothendieck a paper (‘It was an explication of topos theory that I thought he probably hadn’t considered.’ page 136, and ‘while it proved three problems in topos theory it then set about dismantling the mechanism of the proofs.’ page 151). At the IHES ‘I met three men that I could talk to: Grothendieck, Deligne, and Oscar Zariski.’ (page 136).

I don’t know whether Zariski visited the IHES in the early seventies, but most historical allusions (to Grothendieck’s life, his role in Bourbaki, etc.) are correct. Alicia mentions the ‘Langlands project’ (page 66), which may very well have been the talk of the town at the IHES in 1972, but the mention of Witten, ‘Grothendieck writes everything down. Witten nothing.’ (page 100), raised an eyebrow.

The book also contains these two nice attempts to capture some of the essence of topos theory:

When you get to topos theory you are at the edge of another universe.
You have found a place to stand where you can look back at the world from nowhere. It’s not just some gestalt. It’s fundamental. (page 13)

You asked me about Grothendieck. The topos theory he came up with is a witches’ brew of topology and algebra and mathematical logic.
It doesnt even have a clear identity. The power of the theory is still speculative. But it’s there.
You have a sense that it is waiting quietly with answers to questions that nobody has asked yet. (page 68)

I did read ‘The passenger’ first, which is probably the better order, as you then already know some of the ghosts haunting Alicia, but it’s not a must if you are only interested in their discussions about the nature of mathematics. Be warned that it is a pretty dark book, better not read when you’re already feeling low, and it should come with a link to a suicide prevention line.

Here’s a more considered take on Stella Maris:

In the shape of languages we started from a collection of notes, made a poset of text-snippets from them, and turned this into an enriched category over the unit interval $[0,1]$, following the paper An enriched category theory of language: from syntax to semantics by Tai-Danae Bradley, John Terilla and Yiannis Vlassopoulos.

This allowed us to view the text-snippets as points in a Lawvere pseudoquasi metric space, and to define a ‘topos’ of enriched presheaves on it, including the Yoneda-presheaves containing semantic information of the snippets.

In the previous post we looked at ‘building a second brain’ apps, such as LogSeq and Obsidian, and hoped to use them to test the conjectured ‘topos of the unconscious’.

In Obsidian, a vault is a collection of notes (with their tags and other meta-data), together with all links between them.

The vault of the language-poset will have one note for every text-snippet, and have a link from note $n$ to note $m$ if $m$ is a text-fragment of $n$.

In their paper, Bradley, Terilla and Vlassopoulos use the enrichment structure where $\mu(n,m) \in [0,1]$ is the conditional probability that the fragment $m$ is extended to the larger text $n$.

Most Obsidian vaults are a lot more complicated, possibly having oriented cycles in their internal link structure.

Still, it is always possible to turn the notes of the vault into a category enriched over $[0,1]$, in multiple ways, depending on whether we want to focus on the internal link-structure or rather on the semantic similarity between notes, or any combination of these.

Let $X$ be a set of searchable data from your vault. Elements of $X$ may be

• words contained in notes
• in- or out-going links between notes
• tags used
• YAML-frontmatter

Assign a real number $r_x \geq 0$ to every $x \in X$. We see $r_x$ as the ‘relevance’ we attach to the search term $x$. So, it is possible to emphasise certain keywords or tags, find certain links more important than others, and so on.

For this relevance function $r : X \rightarrow \mathbb{R}_+$, we have a function defined on all subsets $Y$ of $X$

$$f_r~:~\mathcal{P}(X) \rightarrow \mathbb{R}_+ \qquad Y \mapsto f_r(Y) = \sum_{x \in Y} r_x$$

Take a note $n$ from the vault $V$ and let $X_n$ be the set of search terms from $X$ contained in $n$.

We can then define a (generalised) Jaccard distance for any pair of notes $n$ and $m$ in $V$:

$$d_r(n,m) = \begin{cases} 0 & \text{if } f_r(X_n \cup X_m)=0 \\ 1-\dfrac{f_r(X_n \cap X_m)}{f_r(X_n \cup X_m)} & \text{otherwise} \end{cases}$$

This distance is symmetric, $d_r(n,n)=0$ for all notes $n$, and the crucial property is that it satisfies the triangle inequality, that is, for all triples of notes $l$, $m$ and $n$ we have

$$d_r(l,n) \leq d_r(l,m)+d_r(m,n)$$

For a proof in this generality see the paper A note on the triangle inequality for the Jaccard distance by Sven Kosub.
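The weighted Jaccard distance is a few lines of Python. The relevance values and note contents below are invented for illustration; the triple loop verifies Kosub’s triangle inequality on this toy data:

```python
# Relevance-weighted Jaccard distance
#   d_r(n, m) = 1 - f_r(X_n ∩ X_m) / f_r(X_n ∪ X_m)
relevance = {"topos": 3.0, "rose": 1.0, "red": 1.0, "#math": 2.0}

def f_r(Y):
    """Total relevance of a set of search terms."""
    return sum(relevance.get(x, 1.0) for x in Y)

def d_r(Xn, Xm):
    union = Xn | Xm
    if f_r(union) == 0:
        return 0.0
    return 1.0 - f_r(Xn & Xm) / f_r(union)

# Hypothetical notes, each given by its set of search terms.
X = {"n1": {"topos", "red"}, "n2": {"red", "rose"}, "n3": {"#math", "topos"}}

# Triangle inequality d_r(l,n) <= d_r(l,m) + d_r(m,n), per Kosub's note.
notes = list(X)
for a in notes:
    for b in notes:
        for c in notes:
            assert d_r(X[a], X[c]) <= d_r(X[a], X[b]) + d_r(X[b], X[c]) + 1e-9
```

Taking all relevance values equal to $1$ recovers the ordinary Jaccard distance.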

How does this help to make the vault $V$ into a category enriched over $[0,1]$?

The poset $([0,1],\leq)$ is the category with objects all numbers $a \in [0,1]$, and a unique morphism $a \rightarrow b$ between two numbers iff $a \leq b$. This category has limits (infs) and colimits (sups), has a monoidal structure $a \otimes b = a \times b$ with unit object $1$, and an internal hom

$$Hom_{[0,1]}(a,b) = (a,b) = \begin{cases} \dfrac{b}{a} & \text{if } b \leq a \\ 1 & \text{otherwise} \end{cases}$$
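The defining adjunction of this internal hom, $a \times b \leq c$ iff $a \leq (b,c)$, can be sanity-checked numerically. A toy sketch (the sample values are arbitrary):

```python
# Internal hom on [0,1]: hom(b, c) = c/b if c <= b (and b > 0), else 1.
def hom(b, c):
    return c / b if (b > 0 and c <= b) else 1.0

# Adjunction check: a*b <= c  iff  a <= hom(b, c), on a grid of samples.
vals = [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]
for a in vals:
    for b in vals:
        for c in vals:
            assert (a * b <= c + 1e-12) == (a <= hom(b, c) + 1e-12)
```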

We say that the vault is an enriched category over $[0,1]$ if for every pair of notes $n$ and $m$ we have a number $\mu(n,m) \in [0,1]$ such that, for all notes $l$, $m$ and $n$,

$$\mu(n,n)=1~\quad~\text{and}~\quad~\mu(m,l) \times \mu(n,m) \leq \mu(n,l)$$

Starting from any relevance function $r : X \rightarrow \mathbb{R}_+$ we define for every pair $n$ and $m$ of notes the distance function $d_r(m,n)$ satisfying the triangle inequality. If we now take

$$\mu_r(m,n) = e^{-d_r(m,n)}$$

then the triangle inequality translates for every triple of notes $l,m$ and $n$ into

$$\mu_r(m,l) \times \mu_r(n,m) \leq \mu_r(n,l)$$

That is, every relevance function makes $V$ into a category enriched over $[0,1]$.
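The passage from triangle inequality to enrichment is just exponentiation, and can be checked in a few lines (made-up symmetric distance data):

```python
import math

# From any distance d satisfying the triangle inequality,
#   mu(m, n) = exp(-d(m, n))
# gives an enrichment over [0,1]: mu(n,n) = 1 and mu(m,l)*mu(n,m) <= mu(n,l).
d = {("k","k"): 0, ("l","l"): 0, ("m","m"): 0,
     ("k","l"): 0.4, ("l","k"): 0.4,
     ("l","m"): 0.5, ("m","l"): 0.5,
     ("k","m"): 0.7, ("m","k"): 0.7}

def mu(a, b):
    return math.exp(-d[(a, b)])

V = ["k", "l", "m"]
assert all(mu(n, n) == 1.0 for n in V)          # exp(-0) = 1
for l in V:
    for m in V:
        for n in V:
            # exp reverses the ordering: sums become products.
            assert mu(m, l) * mu(n, m) <= mu(n, l) + 1e-12
```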

Two simple relevance functions, and their corresponding distance and enrichment functions are available from Obsidian’s Graph Analysis community plugin.

To get structural information on the link-structure take as $X$ the set of all incoming and outgoing links in your vault, with relevance function the constant function $1$.

‘Jaccard’ in Graph Analysis computes for the current note $n$ the value of $1-d_r(n,m)$ for all notes $m$, so if this value is $a \in [0,1]$, then the corresponding enrichment value is $\mu_r(m,n)=e^{a-1}$.

To get semantic information on the similarity between notes, let $X$ be the set of all words in all notes and take again as relevance function the constant function $1$.

To access ‘BoW’ (Bags of Words) in Graph Analysis, you must first install the (non-community) NLP plugin which enables various types of natural language processing in the vault. The install is best done via the BRAT plugin (perhaps I’ll do a couple of posts on Obsidian someday).

If it gives for the current note $n$ the value $a$ for a note $m$, then again we can take as the enrichment structure $\mu_r(n,m)=e^{a-1}$.

Graph Analysis offers more functionality, and a good introduction is given in this clip:

Calculating the enrichment data for custom designed relevance functions takes a lot more work, but is doable. Perhaps I’ll return to this later.

Mathematically, it is probably more interesting to start with a given enrichment structure $\mu$ on the vault $V$, describe the category of all enriched presheaves $\widehat{V_{\mu}}$ and find out what we can do with it.

(tbc)

Previously in this series:

Before ChatGPT, the hype among productivity boosters was PKM, or Personal Knowledge Management systems.

It gained popularity through Tiago Forte’s book ‘Building a second brain’, and (for academics perhaps a more useful read) ‘How to take smart notes’ by Sönke Ahrens.

These books promote new techniques for note-taking (and for storing these notes) such as the PARA-method, the CODE-system, and Zettelkasten.

Unmistakable Creative has some posts on the principles behind the ‘second brain’ approach.

Your brain isn’t like a hard drive or a dropbox, where information is stored in folders and subfolders. None of our thoughts or ideas exist in isolation. Information is organized in a series of non-linear associative networks in the brain.

Networked thinking is not just a more efficient way to organize information. It frees your brain to do what it does best: Imagine, invent, innovate, and create. The less you have to remember where information is, the more you can use it to summarize that information and turn knowledge into action.

and

A network has no “correct” orientation and thus no bottom and no top. Each individual, or “node,” in a network functions autonomously, negotiating its own relationships and coalescing into groups. Examples of networks include a flock of birds, the World Wide Web, and the social ties in a neighborhood. Networks are inherently “bottom-up” in that the structure emerges organically from small interactions without direction from a central authority.

-Tiago Forte, Tagging for Personal Knowledge Management

There are several apps you can use to start building your second brain; the most popular seem to be Roam Research, LogSeq, and Obsidian.

These systems allow you to store, link and manipulate a large collection of notes, query them as a database, modify them in various ways via plugins or scripts, and navigate the network created via graph-views.

Exactly the kind of things we need to modify the simple system from the shape of languages-post into a proper topos of the unconscious.

I’ve been playing around with Obsidian which I like because it has good LaTeX plugins, powerful database tools via the Dataview plugin, and one can execute codeblocks within notes in almost any programming language (python, haskell, lean, Mathematica, ruby, javascript, R, …).

Most of all it has a vibrant community of users, an excellent forum, and a well-documented Obsidian hub.

There’s just one problem, I’m a terrible note-taker, so how can I begin to load my ‘second brain’?

Obsidian has several plugins to import data, such as your Kindle highlights, your Twitter feed, your Readwise-data, and many others, but having been too lazy in the past, I cannot use any of them.

In fact, the only useful collection of notes I have are my blog-posts. So, I’ve uploaded NeverEndingBooks into Obsidian, one note per post (admittedly, not very Zettelkasten-like), half a million words in total.

Fortunately, I did tag most of these posts at the time. Together with other meta-data this results in the Graph view below (under ‘Files’ toggled tags, under ‘Groups’ three tag-colours, and under ‘Display’ toggled arrows). One can add colour-groups based on tags or other information (here, red dots are posts tagged ‘Grothendieck’, the blue ones are tagged ‘Conway’, the purple ones tagged ‘Connes’, just for the sake of illustration). In Obsidian you can zoom into this graph, place a pointer on a node to highlight the connecting dots, and much more.

Because I tend to forget such things, and as it may be useful to other people running a WordPress-blog making heavy use of MathJax, here’s the procedure I followed:

1. Follow the instructions from Convert wordpress articles to markdown.

In the wizard I’ve opted to go only for yearly folders, to prefix posts with the date, and to save all images.

2. This gives you a directory with one folder per year containing markdown versions of your posts, and in each year-folder a subfolder ‘img’ containing all images.

Turn this directory into an Obsidian-vault by opening Obsidian, click on the ‘open another vault’ icon (third from bottom-left), select ‘Open folder as vault’ and navigate to your directory.

3. You will notice that most of your LaTeX cannot be parsed, because during the markdown conversion backslashes are treated as special characters, resulting in two backslashes for every LaTeX command…

A remark before trying to solve this: another option might be to use the wordpress-to-hugo-exporter, resulting in clean LaTeX, but lacking the possibility to opt for yearly-folders (it dumps all posts into one folder), and it makes a mess of the image-files.

4. So, we will need to do a lot of search&replaces in all files, and need a convenient tool for this.

First option was the Sublime Text app, which is free and does the search&replaces quickly. The problem is that you have to save each of the files, one at a time! This may take hours.

I’ve done it using the Search and Replace app ($3) which allows you to make several searches/replaces at the same time (I messed up LaTeX code in previous exports, so needed to do many more changes). It warns you that it is dangerous to replace strings in all files (which is the reason why Sublime Text makes it difficult). You can ignore the warning, but only after you put the ‘img’ folders away in a safe place; otherwise it will also try to make the changes to these files, recognise that they are not text-files, and drop them altogether…

That’s it. I now have a backup network-version of this blog.

As we mentioned in the previous post, a first attempt to construct the ‘topos of the unconscious’ might be to start with a collection of notes (the ‘conscious’) and work on the semantics of text-snippets to unravel (a part of) the unconscious underpinning of these notes. We also mentioned that the poset-structure in that post should be replaced by a more involved network structure.

What interests me most is whether such an approach might be doable ‘in practice’, and Obsidian looks like the perfect tool to try this out. What we need is a sufficiently large set of notes, of independent interest, to inject into Obsidian. The more meta it is, the better…

(tbc)

Previously in this series:

Next: The enriched vault

In the topology of dreams we looked at Sibony’s idea to view dream-interpretations as sections in a fibered space, with the ‘points’ in the base-space and fibers consisting of chunks of text, perhaps connected by links. The topology and shape of this fibered space is still shrouded in mystery.

Let’s look at a simple approach to turn a large number of texts into a topos, and define a loose metric on it. There’s this paper An enriched category theory of language: from syntax to semantics by Tai-Danae Bradley, John Terilla and Yiannis Vlassopoulos.
Tai-Danae Bradley is an excellent communicator of everything category related, so probably it is more fun to read her own blogposts on this paper, or to watch her Categories for AI talk: ‘Category Theory Inspired by LLMs’.

Let’s start with a collection of notes. In the paper, they consider all possible texts written in some language, but it may be a set of webpages to train a language model, or a set of recollections by someone.

Next, shred these notes into chunks of text, and point one of these to all the texts obtained by deleting some words at the start and/or end of it. For example, the note ‘a red rose’ will point to ‘a red’, ‘red rose’, ‘a’, ‘red’ and ‘rose’ (but not to ‘a rose’).

You may call this a category; to me it is just a poset $(\mathcal{L},\leq)$. The maximal elements are the individual words, the minimal elements are the notes, or websites, we started from.

A down-set $A$ of this poset $(\mathcal{L},\leq)$ is a subset of $\mathcal{L}$ closed under taking smaller elements, that is, if $a \in A$ and $b \leq a$, then $b \in A$.

The intersection of two down-sets is again a down-set (or empty), and the union of down-sets is again a down-set. That is, down-sets define a topology on our collection of text-snippets, or if you want, on language-fragments.

For example, the open set determined by the word ‘red’ is the collection of all text-fragments containing this word.

The corresponding presheaf topos $\widehat{\mathcal{L}}$ is then just the category of all (set-valued) presheaves on this topological space.

As an example, the Yoneda-presheaf $\mathcal{Y}(p)$ of a text-snippet $p$ is the contravariant functor

$$(\mathcal{L},\leq) \rightarrow \mathbf{Sets}$$

sending any $q \leq p$ to the unique map $\ast$ from $q$ to $p$, and if $q \not\leq p$ then we map it to $\emptyset$. If $A$ is a down-set (an open set of our topological space) then the sections of $\mathcal{Y}(p)$ over $A$ are $\{ \ast \}$ if for all $a \in A$ we have $a \leq p$, and $\emptyset$ otherwise.
The presheaf $\mathcal{Y}(p)$ already contains some semantic information about the snippet $p$ as it gives all contexts in which $p$ appears.

Perhaps interesting is that the ‘points’ of the topos $\widehat{\mathcal{L}}$ are the notes we started from.

Recall that Connes and Gauthier-Lafaye want to construct a topos describing someone’s unconscious, and points of that topos should be the connection with that person’s consciousness.

Suppose you want to unravel your unconscious. You start by writing down a large set of notes containing all relevant facts of your life. Then you construct from these notes the above collection of snippets and its corresponding presheaf topos. Clearly, you wrote your notes consciously, but probably the exact phrasing of these notes, or recurrent themes in them, or some text-combinations are ruled by your unconscious.

Ok, it’s not much, but perhaps it’s a germ of a potential approach…

(Image credit)

Now we come to the interesting part of the paper, the ‘enrichment’ of this poset.

Surely, some of these text-snippets will occur more frequently than others. For example, in your starting notes the snippet ‘red rose’ may appear ten times more often than the snippet ‘red dwarf’, but this is not visible in the poset-structure. So how can we bring in this extra information?

Suppose we have two text-snippets $p$ and $q$ with $q \leq p$, that is, $p$ is a connected sub-string of $q$. We can compute the conditional probability $\pi(q|p)$, which tells us how likely it is that if we spot an occurrence of $p$ in our starting notes, it is part of the larger sentence $q$. These numbers can be easily computed, and from the rules of probability we get that for snippets $r \leq q \leq p$ we have

$$\pi(r|p) = \pi(r|q) \times \pi(q|p)$$

so these numbers (all between $0$ and $1$) behave multiplicatively along paths in the poset. Nice in theory, but it requires an awful lot of computation.
From the paper:

The reader might think of these probabilities $\pi(q|p)$ as being most well defined when $q$ is a short extension of $p$. While one may be skeptical about assigning a probability distribution on the set of all possible texts, it’s reasonable to say there is a nonzero probability that cat food will follow I am going to the store to buy a can of and, practically speaking, that probability can be estimated. Indeed, existing LLMs successfully learn these conditional probabilities $\pi(q|p)$ using standard machine learning tools trained on large corpora of texts, which may be viewed as providing a wealth of samples drawn from these conditional probability distributions.

It may be easier to have an estimate $\mu(q|p)$ of this conditional probability for immediate successors (that is, if $q$ is obtained from $p$ by adding one word at the beginning or end of it), and then extend this measure to all arrows in the poset by taking the maximum of products along paths. In this way we have for all $r \leq q \leq p$ that

$$\mu(r|p) \geq \mu(r|q) \times \mu(q|p)$$

The upshot is that this measure $\mu$ turns our poset (or category) $(\mathcal{L},\leq)$ into a category ‘enriched’ over the unit interval $[0,1]$ (suitably made into a monoidal category). I’ll spare you the details; I just want to flesh out the corresponding notion of ‘enriched presheaves’, which are the objects of the semantic category $\widehat{\mathcal{L}}^s$ in the paper, the enriched version of the presheaf category $\widehat{\mathcal{L}}$.
An enriched presheaf is a function (not functor)
$$F~:~\mathcal{L} \rightarrow [0,1]$$
satisfying the condition that for all text-snippets $r,q \in \mathcal{L}$ we have
$$\mu(r|q) \leq [F(q),F(r)] = \begin{cases} \frac{F(r)}{F(q)} & \text{if}~F(r) \leq F(q) \\ 1 & \text{otherwise} \end{cases}$$
Note that the enriched (or semantic) Yoneda presheaf $\mathcal{Y}^s(p)(q) = \mu(q|p)$ satisfies this condition, and now this data not only records the contexts in which $p$ appears, but also measures how likely it is for $p$ to appear in a certain context.

Another cute application of the condition on the measure $\mu$ is that it allows us to define a ‘distance function’ (satisfying the triangle inequality) on all text-snippets in $\mathcal{L}$ by
$$d(q,p) = \begin{cases} -\ln(\mu(q|p)) & \text{if}~q \leq p \\ \infty & \text{otherwise} \end{cases}$$
So, the higher $\mu(q|p)$, the closer $q$ lies to $p$, and now the snippet $p$ (for example ‘red’) not only defines the open set in $\mathcal{L}$ of all texts containing $p$, but we can also structure the snippets in this open set with respect to this ‘distance’.

In this way we can turn any language, or a collection of texts in a given language, into what Lawvere called a ‘generalized metric space’.
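To see this generalized metric concretely, here is a small sketch (the `mu` table is hypothetical, chosen so that $\mu(r|p) \geq \mu(r|q)\,\mu(q|p)$ holds): taking $d = -\ln \mu$ turns that inequality into the triangle inequality, while distances may be infinite and need not be symmetric, which is exactly what Lawvere’s generalized metric spaces allow.

```python
import math

# hypothetical values of mu(q|p) on a few snippet pairs (p inside q),
# chosen so that mu(r|p) >= mu(r|q) * mu(q|p)
mu = {
    ("red rose", "red"): 0.3,
    ("a red rose", "red rose"): 0.5,
    ("a red rose", "red"): 0.15,
}

def d(q, p):
    """Lawvere distance: -ln(mu(q|p)) if q <= p, infinity otherwise."""
    m = mu.get((q, p), 0.0)
    return -math.log(m) if m > 0 else math.inf

# the inequality on mu becomes the triangle inequality for d
r, q, p = "a red rose", "red rose", "red"
assert d(r, p) <= d(r, q) + d(q, p) + 1e-9
# distances are asymmetric: going the other way is infinitely far
assert d("red", "red rose") == math.inf
```

Note the asymmetry: ‘red rose’ lies at finite distance from ‘red’, but not the other way around, since ‘red rose’ is not a sub-string of ‘red’.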

It looks as if we are progressing slowly in our, probably futile, attempt to understand Alain Connes’ and Patrick Gauthier-Lafaye’s claim that ‘the unconscious is structured like a topos’.

Even if we accept the fact that we can start from a collection of notes, there are a number of changes we need to make to the above approach:

• there will be contextual links between these notes
• we only want to retain the relevant snippets, not all of them
• between these ‘highlights’ there may also be contextual links
• texts can be related without having to be concatenations
• we need to implement changes when new notes are added
• … (much more)

Perhaps, we should try to work on a specific ‘case’, and explore all technical tools that may help us to make progress.

(tbc)

Previously in this series:

Next:

Last May, the meeting Lacan et Grothendieck, l’impossible rencontre? took place in Paris (see this post). Videos of that meeting are now available online.

Here’s the talk by Alain Connes and Patrick Gauthier-Lafaye on their book A l’ombre de Grothendieck et de Lacan : un topos sur l’inconscient ? (see this post).

Let’s quickly recall their main ideas:

1. The unconscious is structured as a topos (Jacques Lacan argued it was structured as a language), because we need a framework allowing logic without the law of the excluded middle for Lacan’s formulas of sexuation to make some sense at all.

2. This topos may differ from person to person, so we do not all share the same rules of logic (as observed in real life).

3. Consciousness is related to the points of the topos (they are not precise on this, neither in the talk, nor the book).

4. All these individual toposes are ruled by a classifying topos, and they see Lacan’s work as the very first steps towards trying to describe the unconscious by a geometrical theory (though his formulas are not first order).

Surely these are intriguing ideas, if only we would know how to construct the topos of someone’s unconscious.

Let’s go looking for clues.

At the same meeting, there was a talk by Daniel Sibony: “Mathématiques et inconscient”

Sibony started out as a mathematician, then turned to psychiatry in the early seventies. He was acquainted with both Grothendieck and Lacan, and even brought them together once, over lunch, some day in 1973. He makes a one-line appearance in Grothendieck’s Récoltes et Semailles, when Grothendieck describes his friends in ‘Survivre et Vivre’:

“Daniel Sibony (who stayed away from this group, while pursuing its evolution out of the corner of a semi-disdainful, smirking eye)”

In his talk, Sibony said he had a similar idea, 50 years before Connes and Gauthier-Lafaye (3:04 into the clip):

“At the same time (the early seventies) I did a seminar in Vincennes, where I was a math professor, on the topology of dreams. At the time I didn’t have categories at my disposal, but I used fibered spaces instead. I showed how we could interpret dreams with a fibered space. This is consistent with the Freudian idea, except that Freud says we should take the list of words from the story of the dream and look for associations. For me, these associations were in the fibers, and these thoughts on fibers and sheaves have always followed me. And now, after 50 years, I find this pretty book by Alain Connes and Patrick Gauthier-Lafaye on toposes, and see that my thoughts on dreams as sheaves and fibered spaces are but a special case of theirs.”

This looks interesting. After all, Freud called dream interpretation the ‘royal road’ to the unconscious. “It is the ‘King’s highway’ along which everyone can travel to discover the truth of unconscious processes for themselves.”

Sibony clarifies his idea in the interview L’utilisation des rêves en psychothérapie with Maryse Siksou.

“The dream brings blocks of words, of “compacted” meanings, and we question, according to the good old method, each of these blocks, each of these points, and associate around them (we “unblock” around…); we let each point unfold according to the “fiber” which is its own.

I introduced this notion of the dream as fibered space in an article in the review Scilicet in 1972, and in a seminar that I gave at the University of Vincennes in 1973 under the title “Topologie et interpretation des rêves”, which Jacques Lacan and his close retinue attended throughout the year.

The idea is that the dream is a sheaf, a bundle of fibers, each of which is associated with a “word” of the dream; interpretation makes the fibers appear, and one can pick an element from each, which is of course “displaced” in relation to the word that “produced” the fiber, and these elements are articulated with other elements taken in other fibers, to finally create a message which, once again, does not necessarily say the meaning of the dream because a dream has as many meanings as recipients to whom it is told, but which produces a strong statement, a relevant statement, which can restart the work.”

Key images in the dream (the ‘points’ of the base-space) can stand for entirely different situations in someone’s life (the points in the ‘fiber’ over an image). The therapist’s job is to find a suitable ‘section’ in this ‘sheaf’ to further the therapy.

It’s a bit like translating a sentence from one language to another. Every word (point of the base-space) can have several possible translations with subtle differences (the points in the fiber over the word). It’s the translator’s job to find the best ‘section’ in this sheaf of possibilities.

This translation-analogy is used by Daniel Sibony in his paper Traduire la passe:

“It therefore operates just like the dream through articulated choices, from one fiber to another, in a bundle of speaking fibers; it articulates them by seeking the optimal section. In fact, the translation takes place between two fiber bundles, each in a language, but in the starting bundle the choice seems fixed by the initial text. However, more or less consciously, the translator “bursts” each word into a larger fiber, he therefore has a bundle of fibers where the given text seems after the fact a singular choice, which will produce another choice in the bundle of the other language.”

This paper also contains a pre-ChatGPT story (we’re in 1998), in which the language model fails because it has far too few alternatives in its fibers:

I felt it during a “humor festival” where I was approached by someone (who seemed to have some humor) and who was a robot. We had a brief conversation, very acceptable, beyond the conventional witticisms and knowing sighs he uttered from time to time to complain about the lack of atmosphere, repeating that after all we are not robots.

I thought at first that it must be a walking walkie-talkie and that in fact I was talking to a guy who was remote-controlling it from his cabin. But the object was programmed; the unforeseen effects of meaning were all the more striking. To my question: “Who created you?” he answered with a strange word, a kind of technical god.

I went on to ask him who he thought created me; his answer was immediate: “Oedipus”. (He knew, having questioned me, that I was a psychoanalyst.) The piquancy of his answer pleased me (without Oedipus, at least on a first level, no analyst). These bursts of meaning that we know in children, psychotics, to whom we attribute divinatory gifts — when they only exist, save their skin, questioning us about our being to defend theirs — , these random strokes of meaning shed light on the classic aftermaths where when a tile arrives, we hook it up to other tiles from the past, it ties up the pain by chaining the meaning.

Anyway, the conversation continuing, the robot asked me to psychoanalyse him; I asked him what he was suffering from. His answer was immediate: “Oedipus”.

Disappointing and enlightening: it shows that with each “word” of the interlocutor, the robot makes correspond a signifying constellation, a fiber of elements; choosing a word in each fiber, he then articulates the whole with obvious sequence constraints: a bit of readability and a certain phrasal push that leaves open the game of exchange. And now, in the fiber concerning the “psy” field, chance or constraint had fixed him on the same word, “Oedipus”, which, by repeating itself, closed the scene heavily.

Okay, we have a first potential approximation to Connes and Gauthier-Lafaye’s elusive topos: a sheaf of possible interpretations of base-words in a language.

But the base-space is still rather discrete, or at best linearly ordered. And in the fibers, and among the sections, there isn’t much of a topology at work either.

Perhaps, we should have a look at applications of topology and/or topos theory in large language models?

(tbc)

Next:

The shape of languages

If you have neither the time nor energy to watch more than one interview or talk about Grothendieck’s life and mathematics, may I suggest you save that privilege for Leila Schneps’ talk on ‘Le génie de Grothendieck’ in the ‘Thé & Sciences’ series at the Salon Nun in Paris.

I was going to add some ‘relevant’ time slots after the embedded YouTube-clip below, but I really think it is better to watch Leila’s interview in its entirety. Enjoy!

In the Grothendieck meets Lacan-post we did mention that Alain Connes wrote a book together with Patrick Gauthier-Lafaye “A l’ombre de Grothendieck et de Lacan, un topos sur l’inconscient”, on the potential use of Grothendieck’s toposes for the theory of unconsciousness, proposed by the French psychoanalyst Jacques Lacan.

You can read a bit more about that book in the topos of unconsciousness. For another take on this you can visit the blog of l’homme quantique – Sur les traces de Lévi-Strauss, Lacan et Foucault, filant comme le sable au vent marin…. There is a series of posts dedicated to the reading of ‘A l’ombre de Grothendieck et de Lacan’:

Alain Connes isn’t the first (former) Bourbaki-member to write a book together with a Lacan-disciple.

In 1984, Henri Cartan (one of the founding fathers of Bourbaki) teamed up with the French psychoanalyst (and student of Lacan) Jean-Francois Chabaud for “Le Nœud dit du fantasme – Topologie de Jacques Lacan”.

(Chabaud on the left, Cartan on the right, Cartan’s wife Nicole in the middle)

“In this work Jean François Chabaud, a psychoanalyst, demonstrates the interchangeability of the consistencies of the Whitehead chain (commonly called the ‘knot of phantasy’ or of the ‘non-existent sexual relation’ in analytic circles), and can thus venture to propose, building on Jacques Lacan’s essential remarks, a writing of the turn, another name for the ‘passe’. Henri Cartan (1904-2008), one of the founding members of N. Bourbaki, contributed to this work with two reflections: the first considers this demonstration and augments it with a presentation; the second deals specifically with the orientation of the consistencies. A series of traces of a sequence of the chain precedes this volume, which ends with ‘L’en-plus-de-trait’, a contribution to nodal writing.”

Lacan was not only fascinated by the topology of surfaces such as the crosscap (see the topos of unconsciousness), but also by the theory of knots and links.

The Borromean link figures in Lacan’s world for the Real, the Imaginary and the Symbolic. The Whitehead link (two unknots linked together, with linking number zero) is thought to be the knot (sic) of phantasy.

In 1986, there was the exposition “La Chaine de J.H.C. Whitehead” in the Palais de la découverte in Paris (from which also the Chabaud-Cartan picture above is taken), where la Salle de Mathématiques was filled with different models of the Whitehead link.

In 1988, the exposition was held in the Deutsches Museum in Munich and was called “Wandlung – Darstellung der topologischen Transformationen der Whitehead-Kette”.

The set-up in Munich was mathematically more interesting, as one could see the link-projection on the floor and use it to compute the linking number. It might have been even more interesting if the difference between the projections of two subsequent models had been exactly one Reidemeister move.
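The computation such a floor-projection invites is easy to sketch. In an oriented diagram of a two-component link, each crossing where one component passes over the other gets a sign $\pm 1$, and the linking number is half the sum of these signs. A minimal Python sketch, with the crossing signs of the standard Whitehead diagram entered by hand as an assumed input:

```python
def linking_number(signs):
    """Linking number of an oriented two-component link: half the sum of
    the signs (+1 or -1) of the crossings where one component passes
    over the other (self-crossings of a single component are ignored)."""
    total = sum(signs)
    # the inter-component crossing signs always sum to an even number
    assert total % 2 == 0
    return total // 2

# standard Whitehead diagram: four inter-component crossings with signs
# +1, +1, -1, -1 (the fifth crossing is a self-crossing of one
# component), so the linking number vanishes
assert linking_number([+1, +1, -1, -1]) == 0
# compare with the Hopf link, whose two crossings have equal sign
assert linking_number([+1, +1]) == 1
```

Since the signs cancel, the Whitehead link has linking number 0 although it cannot be unlinked, which is exactly why finer invariants are needed to distinguish it from the trivial link.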

You can view more pictures of these and subsequent expositions on the page dedicated to the work of Jean-Francois Chabaud: La Chaîne de Whitehead ou Le Nœud dit du fantasme Livre et Expositions 1980/1997.

Part of the first picture featured also in the Hommage to Henri Cartan (1904-2008) by Michele Audin in the Notices of the AMS. She writes (about the 1986 exposition):

“At the time, Henri Cartan was 82 years old and retired, but he continued to be interested in mathematics and, as one sees, its popularization.”