# From the Da Vinci code to Habiro

The Fibonacci sequence reappears a bit later in Dan Brown’s book ‘The Da Vinci Code’ where it is used to log in to the bank account of Jacques Sauniere at the fictitious Parisian branch of the Depository Bank of Zurich.

Last time we saw that the Hankel matrix of the Fibonacci series $F=(1,1,2,3,5,\dots)$ is invertible over $\mathbb{Z}$
$H(F) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \in SL_2(\mathbb{Z})$
and we can use the rule for the co-multiplication $\Delta$ on $\Re(\mathbb{Q})$, the algebra of rational linear recursive sequences, to determine $\Delta(F)$.

For a general integral linear recursive sequence the corresponding Hankel matrix is invertible over $\mathbb{Q}$, but rarely over $\mathbb{Z}$. So we need another approach to compute the co-multiplication on $\Re(\mathbb{Z})$.

Any integral sequence $a = (a_0,a_1,a_2,\dots)$ can be seen as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the integral polynomial ring $\mathbb{Z}[x]$ to $\mathbb{Z}$ itself via the rule $\lambda_a(x^n) = a_n$.

If $a \in \Re(\mathbb{Z})$, then there is a monic polynomial with integral coefficients of a certain degree $n$

$f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n$

such that for every integer $m$ we have that

$a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0$

Alternatively, we can look at $a$ as defining a $\mathbb{Z}$-linear map $\lambda_a$ from the quotient ring $\mathbb{Z}[x]/(f(x))$ to $\mathbb{Z}$.
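To make this concrete, here is a small Python sketch for the Fibonacci sequence, where $f(x) = x^2-x-1$ (the helper name `lam` and the truncation at ten terms are my choices, purely for illustration). It checks that $\lambda_F$ vanishes on multiples of $f(x)$, which is exactly the recurrence, so that $\lambda_F$ is indeed well-defined on the quotient $\mathbb{Z}[x]/(f(x))$:

```python
# lambda_F sends x^k to F_k; it kills x^m * f(x) for f(x) = x^2 - x - 1,
# because lambda_F(x^m (x^2 - x - 1)) = F_{m+2} - F_{m+1} - F_m = 0
F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

def lam(coeffs):
    """Apply lambda_F to the polynomial sum(coeffs[k] * x^k)."""
    return sum(c * F[k] for k, c in enumerate(coeffs))

# x^m * (x^2 - x - 1) has coefficient list: m zeros followed by [-1, -1, 1]
for m in range(8):
    assert lam([0] * m + [-1, -1, 1]) == 0

print("lambda_F factors through Z[x]/(x^2 - x - 1)")
```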

The multiplicative structure on $\mathbb{Z}[x]/(f(x))$ dualizes to a co-multiplication $\Delta_f$ on the set of all such linear maps $(\mathbb{Z}[x]/(f(x)))^{\ast}$ and we can compute $\Delta_f(a)$.

We see that the set of all integral linear recursive sequences can be identified with the direct limit
$\Re(\mathbb{Z}) = \underset{\underset{f|g}{\rightarrow}}{lim}~(\frac{\mathbb{Z}[x]}{(f(x))})^{\ast}$
(where the directed system is ordered via division of monic integral polynomials) and so is equipped with a co-multiplication $\Delta = \underset{\rightarrow}{lim}~\Delta_f$.

Btw. the ring structure on $\Re(\mathbb{Z}) \subset (\mathbb{Z}[x])^{\ast}$ comes from restricting to $\Re(\mathbb{Z})$ the dual structures of the co-ring structure on $\mathbb{Z}[x]$ given by
$\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1$

From this description it is clear that you need to know a hell of a lot of number theory to describe this co-multiplication explicitly.

As most of us prefer to work with rings rather than co-rings it is a good idea to begin to study this co-multiplication $\Delta$ by looking at the dual ring structure of
$\Re(\mathbb{Z})^{\ast} = \underset{\underset{ f | g}{\leftarrow}}{lim}~\frac{\mathbb{Z}[x]}{(f(x))}$
This is the completion of $\mathbb{Z}[x]$ at the multiplicative set of all monic integral polynomials.

This is a horrible ring and very little is known about it. Some general results were proved by Kazuo Habiro in his paper ‘Cyclotomic completions of polynomial rings’.

In fact, Habiro got interested in a certain subring of $\Re(\mathbb{Z})^{\ast}$ which we now know as the Habiro ring and which seems to pop up in all stuff about the field with one element, $\mathbb{F}_1$ (more on this another time). Habiro’s ring is

$\widehat{\mathbb{Z}[q]} = \underset{\underset{n|m}{\leftarrow}}{lim}~\frac{\mathbb{Z}[q]}{(q^n-1)}$

and its elements are all formal power series of the form
$a_0 + a_1 (q-1) + a_2 (q^2-1)(q-1) + \dots + a_n (q^n-1)(q^{n-1}-1) \dots (q-1) + \dots$
with all coefficients $a_n \in \mathbb{Z}$.

Here’s a funny property of such series. If you evaluate them at a general $q \in \mathbb{C}$ these series are likely to diverge almost everywhere, but they do converge at all roots of unity! Indeed, at a primitive $k$-th root of unity every term from the $k$-th one on contains the factor $q^k-1 = 0$, so the series reduces to a finite sum.
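A quick numerical sketch of this collapse (the helper `habiro_partial` and the choice of all coefficients $a_n = 1$ are mine, just for illustration): at $q = -1$, a primitive square root of unity, every term from the second one on contains the factor $q^2-1 = 0$, so the partial sums stabilise immediately, while away from the roots of unity they explode.

```python
def habiro_partial(a, q, N):
    """Partial sum of a_0 + a_1(q-1) + a_2(q^2-1)(q-1) + ... through index N."""
    total, prod = 0, 1              # prod holds (q^n - 1)(q^{n-1} - 1)...(q - 1)
    for n in range(N + 1):
        total += a[n] * prod
        prod *= q ** (n + 1) - 1
    return total

coeffs = [1] * 51                   # the series with every a_n = 1

# at q = -1 all terms with n >= 2 contain (q^2 - 1) = 0: the sum is finite
print(habiro_partial(coeffs, -1, 5))    # 1 + 1*(-2) = -1
print(habiro_partial(coeffs, -1, 50))   # still -1

# at q = 2 (not a root of unity) the partial sums blow up
print(abs(habiro_partial(coeffs, 2, 20)) > 10 ** 50)   # True
```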

Some people say that these functions are ‘leaking out of the roots of unity’.

If the ring $\Re(\mathbb{Z})^{\ast}$ is controlled by the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, then Habiro’s ring is controlled by the abelianization $Gal(\overline{\mathbb{Q}}/\mathbb{Q})^{ab} \simeq \hat{\mathbb{Z}}^{\ast}$.

# A forgotten type and roots of unity (again)

The monstrous moonshine picture is the finite piece of Conway’s Big Picture needed to understand the 171 moonshine groups associated to conjugacy classes of the monster.

Last time I claimed that there were exactly 7 types of local behaviour, but I missed one. The forgotten type is centered at the number lattice $84$.

Locally around it the moonshine picture looks like this
$\xymatrix{42 \ar@{-}[dr] & 28 \frac{1}{3} \ar@[red]@{-}[d] & 42 \frac{1}{2} \ar@{-}[ld] \\ 28 \ar@[red]@{-}[r] & \color{grey}{84} \ar@[red]@{-}[r] \ar@[red]@{-}[d] \ar@{-}[rd] & 28 \frac{2}{3} \\ & 252 & 168}$

and it involves all square roots of unity ($42$, $42 \frac{1}{2}$ and $168$) and $3$-rd roots of unity ($28$, $28 \frac{1}{3}$, $28 \frac{2}{3}$ and $252$) centered at $84$.

No, I’m not hallucinating, there are indeed $3$ square roots of unity and $4$ third roots of unity as they come in two families, depending on which of the two canonical forms is used to express a lattice.

In the ‘normal’ expression $M \frac{g}{h}$ the two square roots are $42$ and $42 \frac{1}{2}$ and the three third roots are $28, 28 \frac{1}{3}$ and $28 \frac{2}{3}$. But in the ‘other’ expression
$M \frac{g}{h} = (\frac{g'}{h},\frac{1}{h^2M})$
(with $g.g' \equiv 1~mod~h$) the families of $2$-nd and $3$-rd roots of unity are
$\{ 42 \frac{1}{2} = (\frac{1}{2},\frac{1}{168}), 168 = (0,\frac{1}{168}) \}$
and
$\{ 28 \frac{1}{3} = (\frac{1}{3},\frac{1}{252}), 28 \frac{2}{3} = (\frac{2}{3},\frac{1}{252}), 252 = (0 , \frac{1}{252}) \}$
As in the tetrahedral snake post, it is best to view the four $3$-rd roots of unity centered at $84$ as the vertices of a tetrahedron with center of gravity at $84$. Power maps in the first family correspond to rotations along the axis through $252$ and power maps in the second family are rotations along the axis through $28$.
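The translation between the two expressions is simple modular arithmetic. A hypothetical helper (the name and the integer-triple encoding of the pair are my choices) that rewrites $M \frac{g}{h}$ as $(\frac{g'}{h}, \frac{1}{h^2 M})$, reproducing the second families listed above:

```python
def other_expression(M, g, h):
    """Rewrite the lattice M g/h as (g'/h, 1/(h^2 M)) with g.g' = 1 mod h.
    Returned as the integer triple (g', h, h^2 * M)."""
    g_inv = pow(g, -1, h) if h > 1 else 0   # modular inverse, Python >= 3.8
    return (g_inv, h, h * h * M)

print(other_expression(42, 1, 2))    # 42 1/2  -> (1, 2, 168), i.e. (1/2, 1/168)
print(other_expression(168, 0, 1))   # 168     -> (0, 1, 168), i.e. (0, 1/168)
print(other_expression(28, 1, 3))    # 28 1/3  -> (1, 3, 252)
print(other_expression(28, 2, 3))    # 28 2/3  -> (2, 3, 252), since 2.2 = 1 mod 3
```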

In the ‘normal’ expression of lattices there’s then a total of 8 different local types, but two of them consist of just one number lattice: in $8$ the local picture contains all square, $4$-th and $8$-th roots of unity centered at $8$, and in $84$ the square and $3$-rd roots.

Perhaps surprisingly, if we redo everything in the ‘other’ expression (and use the other families of roots of unity), then the moonshine picture has only 7 types of local behaviour. The forgotten type $84$ appears to split into two occurrences of other types (one with only square roots of unity, and one with only $3$-rd roots).

I wonder what all this has to do with the action of the Bost-Connes algebra on the big picture or with Plazas’ approach to moonshine via non-commutative geometry.

# From the Da Vinci code to Galois

In The Da Vinci Code, Dan Brown feels he needs to bring in a French cryptologist, Sophie Neveu, to explain the mystery behind this series of numbers:

13 – 3 – 2 – 21 – 1 – 1 – 8 – 5

The Fibonacci sequence, 1-1-2-3-5-8-13-21-34-55-89-144-… is such that any number in it is the sum of the two previous numbers.

It is the most famous of all integral linear recursive sequences, that is, a sequence of integers

$a = (a_0,a_1,a_2,a_3,\dots)$

such that there is a monic polynomial with integral coefficients of a certain degree $n$

$f(x) = x^n + b_1 x^{n-1} + b_2 x^{n-2} + \dots + b_{n-1} x + b_n$

such that for every integer $m$ we have that

$a_{m+n} + b_1 a_{m+n-1} + b_2 a_{m+n-2} + \dots + b_{n-1} a_{m+1} + b_n a_m = 0$

For the Fibonacci series $F=(F_0,F_1,F_2,\dots)$, this polynomial can be taken to be $x^2-x-1$ because
$F_{m+2} = F_{m+1}+F_m$
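In code the defining property is a one-line check. A small sketch (the function name `satisfies` is mine) verifying that the Fibonacci numbers are annihilated by $f(x) = x^2-x-1$, i.e. $b_1 = b_2 = -1$:

```python
def satisfies(seq, b):
    """Check a_{m+n} + b_1 a_{m+n-1} + ... + b_n a_m = 0 for all available m,
    where b = [b_1, ..., b_n] are the lower coefficients of the monic f."""
    n = len(b)
    return all(seq[m + n] + sum(b[i] * seq[m + n - 1 - i] for i in range(n)) == 0
               for m in range(len(seq) - n))

F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(satisfies(F, [-1, -1]))   # True:  F_{m+2} - F_{m+1} - F_m = 0
print(satisfies(F, [-2, 1]))    # False: x^2 - 2x + 1 does not annihilate F
```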

The set of all integral linear recursive sequences, let’s call it $\Re(\mathbb{Z})$, is a beautiful object of great complexity.

For starters, it is a ring. That is, we can add and multiply such sequences. If

$a=(a_0,a_1,a_2,\dots),~\quad \text{and}~\quad a'=(a'_0,a'_1,a'_2,\dots)~\quad \in \Re(\mathbb{Z})$

then the sequences

$a+a' = (a_0+a'_0,a_1+a'_1,a_2+a'_2,\dots) \quad \text{and} \quad a \times a' = (a_0.a'_0,a_1.a'_1,a_2.a'_2,\dots)$

are again linear recursive. The zero and unit in this ring are the constant sequences $0=(0,0,\dots)$ and $1=(1,1,\dots)$.
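For instance, the Hadamard square $F \times F = (F_n^2)$ is again linear recursive: the roots of its annihilating polynomial are products of pairs of roots of $x^2-x-1$, which gives $(x-\varphi^2)(x-\psi^2)(x+1) = x^3-2x^2-2x+1$. A quick sketch of the check:

```python
# the Hadamard square of Fibonacci satisfies a_{m+3} = 2 a_{m+2} + 2 a_{m+1} - a_m,
# coming from (x - phi^2)(x - psi^2)(x + 1) = x^3 - 2x^2 - 2x + 1
F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
Fsq = [x * x for x in F]   # (1, 1, 4, 9, 25, 64, ...)

ok = all(Fsq[m + 3] - 2 * Fsq[m + 2] - 2 * Fsq[m + 1] + Fsq[m] == 0
         for m in range(len(Fsq) - 3))
print(ok)   # True
```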

So far, nothing terribly difficult or exciting.

Moreover, $\Re(\mathbb{Z})$ has a co-unit, that is, a ring morphism

$\epsilon~:~\Re(\mathbb{Z}) \rightarrow \mathbb{Z}$

sending a sequence $a = (a_0,a_1,\dots)$ to its first entry $a_0$.

It’s a bit more difficult to see that $\Re(\mathbb{Z})$ also has a co-multiplication

$\Delta~:~\Re(\mathbb{Z}) \rightarrow \Re(\mathbb{Z}) \otimes_{\mathbb{Z}} \Re(\mathbb{Z})$
with properties dual to those of usual multiplication.

To describe this co-multiplication in general will have to await another post. For now, we will describe it on the easier ring $\Re(\mathbb{Q})$ of all rational linear recursive sequences.

For such a sequence $q = (q_0,q_1,q_2,\dots) \in \Re(\mathbb{Q})$ we consider its Hankel matrix. From the sequence $q$ we can form symmetric $k \times k$ matrices whose $(i+1)$-th anti-diagonal consists of entries all equal to $q_i$
$H_k(q) = \begin{bmatrix} q_0 & q_1 & q_2 & \dots & q_{k-1} \\ q_1 & q_2 & & & q_k \\ q_2 & & & & q_{k+1} \\ \vdots & & & & \vdots \\ q_{k-1} & q_k & q_{k+1} & \dots & q_{2k-2} \end{bmatrix}$
The Hankel matrix of $q$, $H(q)$ is $H_k(q)$ where $k$ is maximal such that $det~H_k(q) \not= 0$, that is, $H_k(q) \in GL_k(\mathbb{Q})$.

Let $S(q)=(s_{ij})$ be the inverse of $H(q)$, then the co-multiplication map
$\Delta~:~\Re(\mathbb{Q}) \rightarrow \Re(\mathbb{Q}) \otimes \Re(\mathbb{Q})$
sends the sequence $q = (q_0,q_1,\dots)$ to
$\Delta(q) = \sum_{i,j=0}^{k-1} s_{ij} (D^i q) \otimes (D^j q)$
where $D$ is the shift operator on sequences
$D(a_0,a_1,a_2,\dots) = (a_1,a_2,\dots)$

If $a \in \Re(\mathbb{Z})$ is such that $H(a) \in GL_k(\mathbb{Z})$ then the same formula gives $\Delta(a)$ in $\Re(\mathbb{Z})$.

For the Fibonacci sequence $F$ the Hankel matrix is
$H(F) = \begin{bmatrix} 1 & 1 \\ 1& 2 \end{bmatrix} \in GL_2(\mathbb{Z}) \quad \text{with inverse} \quad S(F) = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}$
and therefore
$\Delta(F) = 2 F \otimes F - DF \otimes F - F \otimes DF + DF \otimes DF$
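The whole computation is easy to redo by machine. A sketch (Laplace-expansion determinant, fine for tiny matrices; all names are my choices) that confirms $k=2$ is maximal for the Fibonacci sequence, inverts $H(F)$, and recovers the coefficients $s_{ij}$ appearing in $\Delta(F)$:

```python
def hankel(seq, k):
    """The k x k Hankel matrix H_k with entries seq[i + j]."""
    return [[seq[i + j] for j in range(k)] for i in range(k)]

def det(m):
    # Laplace expansion along the first row; fine for small matrices
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(det(hankel(F, 2)))   # 1 : H_2(F) is invertible, even over Z
print(det(hankel(F, 3)))   # 0 : so k = 2 is maximal and H(F) = H_2(F)

# invert the 2x2 matrix [[a, b], [c, d]]; the determinant is 1 here,
# so the inverse is integral
(a, b), (c, d) = hankel(F, 2)
D = a * d - b * c
S = [[d // D, -b // D], [-c // D, a // D]]
print(S)   # [[2, -1], [-1, 1]], the matrix of coefficients s_ij in Delta(F)
```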
There’s a lot of number theoretic and Galois-information encoded into the co-multiplication on $\Re(\mathbb{Q})$.

To see this we will describe the co-multiplication on $\Re(\overline{\mathbb{Q}})$ where $\overline{\mathbb{Q}}$ is the field of all algebraic numbers. One can show that

$\Re(\overline{\mathbb{Q}}) \simeq (\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}_{\times}^{\ast}] \otimes \overline{\mathbb{Q}}[d]) \oplus \sum_{i=0}^{\infty} \overline{\mathbb{Q}} S_i$

Here, $\overline{\mathbb{Q}}[ \overline{\mathbb{Q}}_{\times}^{\ast}]$ is the group-algebra of the multiplicative group of non-zero elements $x \in \overline{\mathbb{Q}}^{\ast}_{\times}$ and each $x$, which corresponds to the geometric sequence $x=(1,x,x^2,x^3,\dots)$, is a group-like element
$\Delta(x) = x \otimes x \quad \text{and} \quad \epsilon(x) = 1$

$\overline{\mathbb{Q}}[d]$ is the universal enveloping algebra of the $1$-dimensional Lie algebra on the primitive element $d = (0,1,2,3,\dots)$, that is
$\Delta(d) = d \otimes 1 + 1 \otimes d \quad \text{and} \quad \epsilon(d) = 0$

Finally, the co-algebra maps on the elements $S_i$ are given by
$\Delta(S_i) = \sum_{j=0}^i S_j \otimes S_{i-j} \quad \text{and} \quad \epsilon(S_i) = \delta_{0i}$

That is, the co-multiplication on $\Re(\overline{\mathbb{Q}})$ is completely known. To deduce from it the co-multiplication on $\Re(\mathbb{Q})$ we have to consider the invariants under the action of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$ as
$\Re(\overline{\mathbb{Q}})^{Gal(\overline{\mathbb{Q}}/\mathbb{Q})} \simeq \Re(\mathbb{Q})$

Unlike the Fibonacci sequence, not every integral linear recursive sequence has a Hankel matrix with determinant $\pm 1$, so determining the co-multiplication on $\Re(\mathbb{Z})$ is a lot harder, as we will see another time.

Reference: Richard G. Larson, Earl J. Taft, ‘The algebraic structure of linearly recursive sequences under Hadamard product’

# What we (don’t) know

Do we know why the monster exists and why there’s moonshine around it?

The answer depends on whether or not you believe that vertex operator algebras are natural, elegant and inescapable objects.

the monster

Simple groups often arise from symmetries of exceptionally nice mathematical objects.

The smallest of them all, $A_5$, gives us the rotation symmetries of the icosahedron. The next one, Klein’s simple group $L_2(7)$, comes from the Klein quartic.

The smallest sporadic groups, the Mathieu groups, come from Steiner systems, and the Conway groups from the 24-dimensional Leech lattice.

What about the largest sporadic simple group, the monster $\mathbb{M}$?

In his paper What is … the monster? Richard Borcherds writes (among other characterisations of $\mathbb{M}$):

“3. It is the automorphism group of the monster vertex algebra. (This is probably the best answer.)”

“Unfortunately none of these definitions is completely satisfactory. At the moment all constructions of the algebraic structures above seem artificial; they are constructed as sums of two or more apparently unrelated spaces, and it takes a lot of effort to define the algebraic structure on the sum of these spaces and to check that the monster acts on the resulting structure.
It is still an open problem to find a really simple and natural construction of the monster vertex algebra.”

Here’s 2 minutes of John Conway on the “one thing” he really wants to know before he dies: why the monster group exists.

moonshine

Moonshine started off with McKay’s observation that 196884 (the first coefficient in the normalized j-function) is the sum 1+196883 of the dimensions of the two smallest simple representations of $\mathbb{M}$.

Soon it was realised that every conjugacy class of the monster has a genus zero group (or ‘moonshine group’) associated to it.

Borcherds proved the ‘monstrous moonshine conjectures’ asserting that the associated main modular function of such a group is the character series of the action of the element on the monster vertex algebra.

Here’s Borcherds’ ICM talk in Berlin on this: What is … Moonshine?.

Once again, the monster vertex algebra appears to be the final answer.

However, in characterising the 171 moonshine groups among all possible genus zero groups one has proved that they are all of the form:

(ii) : $(n|h)+e,g,\dots$

In his book Moonshine beyond the Monster, Terry Gannon writes:

“We now understand the significance, in the VOA or CFT framework, of transformations in $SL_2(\mathbb{Z})$, but (ii) emphasises that many modular transformations relevant to Moonshine are more general (called the Atkin-Lehner involutions).
Monstrous moonshine will remain mysterious until we can understand its Atkin-Lehner symmetries.”

# the moonshine picture – at last

The monstrous moonshine picture is the subgraph of Conway’s big picture consisting of all lattices needed to describe the 171 moonshine groups.

It consists of:

– exactly 218 vertices (that is, lattices), out of which

– 97 are number-lattices (that is of the form $M$ with $M$ a positive integer), and

– 121 are proper number-like lattices (that is of the form $M \frac{g}{h}$ with $M$ a positive integer, $h$ a divisor of $24$ and $1 \leq g \leq h$ with $(g,h)=1$).

The $97$ number lattices are closed under taking divisors, and the corresponding Hasse diagram has the following shape

Here, number-lattices have the same colour if they have the same local structure in the moonshine picture (that is, have a similar neighbourhood of proper number-like lattices).

There are 7 different types of local behaviour:

The white numbered lattices have no proper number-like neighbours in the picture.

The yellow number lattices (2,10,14,18,22,26,32,34,40,68,80,88,90,112,126,144,180,208 = 2M) have local structure

$\xymatrix{M \ar@{-}[r] & \color{yellow}{2M} \ar@{-}[r] & M \frac{1}{2}}$

which involves all $2$-nd (square) roots of unity centered at the lattice.

The green number lattices (3,15,21,39,57,93,96,120 = 3M) have local structure

$\xymatrix{& M \ar@[red]@{-}[d] & \\ M \frac{1}{3} \ar@[red]@{-}[r] & \color{green}{3M} \ar@[red]@{-}[r] & M \frac{2}{3}}$

which involves all $3$-rd roots of unity centered at the lattice.

The blue number lattices (4,16,20,28,36,44,52,56,72,104 = 4M) have as local structure

$\xymatrix{M \frac{1}{2} \ar@{-}[d] & & M \frac{1}{4} \ar@{-}[d] \\ 2M \ar@{-}[r] & \color{blue}{4M} \ar@{-}[r] & 2M \frac{1}{2} \ar@{-}[d] \\ M \ar@{-}[u] & & M \frac{3}{4}}$

and involve the $2$-nd and $4$-th roots of unity centered at the lattice.

The purple number lattices (6,30,42,48,60 = 6M) have local structure

$\xymatrix{& M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} & M \frac{1}{6} \ar@[red]@{-}[d] & \\ M \ar@[red]@{-}[r] & 3M \ar@{-}[r] \ar@[red]@{-}[d] & \color{purple}{6M} \ar@{-}[r] \ar@[red]@{-}[u] \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@[red]@{-}[r] \ar@[red]@{-}[d] & M \frac{5}{6} \\ & M \frac{2}{3} & 2M \frac{2}{3} & M \frac{1}{2} & }$

and involve all $2$-nd, $3$-rd and $6$-th roots of unity centered at the lattice.

The unique brown number lattice 8 has local structure

$\xymatrix{& & 1 \frac{1}{4} \ar@{-}[d] & & 1 \frac{1}{8} \ar@{-}[d] & \\ & 1 \frac{1}{2} \ar@{-}[d] & 2 \frac{1}{2} \ar@{-}[r] \ar@{-}[d] & 1 \frac{3}{4} & 2 \frac{1}{4} \ar@{-}[r] & 1 \frac{5}{8} \\ 1 \ar@{-}[r] & 2 \ar@{-}[r] & 4 \ar@{-}[r] & \color{brown}{8} \ar@{-}[r] & 4 \frac{1}{2} \ar@{-}[d] \ar@{-}[u] & \\ & & & 1 \frac{7}{8} \ar@{-}[r] & 2 \frac{3}{4} \ar@{-}[r] & 1 \frac{3}{8}}$

which involves all $2$-nd, $4$-th and $8$-th roots of unity centered at $8$.

Finally, the local structure for the central red lattices $12,24 = 12M$ is

$\xymatrix{ M \frac{1}{12} \ar@[red]@{-}[dr] & M \frac{5}{12} \ar@[red]@{-}[d] & M \frac{3}{4} \ar@[red]@{-}[dl] & & M \frac{1}{6} \ar@[red]@{-}[dr] & M \frac{1}{2} \ar@[red]@{-}[d] & M \frac{5}{6} \ar@[red]@{-}[dl] \\ & 3M \frac{1}{4} \ar@{-}[dr] & 2M \frac{1}{6} \ar@[red]@{-}[d] & 4M \frac{1}{3} \ar@[red]@{-}[d] & 2M \frac{1}{3} \ar@[red]@{-}[d] & 3M \frac{1}{2} \ar@{-}[dl] & \\ & 2M \frac{1}{2} \ar@[red]@{-}[r] & 6M \frac{1}{2} \ar@{-}[dl] \ar@[red]@{-}[d] \ar@{-}[r] & \color{red}{12M} \ar@[red]@{-}[d] \ar@{-}[r] & 6M \ar@[red]@{-}[d] \ar@{-}[dr] \ar@[red]@{-}[r] & 2M & \\ & 3M \frac{3}{4} \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & 2M \frac{5}{6} & 4M \frac{2}{3} & 2M \frac{2}{3} & 3M \ar@[red]@{-}[dl] \ar@[red]@{-}[d] \ar@[red]@{-}[dr] & \\ M \frac{1}{4} & M \frac{7}{12} & M \frac{11}{12} & & M \frac{1}{3} & M \frac{2}{3} & M}$

It involves all $2$-nd, $3$-rd, $4$-th, $6$-th and $12$-th roots of unity with center $12M$.

No doubt this will be relevant in connecting moonshine with non-commutative geometry and issues of replicability as in Plazas’ paper Noncommutative Geometry of Groups like $\Gamma_0(N)$.

Another of my pet follow-up projects is to determine whether or not the monster group $\mathbb{M}$ dictates the shape of the moonshine picture.

That is, can one recover the 97 number lattices and their partition in 7 families starting from the set of element orders of $\mathbb{M}$, applying some set of simple rules?

One of these rules will follow from the two equivalent notations for lattices, and the two different sets of roots of unity centered at a given lattice. This will imply that if a number lattice belongs to a given family, certain divisors and multiples of it must belong to related families.

If this works out, it may be a first step towards a possibly new understanding of moonshine.