what have quivers done to students?

A few years ago a student entered my office asking for suggestions for his master's thesis.

“I’m open to any topic as long as it has nothing to do with those silly quivers!”

At that time not the best of opening lines to address me with and, inevitably, the most disastrous teacher-student-conversation-ever followed (also on my part, I'm sorry to say).

This week, Markus Reineke had a similar, though less confrontational, experience. Markus gave a mini-course on ‘moduli spaces of representations’ in our advanced master class. Students loved the way he introduced representation varieties and constructed the space of irreducible representations as a GIT-quotient. In fact, his course was probably the first in that program having an increasing (rather than decreasing) number of students attending throughout the week…

In his third lecture he wanted to illustrate these general constructions and what better concrete example to take than representations of quivers? Result : students’ eyes staring blankly at infinity…

What is it that quivers do to have this effect on students?

Perhaps quiver-representations cause them an information-overload.

Perhaps we should take plenty of time to explain that in going from the quiver (the directed graph) to the path algebra, vertices become idempotents and arrows the remaining generators. These idempotents split a representation space into smaller vertex-spaces, the dimensions of which we collect in a dimension-vector. The big basechange group therefore splits into a product of small vertex-basechanges, and this product acts on the matrix corresponding to an arrow by basechange: the target factor on one side, the inverse of the source factor on the other (plain conjugation when the arrow is a loop), etc. etc. Blatant trivialities to someone breathing quivers, but probably we too had to take plenty of time once to disentangle this information-package…
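For what it's worth, the whole information-package fits in a dozen lines of code. Here is a minimal numpy sketch (the quiver, the dimension-vector and the random seed are made up purely for illustration) of a representation of a quiver on two vertices with one arrow and one loop, together with the basechange action:

```python
import numpy as np

rng = np.random.default_rng(0)

# a quiver on vertices 0 and 1: one arrow 0 -> 1 and one loop at 1
arrows = [(0, 1), (1, 1)]
dim = [2, 3]  # dimension-vector: a vertex-space C^d_v at each vertex v

# a representation assigns a (d_target x d_source) matrix to each arrow
rep = [rng.standard_normal((dim[t], dim[s])) for (s, t) in arrows]

# the basechange group GL(d_0) x GL(d_1) acts arrow-wise
def act(g, rep):
    return [g[t] @ M @ np.linalg.inv(g[s]) for M, (s, t) in zip(rep, arrows)]

g = [rng.standard_normal((d, d)) + 3 * np.eye(d) for d in dim]  # generic, hence invertible
new_rep = act(g, rep)

# on a loop the action is plain conjugation by the vertex basechange
print(np.allclose(new_rep[1], g[1] @ rep[1] @ np.linalg.inv(g[1])))  # True
```

Acting by the tuple of identity matrices returns the representation unchanged, as it should.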

But then, perhaps they consider quivers and their representations as too-concrete-old-math-stuff, when there’s so much high-profile-fancy-math still left to taste.

When given the option, students prefer you to tell them monstrous-moonshine stories even though they can barely prove simplicity of $A_5$, they want you to give them a short-cut to the Langlands programme but have never had the patience nor the interest to investigate the splitting of primes in quadratic number fields, they want to be taught schemes and their structure sheaves when they still struggle with the notion of a dominant map between varieties…

In short, students often like to run before they can crawl.

Working through the classification of some simple quiver-settings would force their agile feet firmly on the ground. They probably experience this as a waste of time.

Perhaps, it is time to promote slow math…

Noncommutative algebra and geometry master-degree

The lecturers, topics and dates of the 6 mini-courses in our ‘advanced master degree 2011 in noncommutative algebra and geometry’ are :

February 21-25
Vladimir Bavula (University of Sheffield) :
Localization Theory of Rings and Modules

March 7-11
Hans-Jürgen Schneider (University of München) :
Nichols Algebra and Root Systems

April 11-12
Bernhard Keller (Université Paris VII):
Cluster Algebras and Quantum Cluster Algebras

April 18-22
Jacques Alev (Université Reims):
Automorphisms of some Basic Algebras

May 3-8
Goro Kato (Cal Poly University, San Luis Obispo, US):
Sheaf Cohomology and Zeta-Functions

May 9-13
Markus Reineke (University of Wuppertal):
Moduli Spaces of Representations

More information can be found here. I’ve been told that some limited support is available for foreign graduate students wanting to attend this programme.

Monstrous frustrations

Thanks for clicking through… I guess.

If nothing else, it shows that just as the stock market is fueled by greed, mathematical research is driven by frustration (or by the pleasure gained from knowing others to be frustrated).

I did spend the better part of the day doing a lengthy, if not laborious, calculation I've been postponing for several years now. Partly because I didn't know how to start performing it (though the basic strategy was clear), partly because I knew beforehand that the final answer would probably offer me no further insight.

Still, it gives the final answer to a problem that may be of interest to anyone vaguely interested in Moonshine :

What does the Monster see of the modular group?

I know at least two of you, occasionally reading this blog, understand what I was trying to do and may now wonder how to repeat the straightforward calculation. Well the simple answer is : Google for the number 97239461142009186000 and, no doubt, you will be able to do the computation overnight.

One word of advice : don't! Get some sleep instead, or make love to your partner, because all you'll get is a quiver on nine vertices (which is pretty good for the Monster) but with a horrible number of loops and arrows…

If someone wants the details on all of this, just ask. But, if you really want to get me excited : find a moonshine reason for one of the following two numbers :

$791616381395932409265430144165764500492 = 2^2 \cdot 11 \cdot 293 \cdot 61403690769153925633371869699485301 $

(the dimension of the monster-singularity up to smooth equivalence), or,

$1575918800531316887592467826675348205163 = 523 \cdot 1655089391 \cdot 15982020053213 \cdot 113914503502907 $

(the dimension of the moduli space).

neverendingbooks-geometry (2)

Here are pdf-files of older NeverEndingBooks-posts on geometry. For more recent posts go here.


down with determinants

The n-Category Café has a guest post by Tom Leinster, Linear Algebra Done Right, on the book with the same title by Sheldon Axler. I haven't read the book but glanced through his online paper Down with determinants!. Here is 'his' proof of the fact that any n by n matrix A has at least one eigenvector. Take a vector $v \in \mathbb{C}^n $; as the collection of n+1 vectors $\{ v, A.v, A^2.v, \ldots, A^n.v \} $ must be linearly dependent, there are complex numbers $a_i \in \mathbb{C} $, not all zero, such that $~(a_0 + a_1 A + a_2 A^2 + \ldots + a_n A^n).v = \vec{0} \in \mathbb{C}^n $. As $\mathbb{C} $ is algebraically closed, the polynomial on the left factors into linear factors $a_0 + a_1 x + a_2 x^2 + \ldots + a_n x^n = c (x-r_1)(x-r_2) \ldots (x-r_n) $, and therefore $c(A-r_1I_n)(A-r_2I_n) \ldots (A-r_nI_n).v = \vec{0} $, from which it follows that at least one of the linear transformations $A-r_j I_n $ has a non-trivial kernel, whence A has an eigenvector with eigenvalue $r_j $. Okay, fine, nice even, but does this simple-minded observation warrant the extreme conclusion of his paper (on page 18)?

As mathematicians, we often read a nice new proof of a known theorem, enjoy the different approach, but continue to derive our internal understanding from the method we originally learned. This paper aims to change drastically the way mathematicians think about and teach crucial aspects of linear algebra.

The simple proof of the existence of eigenvalues given in Theorem 2.1 should be the one imprinted in our minds, written on our blackboards, and published in our textbooks. Generalized eigenvectors should become a central tool for the understanding of linear operators. As we have seen, their use leads to natural definitions of multiplicity and the characteristic polynomial. Every mathematician and every linear algebra student should at least remember that the generalized eigenvectors of an operator always span the domain (Proposition 3.4)—this crucial result leads to easy proofs of upper-triangular form (Theorem 6.2) and the Spectral Theorem (Theorems 7.5 and 8.3).

Determinants appear in many proofs not discussed here. If you scrutinize such proofs, you’ll often discover better alternatives without determinants. Down with Determinants!
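The determinant-free eigenvector argument is easy to check numerically. A minimal numpy sketch (the matrix, the seed and the tolerance are made up for illustration; the last line verifies that at least one $A - r_j I_n $ is singular by looking at its smallest singular value):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

# the n+1 vectors v, Av, ..., A^n v are linearly dependent in C^n;
# a dependence (a_0, ..., a_n) spans the kernel of this n x (n+1) matrix
K = np.column_stack([np.linalg.matrix_power(A, k) @ v for k in range(n + 1)])
a = np.linalg.svd(K)[2][-1]  # right singular vector for the smallest singular value

# roots r_1, ..., r_n of a_0 + a_1 x + ... + a_n x^n (np.roots wants highest degree first)
roots = np.roots(a[::-1])

# at least one factor A - r_j I_n kills a nonzero vector, so it must be singular:
# its smallest singular value is numerically zero
smallest = min(np.linalg.svd(A - r * np.eye(n), compute_uv=False)[-1] for r in roots)
assert smallest < 1e-6
```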

I welcome all new proofs of known results as they allow instructors to choose the one best suited to their students (preferably giving more than one proof, showing that there is no such thing as 'the best way' to prove a mathematical result). What worries me is Axler's attitude, shared by extremists and dogmatics world-wide : they are so blinded by their own right that they impoverish their own lives (and, if they had their way, also those of others) by refusing to consider alternatives. A few other comments :

  1. I would be far more impressed if he had given a short argument for the one line he skates over in his proof, that of $\mathbb{C} $ being algebraically closed. Does anyone give a proof of this fact anymore, or is this one of the few facts we expect first year students to accept on faith?

  2. I don't understand this aversion to the determinant (probably because of its nonlinear character) combined with having no problem at all with successive powers of matrices. Surely he knows that the determinant is a fixed $\mathbb{Q} $-polynomial in the traces (which are linear!) of powers of the matrix.

  3. The essence of linear algebra is that by choosing a basis cleverly one can express a linear operator in an extremely nice matrix form (a canonical form) so that all computations become much easier. This crucial idea of considering different bases and their basechange seems to be missing from Axler's approach. Moreover, I would have thought that everyone would know these days that 'linear algebra done right' is a well developed topic called 'representation theory of quivers', but I realize this might be viewed as a dogmatic statement. Fortunately someone else is giving the basic linear algebra courses here in Antwerp, so students are spared my private obsessions (at least the first few years…). In [his post](http://golem.ph.utexas.edu/category/2007/05/linear_algebra_done_right.html) Leinster asks "What are determinants good for?" I cannot resist mentioning a trivial observation I made last week when thinking once again about THE rationality problem, and which may be well known to others. Recall from the previous post that rationality of the quotient variety of matrix-couples $~(A,B) \in M_n(\mathbb{C}) \oplus M_n(\mathbb{C}) / GL_n $ under _simultaneous conjugation_ is a very hard problem. On the other hand, the 'near miss' problem of the quotient variety of matrix-couples $\{ (A,B)~|~\det(A)=0~\} / GL_n $ is completely trivial : it is rational for all n. Here is a one-line proof. Consider the quiver $\xymatrix{\vtx{} \ar@/^2ex/[rr] & & \vtx{} \ar@(ur,dr) \ar@/^2ex/[ll]} $; then the dimension vector (n-1,n) is a Schur root and the first fundamental theorem of $GL_n $ (see for example Hanspeter Kraft's excellent book on invariant theory) asserts that the corresponding quotient variety is the one above. The result then follows from Aidan Schofield's paper Birational classification of moduli spaces of representations of quivers. Btw. in this special case one does not have to use the full force of Aidan's result.
Zinovy Reichstein, who keeps me updated on events in Atlanta, emailed the following elegant short proof : Here is an outline of a geometric proof. Let $X = \{ (A, B) : \det(A) = 0 \} \subset M_n^2 $ and $Y = \mathbb{P}^{n-1} \times M_n $. Applying the no-name lemma to the $PGL_n $-equivariant dominant rational map $~X \rightarrow Y $ given by $~(A, B) \mapsto (Ker(A), B) $ (which makes X into a vector bundle over a dense open $PGL_n $-invariant subset of Y), we see that $X//PGL_n $ is rational over $Y//PGL_n $. On the other hand, $Y//PGL_n = M_n//PGL_n $ is an affine space. Thus $X//PGL_n $ is rational. The moment I read this I knew how to do this quiver-wise, and that it is just another Brauer-Severi type argument, so completely inadequate to help settle the genuine matrix-problem. Update on the paper by Esther Beneish : Esther did submit the paper in February.
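As for the claim that the determinant is a fixed $\mathbb{Q} $-polynomial in the traces of powers : for a 3 by 3 matrix, Newton's identities give $det(A) = \frac{1}{6}(p_1^3 - 3 p_1 p_2 + 2 p_3) $ with $p_k = Tr(A^k) $. A quick numerical sanity check (the matrix and seed are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

# traces of powers p_k = Tr(A^k)
p1 = np.trace(A)
p2 = np.trace(A @ A)
p3 = np.trace(A @ A @ A)

# Newton's identities for a 3x3 matrix: det(A) = (p1^3 - 3 p1 p2 + 2 p3) / 6
det_from_traces = (p1**3 - 3 * p1 * p2 + 2 * p3) / 6
print(np.isclose(det_from_traces, np.linalg.det(A)))  # True
```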

THE rationality problem

This morning, Esther Beneish arXived the paper The center of the generic algebra of degree p, which may contain the most significant advance in my favourite problem for over 15 years! In it she claims to prove that the center of the generic division algebra of degree p is stably rational for all prime values p. Let me begin by briefly explaining what the problem is all about. Consider one n by n matrix A which is sufficiently general; then it will have all its eigenvalues distinct, and it is then, via the Jordan normal form theorem, uniquely determined up to conjugation (that is, base change) by its characteristic polynomial. In other words, the conjugacy class of a sufficiently general n by n matrix depends freely on the coefficients of the characteristic polynomial (which are the n elementary symmetric functions in the eigenvalues of the matrix). Now what about couples of n by n matrices (A,B) under simultaneous conjugation (that is, all couples of the form $~(g A g^{-1}, g B g^{-1}) $ for some invertible n by n matrix g)? So, does there exist a sort of Jordan normal form for couples of n by n matrices which are sufficiently general? That is, is there a set of invariants for such couples which determines them freely up to simultaneous conjugation?

For couples of 2 by 2 matrices, Claudio Procesi rediscovered an old result due to James Sylvester saying that this is indeed the case and that the set of invariants consists of the five invariants Tr(A), Tr(B), Det(A), Det(B) and Tr(AB). Now, Claudio did a lot more in his paper. He showed that if you can prove this for couples of matrices, you can also do it for triples, quadruples, even any k-tuples of n by n matrices under simultaneous conjugation. He also related this problem to the center of the generic division algebra of degree n (which was introduced earlier by Shimshon Amitsur in a rather cryptic manner; for a while Amitsur simply refused to believe Claudio's description of this division algebra as the one generated by two _generic_ n by n matrices, that is, matrices filled with independent variables). Claudio also gave the description of the center of this algebra as a field of lattice-invariants (over the symmetric group S(n)) which was crucial in subsequent investigations. If you are interested in the history of this problem, its connections with Brauer group problems and invariant theory, and a short description of the tricks used in proving the results I'll mention below, you might have a look at the talk Centers of Generic Division Algebras, the rationality problem 1965-1990 I gave in Chicago in 1990.
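Sylvester's five invariants are easy to play with on a computer. Here is a minimal numpy check (matrices, names and seed made up for illustration) that they are indeed unchanged under simultaneous conjugation; that they freely determine a sufficiently general couple is, of course, the hard part of the result:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
g = rng.standard_normal((2, 2)) + 3 * np.eye(2)  # generic, hence invertible
gi = np.linalg.inv(g)

def invariants(A, B):
    # the five Sylvester/Procesi invariants of a couple of 2x2 matrices
    return np.array([np.trace(A), np.trace(B),
                     np.linalg.det(A), np.linalg.det(B),
                     np.trace(A @ B)])

# simultaneous conjugation (A, B) -> (g A g^{-1}, g B g^{-1})
same = np.allclose(invariants(A, B), invariants(g @ A @ gi, g @ B @ gi))
print(same)  # True
```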

The case of couples of 3 by 3 matrices was finally settled in 1979 by Ed Formanek, and a year later he was able to solve the case of couples of 4 by 4 matrices as well in a fabulous paper. In it, he used solvability of S(4) in an essential way, thereby hinting at the possibility that the problem might no longer have an affirmative answer for larger values of n. When I read his 4x4 paper I believed that someone able to prove such a result must have an awesome insight into the inner workings of matrices, and decided to dedicate myself to this problem the moment I got a permanent job… . But even then it is a reckless thing to do. Spending all of your time on such a difficult problem can be frustrating, as there is no guarantee you'll ever write a paper. Sure, you can find translations of the problem and, as with all good problems, it will have connections with other subjects such as moduli spaces of vector bundles and of quiver representations, but to do the 'next number' is another matter.

Fortunately, in early 1990, Christine Bessenrodt and I were able to do the next two 'prime cases' : couples of 5 by 5 and couples of 7 by 7 matrices (Katsylo and Aidan Schofield had already proved that if you could do it for couples of k by k and of l by l matrices, with k and l coprime, then you could also do it for couples of kl by kl matrices, so the n=6 case was already done). Or did we? Well, not quite : our methods only allowed us to prove that the center is stably rational, that is, it becomes rational after freely adjoining extra variables. There are examples known of stably rational fields which are NOT rational, but I guess most experts believe that in the case of matrix-invariants stable rationality will imply rationality. After this paper both Christine and myself decided to do other things, as we believed we had reached the limits of what the lattice-method could do and thought a new idea was required to go further. If today's paper by Esther turns out to be correct, we were wrong. The next couple of days/weeks I'll have a go at her paper, but as my lattice-tricks are pretty rusty this may take longer than expected. Still, I see that in a couple of weeks there will be a meeting in Atlanta where Esther and all experts in the field will be present (among them David Saltman and Jean-Louis Colliot-Thélène), so we will know one way or the other pretty soon. I sincerely hope Esther's proof will stand the test, as she was the only one courageous enough to devote herself entirely to the problem, regardless of slow progress.