Can
it be that one forgets an entire proof because the result doesn’t seem
important or relevant at the time? It seems the only logical explanation
for what happened last week. Raf Bocklandt asked me whether a
classification was known of all group algebras l G which are
noncommutative manifolds (that is, which are formally smooth a la Kontsevich-Rosenberg or, equivalently, quasi-free
a la Cuntz-Quillen). I said I didn’t know the answer and that it looked
like a difficult problem but at the same time it was entirely clear to
me how to attack this problem, even which book I needed to have a look
at to get started. And, indeed, after a visit to the library borrowing
Warren Dicks
lecture notes in mathematics 790 “Groups, trees and projective
modules” and browsing through it for a few minutes I had the rough
outline of the classification. As the proof is basically a two-liner I
might as well sketch it here.
If l G is quasi-free it
must be hereditary so the augmentation ideal must be a projective
module. But Martin Dunwoody proved that this is equivalent to
G being a group acting on a (usually infinite) tree with finite
vertex-stabilizers, all of whose orders are invertible in the
basefield l. Hence, by Bass-Serre theory G is the
fundamental group of a graph of finite groups (all orders being units in
l) and using this structural result it is then not difficult to
show that the group algebra l G does indeed have the lifting
property for morphisms modulo nilpotent ideals and hence is
quasi-free.
If l has characteristic zero (hence the
extra order conditions are void) one can invoke a result of Karrass
saying that quasi-freeness of l G is equivalent to G being
virtually free (that is, G has a free subgroup of finite
index). There are many interesting examples of virtually free groups.
One source is the discrete subgroups commensurable with SL(2,Z)
(among them all groups appearing in monstrous moonshine); another
comes from the classification of rank two vector bundles over
smooth projective curves over finite fields (see the later chapters of
Serre's Trees). So
one can use non-commutative geometry to study the finite dimensional
representations of virtually free groups generalizing the approach with
Jan Adriaenssens in Non-commutative covers and the modular group (btw.
Jan claims that a revision of this paper will be available soon).
In order to avoid that I forget all of this once again, I’ve
written over the last couple of days a short note explaining what I know
of representations of virtually free groups (or more generally of
fundamental algebras of finite graphs of separable
l-algebras). I may (or may not) post this note on the arXiv in
the coming weeks. But, if you have a reason to be interested in this,
send me an email and I’ll send you a sneak preview.
After yesterday’s post I had to explain today what
point-modules and line-modules are and that one can really
describe them as points in a (commutative) variety. Seemingly, the
present focus on categorical methods scares possibly interested students
away and none of them seems to know that this non-commutative projective
algebraic geometry once dealt with very concrete examples.
Let
us fix the setting: A will be a quadratic algebra, that is, A is
a positively graded algebra with part of degree zero the basefield k,
generated by its homogeneous part A_1 of degree one (which we take to be
of k-dimension n+1) and with all defining relations quadratic in these
generators. Take m k-independent linear terms (that is, elements of A_1)
l1,…,lm and consider the graded left A-module
L = A/(Al1 + ... + Alm)
Clearly, the Hilbert series of this
module (that is, the formal power series in t with coefficient of t^a
the k-dimension of the homogeneous part of L of degree a) starts off
with
Hilb(L,t) = 1 + (n+1-m) t + ...
and
we call L a linear d-dimensional module if the Hilbert series is
the power series expansion of
1/(1-t)^{d+1} = 1 + (d+1)t + (d+1)(d+2)/2 t^2 + ...
In particular, if d=0 (that is, m=n) then L
is said to be a point-module and if d=1 (that is, m=n-1) then L
is said to be a line-module. To a d-dimensional linear module L
one can associate a d-dimensional linear subspace of ordinary (that is,
commutative) projective n-space P^n. To do this, identify
P^n = P(A_1^*)
the projective space of the (n+1)-dimensional space of
linear functions on the homogeneous part of degree one. Then each of the
linear elements li determines a hyperplane V(li) in P^n and the
intersection of the m hyperplanes V(l1),…,V(lm) is the wanted
subspace. In particular, to a point-module corresponds a point in
P^n and to a line-module a line in P^n. So, where
is the non-commutativity of A hidden? Well, if P is a point-module
P = P0 + P1 + P2 +...
(with all components P_a one dimensional)
then the twisted module
P' = P1 + P2 + P3 + ...
is
again a point-module and the map P -> P' defines an automorphism on the
point variety. In low dimensions, it is often possible to
reconstruct A from the point-variety and automorphism. In higher
dimensions, one has to consider also the higher dimensional linear
modules.
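The dimension count behind these Hilbert series is easy to verify numerically. Here is a short sketch (in Python rather than the Perl used elsewhere on this blog; the function name is mine) expanding 1/(1-t)^{d+1} via binomial coefficients:

```python
from math import comb

def linear_module_hilbert(d, terms=6):
    """First coefficients of 1/(1-t)^(d+1):
    the degree-a part of a linear d-dimensional module has dimension C(d+a, d)."""
    return [comb(d + a, d) for a in range(terms)]

# point-module (d = 0): every homogeneous part is one-dimensional
print(linear_module_hilbert(0))  # [1, 1, 1, 1, 1, 1]
# line-module (d = 1): dimensions grow linearly
print(linear_module_hilbert(1))  # [1, 2, 3, 4, 5, 6]
```

For d=0 one recovers the constant series of a point-module, for d=1 the linear growth of a line-module.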
When I explained all this (far more clumsily, as it had been a
long time since I worked with this) I was asked for an elementary text
on all this. ‘Why hasn’t anybody written a book on all this?’ Well,
Paul Smith wrote such a book so have a look at his
homepage. But then, it turned out that the version one can download from
one of his course pages is a more recent and a lot more
categorical version. There is no longer any sign of a useful book on
non-commutative projective spaces and their linear modules which might
give starting students an interesting way to learn some non-commutative
algebra and the beginnings of algebraic geometry (commutative and
non-commutative). So, hopefully Paul still has the old version around
and will make it available… The only webpage on this I could find in
short time are the slides of a talk by Michaela Vancliff.
One
of the best collections of links to homepages of people working in
non-commutative algebra and/or geometry is maintained by Paul Smith. At regular intervals I use it to check
up on some people, usually in vain as nobody seems to update their
homepage… So, today I wrote a simple spider to check for updates in
this list. The idea is simple: it tries to get the link (when this
fails it reports that the link seems to be broken) and saves a text-copy
of the page (using lynx) to disc, which it will compare against on a
future check-up using diff. Btw. for OS X people I got
lynx from the Fink Project. It then collects all data (broken
links, time of last visit and time of last change and recent updates) in
RSS-feeds for which an HTML-version is maintained at the geoMetry-site, again
using server side includes. If you see a 1970-date this means that I
have never detected a change since I let this spider loose (today).
Also, the list of pages is not alphabetical; even to me it is a surprise
how the next list will look. As I check for changes with diff, the
claimed number of changed lines is far from accurate (the total number
of lines from the first change to the end of the file might be a better
approximation of reality… I will change this soon).
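The change-detection step can be sketched as follows (in Python rather than the Perl of the actual spider; the function names are mine). It shows both the raw diff line count and the "first change to end of file" measure suggested above:

```python
import difflib

def changed_line_count(old, new):
    """Count changed lines roughly the way `diff` reports them."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

def lines_from_first_change(old, new):
    """The alternative measure: lines from the first change to the end of the file."""
    old_lines, new_lines = old.splitlines(), new.splitlines()
    for i, (a, b) in enumerate(zip(old_lines, new_lines)):
        if a != b:
            return len(new_lines) - i
    # common prefix identical: only appended lines count
    return max(len(new_lines) - len(old_lines), 0)
```

On a page where one middle line changed, the two measures happen to agree; on a page edited near the top they diverge quickly.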
Clearly,
all of this is still experimental so please give me feedback if you
notice something wrong with these lists. Also I plan to extend this list
substantially over the next weeks (for example, Paul Smith himself is
not present in his own list…). So, if you want your pages to be
included, let me know at lieven.lebruyn@ua.ac.be.
For those on Paul\’s list, if you looked at your log-files today
you may have noticed a lot of traffic from www.matrix.ua.ac.be as
I was testing the script. I\’ll keep my further visits down to once a
day, at most…
If
you are interested in getting daily RSS-feeds of one (or more) of the
following arXiv
sections : math.RA, math.AG, math.QA and
math.RT you can point your news-aggregator to
www.matrix.ua.ac.be. Most of the solution to my first
Perl-exercise I explained already yesterday, but the current program
has a few changes. First, my idea was to scrape the recent-files
from the arXiv, for example for math.RA I would get http://www.arxiv.org/list/math.RA/recent but this
contains only titles, authors and links but no abstracts of the papers.
So I thought I had to scrape for the URLs of these papers and then
download each of the abstracts-files. Fortunately, I found a way around
this. There is a lesser known way to get at all abstracts from
math of the current day (or the few last days) by using the Catchup interface. The syntax of this interface is
as follows : for example to get all math-papers with
abstracts posted on April 2, 2004 you have to get the page with
URL
http://www.arxiv.org/catchup?smonth=04&sday=02&num=50&archive=math&method=with&syear=2004
so in order to use it I had
to find a way to parse the present day into a numeric
day,month,year format. This is quite easy as there is the very
well documented Date::Manip module in Perl. Another problem with
the arXiv is that there are no posts on the weekend. I worked around
this by requesting the Catchup starting from the previous
business day (an option of the DateCalc function). This means
that over the weekend I get the RSS feeds of papers posted on Friday, on
Monday I'll get those of Friday & Monday, and on all other days I'll get
those of today & yesterday. But it is easy to change the script to allow
for a longer period so please tell me if you want to have RSS-feeds for
the last 3 or 4 days. Also, if you need feeds for other sections that
can easily be done, so tell me.
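The date arithmetic the script needs can be sketched in a few lines (shown here with Python's datetime instead of Perl's Date::Manip; the function names and the num=50 default are my choices, the URL syntax is the Catchup interface described above):

```python
from datetime import date, timedelta

def previous_business_day(d):
    """Step back to the last weekday strictly before d
    (the arXiv posts nothing on weekends)."""
    d -= timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def catchup_url(d, archive="math", num=50):
    """Build the arXiv Catchup URL for a given date."""
    return ("http://www.arxiv.org/catchup?smonth=%02d&sday=%02d&num=%d"
            "&archive=%s&method=with&syear=%d"
            % (d.month, d.day, num, archive, d.year))

# on Monday April 5, 2004, this starts the Catchup from Friday April 2
start = previous_business_day(date(2004, 4, 5))
print(catchup_url(start))
```

Requesting from the previous business day is what makes the Friday papers show up in the weekend and Monday feeds.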
Here are the URLs to give to
your news-aggregator for these sections :
math.RA at
http://www.matrix.ua.ac.be/arxivRSS/mathRA/
math.QA at
http://www.matrix.ua.ac.be/arxivRSS/mathQA/
math.RT at
http://www.matrix.ua.ac.be/arxivRSS/mathRT/
math.AG at
http://www.matrix.ua.ac.be/arxivRSS/mathAG/
If
your news-aggregator is not clever then you may have to add an
additional index.xml at the end. If you like to use these feeds
on a Mac, a good free news-aggregator is NetNewsWire Lite. To get at the above feeds, click on the Subscribe
button and copy one of the above links in the pop-up window. I
don\’t think my Perl-script breaks the Robots Beware rule of the arXiv. All it does it to download one page a day
using their Catchup-Method. I still have to set up a cron-job to
do this daily, but I have to find out at which (local)time at night the
arXiv refreshes its pages…
As
far as I know (but I am fairly ignorant) the arXiv does not
provide RSS feeds for a particular section, say math.RA. Still, it would be a good idea for anyone
having a news aggregator to follow some weblogs and
news-channels offering RSS syndication. So I decided to write one as my
first Perl-exercise and to my own surprise I have after a few hours work
a prototype-scraper for math.RA. It is not yet perfect: I still
have to convert the local URLs to global URLs so that they can be
clicked and at the moment I have only collected the titles, authors and
abstract-links whereas it would make more sense to include the full
abstract in the RSS feed, but give me a few more days…
The
basic idea is fairly simple (and based on an O'Reilly hack).
One uses the Template::Extract module to
extract the goodies from the arXiv's template HTML. Maybe I am still
not used to Perl-documentation but it was hard for me to work out how to
do this in detail either from the hack or the online
module-documentation. Fortunately there is a good Perl Advent
Calendar page giving me the details that I needed. Once one has this
info one can turn it into a proper RSS-page using the XML::RSS-module.
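For comparison, here is roughly what that last step amounts to, sketched in Python with the standard-library ElementTree instead of XML::RSS (the function and field names are mine):

```python
import xml.etree.ElementTree as ET

def build_rss(channel_title, items):
    """Minimal RSS 2.0 feed from (title, link, description) tuples --
    a rough stand-in for what XML::RSS produces."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for title, link, description in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("math.RA papers",
                 [("A paper", "http://arxiv.org/abs/math/0404001", "An abstract")])
```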
In fact, I spent far
more time trying to get XML::RSS installed under OS X than
writing the code. The usual method, that is via
iMacLieven:~ lieven$ sudo /usr/bin/perl -MCPAN -e shell
Terminal does not support AddHistory.
cpan shell -- CPAN exploration and modules installation (v1.76)
ReadLine support available (try 'install Bundle::CPAN')
cpan> install XML::RSS
failed and even a
manual install for which the drill is : download the package from CPAN, go to the
extracted directory and give the commands
sudo /usr/bin/perl Makefile.PL
sudo make
sudo make test
sudo make install
failed. Also a Google didn\’t give immediate results until
I did find this ADC page which set me on the right track.
It seems that the problem lies in installing XML::Parser, for which one first needs expat
to be installed. Now, the generic SourceForge page contains a
version for Linux but fortunately it is also part of the Fink
project so I did a
sudo fink install expat
which worked
without problems but afterwards I still was not able to install
XML::Parser because Fink installs everything in the /sw
tree. But after
sudo perl Makefile.PL EXPATLIBPATH=/sw/lib EXPATINCPATH=/sw/include
I finally got the manual installation
going. I will try to tidy up the script over the weekend…
I
just finished the formal lecture-part of the course Projects in
non-commutative geometry (btw. I am completely exhausted after this
afternoon's session but hopeful that some students actually may do
something with my crazy ideas), springtime seems to have arrived and
next week the easter-vacation starts so it may be time to have some fun
like making a new webpage (yes, again…). At the moment the main
matrix.ua.ac.be page is not really up to standards
and Raf and Hans will be using it soon for the information about the
Liegrits-project (at the moment they just have a beautiful logo). My aim is to make the main page the
starting page of the geoMetry site
(guess what M stands for ?) on which I want
to collect as much information as possible on non-commutative geometry.
To get at that info I plan to set some spiders or bots or
scrapers loose on the web (this is just an excuse to force myself
to learn Perl). But it seems one has to follow strict ethical guidelines
in doing so. One of the first sites I want to spider is clearly the arXiv but they have
a scary Robots Beware page! I don't know whether their
robots.txt file will allow me to get at any of
their goodies. In a robots.txt file the webmaster can list the
directories on his/her site which are off limits to robots, and as I
don't want to do anything that might make the arXiv unavailable
to me (or even worse, to the whole department) I had better follow
these guidelines. First site on my list to study tomorrow will be The
Web Robots Pages …
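As it happens, Python's standard library ships a robots.txt parser, which shows what such a politeness check looks like in code (the rules below are made up purely for illustration; the arXiv's real robots.txt may say something quite different):

```python
from urllib.robotparser import RobotFileParser

# a hypothetical robots.txt, NOT the arXiv's actual one
rules = """\
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("mybot", "http://www.arxiv.org/catchup"))    # True
print(rp.can_fetch("mybot", "http://www.arxiv.org/private/x"))  # False
```

A well-behaved spider would run a check like this before requesting any page.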