
neverendingbooks Posts

GAP on OS X


GAP, the Groups, Algorithms, and Programming tool
(developed by two groups, one in St. Andrews, the other in Aachen) is
the package if you want to work with (finite or finitely
presented) groups, but it also has routines for algebras, fields,
division algebras, Lie algebras and the like. For years it has been
available for the classic MacOS, but at the last clean install of my
computer I removed it, expecting a Mac OS X port to be
distributed soon. From time to time I checked the webpage at gap-system.org
but it seems that no one cared about OS X. For my “The book of
points”-project I need a system to produce lots of examples, so perhaps
one could just as well install the UNIX-version. Fortunately, a
last desperate Google search on GAP OS X brought me to the
Aachen pages of the GAP-group, where one seems to be more Macintosh
minded. The relevant page is the further notes for OS X on the
GAP-installation for UNIX-page. Here is what I did to get GAP running
under OS X. First go to the download page (btw. this page has
version 4.4 whereas St. Andrews is still distributing 4.3) and download
the files

gap4r4.tar.gz, packages-2004_01_27-11_37_UTC.tar.gz, xtom1r1.tar.gz

This will give you three tar-files on your Desktop. Fire
up the Terminal and make a new directory /usr/local/lib if
it doesn’t exist yet. Then, go to your Desktop folder and do

sudo cp gap4r4.tar /usr/local/lib
sudo cp xtom1r1.tar /usr/local/lib
cd /usr/local/lib
sudo tar xvf gap4r4.tar
sudo tar xvf xtom1r1.tar

Then return to your Desktop folder and copy the
remaining tar-file into the /usr/local/lib/gap4r4/pkg folder (which
is created by untarring the former two files) and untar it there as above.
Then, it is time to compile everything (assuming you have installed the
Developer Tools) and there is one magic OS X-command which will
speed up GAP by 20%. Here is what to do

cd /usr/local/lib/gap4r4
sudo ./configure
sudo make COPTS="-fast -mcpu=7450"

and everything will compile nicely. If you
are so lucky as to have a G5-system, you should replace the last command
by sudo make COPTS="-O3". Finally, get everything in the right
place

cd /usr/local/lib/gap4r4/bin
sudo cp gap.sh /usr/local/bin/gap

and if /usr/local/bin is in
your $PATH then typing gap at the command line will greet you with
the opening GAP-banner.
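
To check that the installation actually works, here is the kind of quick sanity check one can do at the GAP prompt (a minimal example; any recent GAP version should accept it):

gap> G := SymmetricGroup(5);;        # the symmetric group on 5 letters
gap> Size(G);
120
gap> IsSimpleGroup(AlternatingGroup(5));
true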


COLgo

COL is a map-coloring game invented by Colin Vout.
Two players Left (bLack) and Right (white) take turns in coloring the
map subject to the rule that no two neighboring regions may be colored
the same. The last player able to move wins the game. For my talk
on combinatorial game theory in two weeks, I chose a simplified
version of COL, namely COLgo, which is played with go-stones on a
(partial) go-board. Each spot has (at most) 4 neighbors (North, East, South and
West). For example, the picture on the left is a legal COLgo-position on
a 5×5 board. COL is a simple game to illustrate some of the key features
of game theory. In sharp contrast to other games, one has a general
result on the possible values of a COL-position : each position has
value $z$ or $z+\bigstar$ where $z$ is a (Conway)-number (that is, a
dyadic rational) and where $\bigstar$ is the fuzzy game {0|0}. In
the talk I will give a proof of this result (there are not so many
results in combinatorial game theory one can prove from scratch in 50
minutes but this is one of them). Of course, to illustrate the result I
had to find positions which have counter-intuitive values such as 1/2.
The picture on the left is an example of such a position on a 5×5 board
but surely one must be able to find 1/2-positions on a 4×4 board
(perhaps even on a 3×3?). If you have an example, please tell me.
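
For readers who have not seen Conway's notation before, the simplest values showing up here are (standard definitions, nothing specific to COL):

$0 = \{ \ \mid\ \}$, $\bigstar = \{0 \mid 0\}$, $1 = \{0 \mid \ \}$, $\tfrac{1}{2} = \{0 \mid 1\}$

so a position of value 1/2 is one whose best Left option has value 0 and whose best Right option has value 1.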

On a slightly different matter : I used the psgo.sty package in LaTeX to print the (partial)
go-boards and positions. If I ever write out the notes I’ll post them
here but they will be in Dutch.


the iTunes hack

If you
are interested in getting access to thousands of mp3-files on your computer
using only 128 Kb of disk space, read on! Yesterday I got my hands dirty and,
with Jan’s help, upgraded two 6 Gb colored iMacs (a blue and a
pink one) to potential servers for our home-network, now having an 80 Gb resp.
a 120 Gb hard disk. If you do the installation yourself such an upgrade
costs you roughly 1 Euro/Gigabyte, which seems to me like a good
investment. Clearly, you need to know how to do this and be less
hardware-phobic than I am. Fortunately, the first problem is easily
solved. There is plenty of good advice on the net : for the colored
iMacs we used the upgrade an iMac-page of MacWorld. For possible
later use, there is also a page for replacing the hard disk in an old iBook
(which seems already more challenging) and in a flat screen iMac (which seems to be impossible
without proper tools). Anyway, we followed the page and in no time
replaced the hard disks (along the way we made all possible mistakes,
like not connecting the new hard disk and then being surprised that
Disk Utility could not find it, or not putting back the RAM-chips and
panicking when the normal start-up chime was replaced by an aggressive
beep). A pleasant surprise was that the blue iMac, which I
thought to be dead, revived when we replaced the hard disk.

Back home, I dumped a good part of our CD-collection on the blue
iMac (1440 songs, good for 4.3 days of music and taking up 7.11 Gb of
the vast 120 Gb hard disk) to test the iTunes Central hack
explained by Alan Graham in his six
great tips for homemade dot mac servers
. Would I manage to get the
entire collection on my old iBook, which had only (after installing all
this WarWalking-software) 800 Mb of free disk space? Here is what
I did :

1. On the iBook (or any machine you want to
play this trick on) go to your Home/Music/iTunes-folder and drag
the two files and one directory it contains to the Trash. Do the
same for the two files com.apple.iTunes.eq.plist and
com.apple.iTunes.plist which are in the
Home/Library/Preferences-folder.

2. On the
iBook, use the Finder/Network-icon to connect to the server
(iMacServer in my case) and browse to the iTunes-folder where you placed
all the music (still on the iBook, in the Finder-window opened when you
connect to iMacServer). Make an Alias of the two files and the
directory in it (click on each of them once, go to the
File-submenu of the Finder and choose Make Alias), which
results in three new alias entries in the iTunes directory : one for each of
the two library files and one for the music folder. Drag these 3 aliases to the
Home/Music/iTunes-folder on the iBook and rename them by removing
the ‘ alias’ suffix.

3. In the Finder-window on
the iBook corresponding to the iMacServer browse to the
Home/Library/Preferences-folder and drag the two files
com.apple.iTunes.eq.plist and com.apple.iTunes.plist to
the Home/Library/Preferences-folder of the iBook. Launch
iTunes and it will give you access to the whole iTunes-collection
of iMacServer! In all, the three aliases and the 2 copied files take up
128 Kb…
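
If you prefer the Terminal over the Finder, roughly the same setup can be scripted; this is only a sketch and it assumes the iMacServer home is mounted under /Volumes/iMacServer, that the music folder is named "iTunes Music", and that iTunes is happy with symbolic links where the Finder uses aliases (which I have not verified):

# assumes the iMacServer home is already mounted at /Volumes/iMacServer
cd ~/Music/iTunes
ln -s "/Volumes/iMacServer/Music/iTunes/iTunes 4 Music Library" .
ln -s "/Volumes/iMacServer/Music/iTunes/iTunes 4 Music Library.xml" .
ln -s "/Volumes/iMacServer/Music/iTunes/iTunes Music" .   # assumed name of the music folder
cp "/Volumes/iMacServer/Library/Preferences/com.apple.iTunes.plist" ~/Library/Preferences/
cp "/Volumes/iMacServer/Library/Preferences/com.apple.iTunes.eq.plist" ~/Library/Preferences/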


NOG master class update


Yesterday I made a preliminary program for the first two months
of the masterclass non-commutative geometry. It is likely that
the program will still undergo changes as at the moment I included only
the mini-courses given by Bernhard
Keller
and Markus Reineke but several other people have
already agreed to come and give a talk, for example Jacques Alev (Reims),
Tom Lenagan (Edinburgh),
Shahn Majid (London) and
Giovanna Carnovale (Padua), among others. And in
May, Fred assures me, Maxim Kontsevich will give a couple of talks.

As for the contents of the two courses I will be
teaching, I changed my mind slightly. The course non-commutative
geometry I teach jointly with Markus Reineke, and while making the program
I realized that I have to teach the full 22 hours before he starts
his mini-course in the week of March 15-19, to cover the few things
he needs, like :

To derive all the counting of points formulas, I only need from your course:

– the definition of formally smooth algebras and basic properties, like being hereditary
– the definition of the component semigroup
– the fact that dim Hom - dim Ext is constant along components.

This I need even over finite fields $F_q$, but I went through your proof in
“One quiver”, and it works. The key fact is that even over $F_q$, the
infinitesimal lifting property implies smoothness in the sense that
dimension of variety = dimension of (schematic) tangent space in any
$F_q$-valued point. But I think it’s fine for the students if you do all
this over C, and I’ll only sketch the (few) modifications for
algebras over $F_q$.

So my plan is to do all of
this first and leave the (to me) interesting problem of trying to
classify formally smooth algebras birationally to the second
course, projects in non-commutative geometry, which fits the title
as a lot of things still need to be done. My previous idea for that course,
to give applications of non-commutative orders to the resolution of
singularities (in particular of quotient singularities), as very roughly
explained in my three talks on non-commutative geometry@n, I now
propose to relegate to the Friday afternoon seminar. I’ll be
happy to give more explanations on all this (in particular more
background on central simple algebras and the theory of (maximal)
orders) if other people work through the main part of the paper in the
seminar. In fact, all (other) suggestions for seminar-talks are welcome
: just tell me in person or post a comment to this post.


Bill Schelter’s Maxima

Bill
Schelter was a remarkable man. First, he was a top-class mathematician.
If you allow yourself to be impressed, read his proof of the
Artin-Procesi theorem. Bill was also among the first to take
non-commutative geometry seriously. Together with Mike Artin he
investigated a notion of non-commutative integral extensions and he was
the first to focus attention on formally smooth algebras (a
suggestion later taken up by, among others, Cuntz-Quillen and Kontsevich) and a
relative version with respect to algebras satisfying all identities of
n x n matrices, which (via work of Procesi) led to smooth@n
algebras. To youngsters, he is probably best known as the co-inventor of
Artin-Schelter regular algebras. I still vividly remember an
overly enthusiastic talk by him on the subject in Oberwolfach, sometime
in the late eighties. Secondly, Bill was a genuine Lisp-guru and
a strong proponent of open source software, see for example his
petition against software patents. He maintained
his own version of Kyoto Common Lisp, which developed into Gnu
Common Lisp. A quote on its history :

GCL is
the product of many hands over many years. The original effort was known
as the Kyoto Common Lisp system, written by Taiichi Yuasa and Masami
Hagiya in 1984. In 1987 new work was begun by William Schelter, and that
version of the system was called AKCL (Austin Kyoto Common Lisp). In
1994 AKCL was released as GCL (GNU Common Lisp) under the GNU public
library license. The primary purpose of GCL during that phase of it’s
existence was to support the Maxima computer algebra system, also
maintained by Dr. Schelter. It existed largely as a subproject of
Maxima.

Maxima started as Bill’s version of
Macsyma, an MIT-based symbolic computation program, to which he
added many routines, one of which was Affine, a package that
allowed one to do Groebner-like computations in non-commutative
algebras (implementing Bergman’s diamond lemma) and which he
needed to get a grip on 3-dimensional Artin-Schelter regular
algebras. Michel and I convinced Fred to acquire funds to
buy us a work-station (costing at the time as much as 20 to 30 iMacs) and have Bill
flown in from the States with his tape of maxima to let him
port it to our DEC-station. Antwerp was probably for years
the only place in the world (apart from MIT) where one could do
calculations in affine (probably highly illegal at the time).
Still, lots of people benefitted from this, among others Michaela
Vancliff
and Kristel Van Rompay in their investigation
of 4-dimensional Artin-Schelter regular algebras associated to an
automorphism of a quadric in three-dimensional projective space.
Yesterday I ran into Bill (alas virtually) by browsing the
crypto-category of Fink. There it was, maxima, Bill’s package! I tried to install it
with the Fink Commander and failed but succeeded from the command line.
So, if you want to have your own version of it type

sudo fink install maxima

from the Terminal and it will install without
problems (giving you also a working copy of common lisp). Unfortunately
I do not remember too much of Macsyma or Affine but there is plenty of
documentation on the net. Manuals and user guides can be obtained from
the maxima homepage and the University of Texas
(Bill’s university) maintains an online manual, including a cryptic description of
some Affine-commands. But probably I’ll have to send Michaela an
email asking for some guidance on this… Here, as a tribute to Bill, who
died in July 2001, is the opening banner:

iMacLieven:~ lieven$ /sw/bin/maxima
Maxima 5.9.0 http://maxima.sourceforge.net
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
This is a development version of Maxima. The function bug_report()
provides bug reporting information.
(C1)
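
And a first quick test that the fresh install actually computes something (the (C1), (C2) labels are Maxima’s input prompts; older Maxima versions print the results in upper case):

(C1) factor(x^4 - 1);              /* should return (x - 1) (x + 1) (x^2 + 1) */
(C2) integrate(1/(1 + x^2), x);    /* should return atan(x) */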

bandwidth measures


One day (hopefully) lots of MP3, JPEG and perhaps even
MPEG-files will be flying around our wireless home-network. But I
didn’t have any idea of how much data I could cram through the
Airport-connections. To estimate the available bandwidth of a
network there is a nice free tool around, iperf, of which you can download binaries for
almost any platform, including OS X. So click on the MacOS X (Darwin 6.4)
binary button halfway down the iperf-page and you get an
iperf-1.7.0-powerpc-apple-darwin6.4 folder on your Desktop,
which you may rename to just iperf. Do this on two computers connected to the
Airport-network you want to measure. Now, decide which of the two will
play the ‘server’ and which the ‘client’ (the end result does not
depend on this choice). So fire up the Terminal of the serving
computer and type

sudo ~/Desktop/iperf/iperf -s

and you will
get a message saying that the server is listening on TCP port 5001. Go
to SystemPreferences/Network to obtain the IP-address of the server
(say it is 10.0.1.5). Walk over to the ‘client’-computer and type
into its Terminal

sudo ~/Desktop/iperf/iperf -c 10.0.1.5 -r

and after a few moments it will compute the bandwidth of the
connection for you. Here is a sample output of two Airport-card
iMacs connected to the same Airport-Extreme base station :

iMacLieven:~/Desktop/iperf lieven$ ./iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 10.0.1.2 port 5001 connected with 10.0.1.7 port 49245
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.3 sec  2.77 MBytes  2.27 Mbits/sec
------------------------------------------------------------
Client connecting to 10.0.1.7, TCP port 5001
TCP window size: 65.0 KByte (default)
------------------------------------------------------------
[  4] local 10.0.1.2 port 49515 connected with 10.0.1.7 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.2 sec  2.73 MBytes  2.23 Mbits/sec

indicating a bandwidth of approximately 2.25 Mbits/sec. If we replay the same game with two
AirportExtreme-card iMacs on the same network we can nearly
triple (!) the bandwidth :
[eMacAnn:~] lieven% cd Desktop/iperf
[eMacAnn:~/Desktop/iperf] lieven% ./iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 10.0.1.5 port 5001 connected with 10.0.1.6 port 49314
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  8.50 MBytes  7.11 Mbits/sec
------------------------------------------------------------
Client connecting to 10.0.1.6, TCP port 5001
TCP window size: 65.0 KByte (default)
------------------------------------------------------------
[  4] local 10.0.1.5 port 49320 connected with 10.0.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.9 sec  7.07 MBytes  5.45 Mbits/sec
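
If you want a longer or more fine-grained measurement than the default 10-second run, iperf also accepts a test duration and a report interval (run ./iperf -h for the full list of flags); for instance, on the client:

sudo ~/Desktop/iperf/iperf -c 10.0.1.5 -t 30 -i 5

which measures for 30 seconds and prints an intermediate report every 5 seconds.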

However, if these two
AirportExtreme-card computers connect to each other via the
Graphite-Airport base station, the bandwidth drops to a meagre 1.9
Mbits/sec, which is roughly the same as two Airport-card computers
connecting (which gave me 2.45 Mbits/s). Anyway, there is no immediate
problem with bandwidth on either network for what I have in mind.
Another important number to know is the real speed of our
internet-connection (for instance if I want to replace our old router by
a better documented one and have a measure for the in/decrease of the
connection-speed). Here, a good URL is performance.chello.at, which offers two tests :
String and String SSI. The latter one produces a graphical results page.


google spammers


In the google matrix post I tried to understand the concept
of the PageRank algorithm that Google uses to list pages according to
their ‘importance’. So, if you want your webpage to come out first in
a certain search, you have to increase your PageRank-value (which
normally is a measure of the webpages linking to your page) artificially. A
method to achieve this is link spamming, that is, if page A is
the webpage of which you want to increase the PageRank value, take a page
B (either under your control or that of a befriended webmaster) and add a
dummy link page B -> page A. To find out the effect of this on the
PageRank, and how the second eigenvalue of the GoogleMatrix is able to
detect such constructs, let us set up a micro-web consisting of
just 3 pages with links 1->2 and 1->3. The corresponding GoogleMatrix
(with c=0.85 and v=(1/3,1/3,1/3)) is

1/3   1/20   1/20
1/3   9/10   1/20
1/3   1/20   9/10

which has eigenvalues 1, 0.85 and 0.28.
The eigenvector with eigenvalue 1 (the PageRank) is equal to (0.15,1,1),
so page 2 and page 3 are equally important to Google, and if we scale
PageRank such that it adds up to 100% over all pages, the relative
importance values are 6.9%, 46.5% and 46.5%. In this case the eigenvector
corresponding to the second eigenvalue 0.85 is (0,-1,1) and hence
detects the two leaf-nodes. Now, assume the owner of page 2 sets up a
link spam by creating page 4 and linking 4->2, then the corresponding
GoogleMatrix (with v=(1/4,1/4,1/4,1/4)) is

77/240   3/80    3/80    3/80
77/240   71/80   3/80    37/80
77/240   3/80    71/80   3/80
 3/80    3/80    3/80    37/80

which has eigenvalues
1, 0.85, 0.425 and 0.283. The PageRank eigenvector with eigenvalue 1 is
in this case (0.8,8.18,5.35,1), or in relative importance % we have
(4.9%,50.1%,32.7%,6.1%), and we see that the spammer achieved his/her
goal. The eigenvector corresponding to the second eigenvalue is
(0,-1,1,0), which again gives the leaf-nodes, and the eigenvector of the
third eigenvalue is (0,-1,0,1) and detects the spam-construct.
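
If you have Maxima installed (see the Bill Schelter’s Maxima post above), you can double-check these numbers yourself; something along these lines should do, eigenvectors() returning the eigenvalues together with the corresponding eigenvectors:

A : matrix([1/3, 1/20, 1/20],
           [1/3, 9/10, 1/20],
           [1/3, 1/20, 9/10]);   /* the GoogleMatrix of the 3-page micro-web */
eigenvectors(A);                 /* should reproduce the eigenvalues 1, 0.85 and 0.28 above */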


an even better LaTeX system

A
previous post, the best LaTeX system, was a commercial for Gerben
Wierda’s i-Installer to get a working tetex
distribution. I’ve been working happily with this TeX-system for two
years now but recently ran into a few (minor) problems. In the process
of solving these problems I created myself a second tetex-system,
more or less by accident. This is what happened. On the computer at the
university I once got fun packages running, such as a chess-, go-
and Feynman diagrams-package, but somehow I cannot reproduce this
on my home-machine : I get lots of errors about missing fonts etc. As I
really wanted to TeX some chess-diagrams I went surfing for the most
recent version of the chess-package and found one for Linux and
one under the Fink-project : the chess-tex package. So, I did a

sudo fink install chess-tex

forgetting that in good Fink-tradition you can
only install a package by installing at the same time all packages
needed to run it, so I was given a whole list of packages that Fink
wanted to install, including a full tetex-system. Did I want to
continue? Well, I had to think about that for a moment but realized that
the iTeX-tree was living under /usr/local whereas Fink
creates trees under /sw, so there should not really be a problem,
so yes, let’s see what happens. It took quite a while (well over an hour
and a half) but when it was done I had a second full TeX-system, but how
could I get it running? Of course I could try to check it via the
command line, but then I remembered that there is an alternative to
TeXShop, namely iTeXMac,
which advertises that it can run either iTeX or the Fink-distribution of
tetex. So I downloaded it and looked in the preferences, which indeed
contain a pane where you can choose between using the standard
tetex-distribution or the Fink-distribution, and iTeXMac finds
the relevant folders automatically. So I wrote a quick chess diagram, ran
it through iTeXMac and indeed it produced the graphics I expected! This
little system gave me some confidence in the Fink-distribution so I
fired up the Fink-Commander and looked under categories :
text
for other TeX packages I could install and there were plenty :
Latex2HTML, Latex2rtf, Feynman, tex4ht and so on. Installing them with
the commander is fun : just click on the package you want and click
‘Install’ under the Source-dropdown window

and in the lower part of the window you can follow the
installation process, whereas the upper part tells you what packages are
already installed and what their version-number is. I still have to
figure out how I will add new style files to this fink-tree and I have
to get used to the iTeXMac-editor but so far I like the robustness of
the system and the easy install procedure, so try it out!
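
I haven’t checked which macros the chess-tex package actually provides; as a rough idea of what such a chess diagram looks like in LaTeX, here is a minimal example using the skak package instead (assuming skak is installed in your TeX tree):

\documentclass{article}
\usepackage{skak}    % assumed present; the Fink chess-tex package may ship different macros
\begin{document}
\newgame                        % start from the initial position
\mainline{1. e4 e5 2. Nf3 Nc6}  % play and typeset a few moves
\showboard                      % print a diagram of the current position
\end{document}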


the google matrix

This morning there was an intriguing post on arXiv/math.RA
entitled A Note on
the Eigenvalues of the Google Matrix
. At first I thought it was a
joke but a quick Google revealed that the PageRank algorithm really
is at the heart of Google technology, so I simply had to find out more
about it. An extremely readable account of it can be found in The PageRank Citation Ranking: Bringing Order to the Web which is really the
start of Google. It is coauthored by the two founders : Larry Page and
Sergey Brin. A quote from the introduction

“To test the utility of PageRank for search, we built a web
search engine called Google (Section
5)”

Here is an intuitive idea of
_PageRank_ : a page has high rank if the sum of the ranks of its
_backlinks_ (that is, pages linking to the page in question) is
high and it is computed by the _Random Surfer Model_ (see
sections 2.5 and 2.6 of the paper). More formally (at least from my
quick browsing of some papers, maybe the following account is slightly
erroneous and I’ll have to spend some more time reading) let
N be the number of webpages (estimated between 3 and 4
billion) and consider the N x N matrix
A, the so-called GoogleMatrix, where

A = cP + (1-c)(v x vec(1))

where P is the
column-stochastic matrix (meaning : all entries are zero or positive and
the sum of all entries in each column adds up to 1) with
entries

P(i,j) = 1/N(j) if j->i and 0 otherwise

where i and j are webpages and i->j
denotes that page i has a link to page j and where N(i) is the total
number of pages linked to from page i (all this information is available
once we download page i); that is, column j of P distributes the weight
of page j evenly over the pages it links to. c is a constant 0 < c < 1 and
corresponds to the fraction of webpages containing an _outlink_ (that
is, a link to another page) among all webpages (it seems that Google uses
c=0.85 as an estimate). Finally, v is a column vector with zero or
positive numbers adding up to 1 and vec(1) is the constant row vector
(1,…,1). The idea behind this term is that in the _Random Surfer
Model_ used to compute the PageRank, the Googlebot (normally following
links randomly in the pages it enters) jumps in (1-c)x100% of its steps
to an entirely different webpage, where the chance that it will end up at
page i is given by the i-th entry of v (this is to avoid being trapped
in a web-loop). So, in Google’s model the bot _teleports_ itself
randomly every 6th link or so. Now, the PageRank is a
column-eigenvector for the GoogleMatrix A with eigenvalue 1 which can be
approximated by the RandomSurfer model and the rate of convergence of
this process depends on the _second_ largest eigenvalue for A
(the largest being 1). Now, in the paper posted this morning a simple
proof is given that this eigenvalue is c (because the matrix P has
multiple eigenvalues equal to 1). According to a previous paper on the
subject The
Second Eigenvalue of the Google Matrix
, this statement has
implications for the convergence rate of the standard PageRank algorithm
as the web scales, for the stability of PageRank to perturbations to the
link structure of the web, for the detection of Google spammers, and for
the design of algorithms to speed up PageRank. But I’ll have to
read more to understand the Google spammers bit…
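
To see the Random Surfer / power-iteration idea at work on a toy example, one can again use Maxima; a small sketch, taking for A the 3-page micro-web matrix of the google spammers post above:

A : matrix([1/3, 1/20, 1/20],
           [1/3, 9/10, 1/20],
           [1/3, 1/20, 9/10]);
x : transpose(matrix([1/3, 1/3, 1/3]));   /* start from the uniform distribution */
for i : 1 thru 50 do x : A . x;           /* repeatedly let the surfer take a step */
float(x);   /* approaches roughly (0.07, 0.465, 0.465), the normalized PageRank */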


the cpu 2 generation

Never
ever tell an ICT-aware person that you want to try to set up a
home-network before you understand all 65536 port-numbers and their corresponding
protocols. Here is what happened to me this week. Jan Adriaenssens returned from an extended vacation in New Zealand and I told him
about my problems with trying to set up WebDAV securely. He
stared at me with that look that teenage children have if they
find out their parents don’t know how to handle the simplest things on a
mobile, such as saving a number or writing an SMS, let alone use the
dictionary… and asked ‘now why would you want to do that??? I just
use AppleTalk to connect to my computer securely’. Now I’m not such a
fool that I didn’t try this out, but I didn’t manage to get matrix
mounted on my Desktop. ‘Oh, but that’s probably because of the
firewall’ Jan said, ‘just send an email to Peter (the guy running the
defenses here) and ask him to open up ports 548 and
427…’ And sure enough, five minutes later the problem was
solved and I could throw my WebDAV-plans in the dustbin (although I
think I’ve found a use for WebDAV but will keep this a bit longer to
myself until I’ve checked it out). If you think that was the end of it,
think twice. Never ever point an ICT-professional to your
computer. They then start looking at its firewall-logs and find all
sorts of things such as : ‘I noticed that traffic from port 53
was dropped to the firewall, could it be that you configured the
firewall as DNS-server. If this is the case, you better remove it and it
will increase your network-speed, I think.’ And sure enough, that
IP-address was set on my machine as one of two possibilities for the
DNS-server, so I quickly removed it, and in the process thought that maybe
I should also remove the other one, so I sent Peter another email
asking whether that was ok. It turned out that the second IP address was
the genuine DNS-server, so I got a dry answer back : ‘You better leave
this as it is otherwise not much will work…’ Oh, shame, shame, eternal
shame on me!

My only defense is that I still belong
to what I would call the cpu 2 generation (I’m a few years too
old to belong to the more computer-aware generation X). When I
started out doing research in 1980 the single most important command was

cpu 2

which you had to type before you could run any program.
By typing this you asked to be given 2 minutes of central processing
time, so you had to write all your programs in such a way that they either
gave a result back within 2 minutes or included lots of
output-commands, giving you a chance to determine at which parameters you
would restart the program for your next cpu 2. I once computed in
this way all factorial maximal orders in quaternion algebras by spending
a couple of days in the computer room. These days any desktop computer
would finish this task in half a minute. Perhaps the younger generations
will appreciate all the hard computer-work we had to do back then if
they read a bit from the computer history museum page!
