The IBM cluster now combines the original IBM SUR cluster (22 dual-CPU systems) with the new SUR cluster system (34 quad-processor systems). In total there are 22 dual 200 MHz Power3 nodes and 34 quad 375 MHz Power3-II nodes. Each dual-CPU node has 1 GB of RAM and 7 GB of available scratch space per processor. Each quad-CPU node has 16 GB of RAM and available scratch disk of 17 GB (2 nodes), 35 GB (16 nodes), or 70 GB (16 nodes) per processor. The high-speed interconnect is Gigabit Ethernet, but it is partitioned into two main node groups; as a result, the largest potential job can currently request 128 processors.
To use the cluster you must first obtain an account by sending a request to Brett Bode (email@example.com). Note that cluster use is currently limited to the groups involved in the SUR grant.
To log on to the cluster, telnet (or ssh) to cluster.scl.ameslab.gov. Once logged in to this node you have full access to your home directory (on /clusters) and all compilers. Internally this node is also known as nl1.
To move between nodes you should use ssh. For batch jobs this means you need to create a key with a zero-length passphrase. To do this:
ssh-keygen (use the defaults and hit enter twice when prompted for the passphrase)
cp ~/.ssh/identity.pub ~/.ssh/authorized_keys
or, if you already have an authorized_keys file, add the new key to the end of it.
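The steps above can be sketched as one shell sequence. Here SSHDIR stands in for your ~/.ssh (a temporary directory is used so the sketch is safe to try anywhere), and the append covers both the no-file and existing-file cases:

```shell
# SSHDIR stands in for ~/.ssh; on the cluster, use your real ~/.ssh instead.
SSHDIR=$(mktemp -d)/.ssh
mkdir -p "$SSHDIR"
chmod 700 "$SSHDIR"

# Generate a key pair non-interactively with an empty passphrase (-N "").
ssh-keygen -q -t rsa -N "" -f "$SSHDIR/identity"

# Append the public key; this creates authorized_keys if it does not
# exist yet, and adds to the end if it already does.
cat "$SSHDIR/identity.pub" >> "$SSHDIR/authorized_keys"
chmod 600 "$SSHDIR/authorized_keys"
```

The key file name and type (identity, rsa) mirror the commands above; newer SSH versions may default to different names, which is fine as long as the public key ends up in authorized_keys.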
To run jobs you must submit them to PBS (the queueing system). In general the submit command is:
qsub -j oe -o outputname -l nodes=#,walltime=days:hours:minutes:secs Your_Script
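As a concrete example, a hypothetical script my_job.sh requesting 4 nodes for 2 hours could be submitted as below (the script name, output name, and resource values are all illustrative; the echo just shows the assembled command):

```shell
# Assemble a qsub command: merge stdout and stderr (-j oe) into job.out,
# and request 4 nodes with a walltime of 0 days, 2 hours, 0 min, 0 sec.
NODES=4
WALLTIME=0:2:0:0
CMD="qsub -j oe -o job.out -l nodes=$NODES,walltime=$WALLTIME my_job.sh"
echo "$CMD"
```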
The names of the high-speed nodes will be listed in the file 'hostlist' in the scratch directory on the first node in the hostlist.
If your job requires more memory or disk you can place additional flags on the qsub command. To request larger memory, add the bigmem flag (ie nodes=#:bigmem,walltime...). For larger disk (larger disk implies large memory, so don't use both flags) add the flag d140GB or d280GB for 35GB or 70GB per-processor disk sizes (ie nodes=#:d140GB,walltime...).
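For instance, the resource flags attach to the node count with a colon. These two hypothetical submissions request the large-memory nodes and the 70GB-per-processor disk nodes respectively (script name, node counts, and walltimes are illustrative):

```shell
# bigmem selects the large-memory nodes; d280GB selects the 70GB
# per-processor scratch nodes (which are also large-memory nodes,
# so bigmem is not combined with a disk flag).
BIGMEM_CMD="qsub -j oe -o job.out -l nodes=8:bigmem,walltime=1:0:0:0 my_job.sh"
BIGDISK_CMD="qsub -j oe -o job.out -l nodes=8:d280GB,walltime=1:0:0:0 my_job.sh"
echo "$BIGMEM_CMD"
echo "$BIGDISK_CMD"
```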
To run GAMESS jobs use the gms command. It will prompt you for the # of nodes, the walltime, the maximum disk per processor, and possibly the maximum memory per processor. GAMESS is maintained by Mike Schmidt so please send GAMESS related questions to him (firstname.lastname@example.org).