UNIX GCG Site: How much CPU power?

Jasper Rees - Oxford University jrees at vax.oxford.ac.uk
Thu Sep 1 14:44:13 EST 1994

In article <1994Aug29.220115.34484 at hulaw1.harvard.edu>, robison at mito.harvard.edu (Keith Robison) writes:
> Our department is considering upgrading the central computer which is
> used to run GCG. We will definitely go with a UNIX box.  I was wondering
> if folks would be kind enough to share their experiences with how
> fast a machine you need.

I'll lay out the configuration as it currently stands for the Oxford University
Molecular Biology Data Centre. It is running OpenVMS, but this does not
materially affect the argument except for the availability of clustering.
This configuration has grown by a factor of 3 in disk space and 15 in CPU power
over the last 2.5 years, which gives some feeling for the needs of a data
centre. Right now the configuration I describe is sufficient for the
foreseeable 2 years ahead, but you never have enough CPU power if you have
people running phylogenetics or doing structure solutions. 

> Key points which I can think of are:
> 	1) Architecture & speed (Sun, HP, SGI, Alpha, etc)

VAXstation 4000/90 (32 SPECmarks) and DEC 3000/500S (>100 SPECmarks; I forget
the exact figure offhand).
The VAXstation has 64 MB RAM, with 0.435, 1.4 and 3 x 2.1 GB (Seagate Wren 9)
disks; the SCSI bus is slow.

The DEC 3000/500 has 256 MB RAM, with 3 x 1.0 GB (DEC RZ58) and 4 x 2.1 GB
(Seagate Wren 9) disks, all on slow SCSI buses.

There is a sundry collection of other peripherals, none of which affect
performance. 

The two systems are clustered over FDDI, so most of the inter-system
communication goes over this, some over ethernet; OpenVMS handles this.

Users all log in over ethernet, mostly via telnet, some via LAT, and a few from
terminal servers and serial lines. The networks would not be expected to be
limiting at any stage of this. The LAN that the Data Centre connects to runs at
2% average bandwidth, maxing out occasionally at about 20% on a bad day. The
campus backbone is FDDI, so is not limiting; connections to the hospital site
(about 50% of users) go over a 2 MB/sec link and might be limiting. 

Most sessions use VT-series terminal emulation from a Mac or PC; some use X via
eXceed on PCs, MacX on Macs, or VXT2000-series X terminals. X performance is
limited mostly by X server performance, I would say. 

> 	2) How many total users in department

600 total, spread across about 20 departments on campus, plus a bunch from
outside too. 

> 	3) A reasonable guess as to the number of simultaneous users

It varies from 20 to >35 during the working day, nearly all logged into the
VAXstation (i.e. the slower of the two systems), with batch queues running
things like database searching, phylogenetics etc. at lower priorities (i.e.
they have no effect on interactive response time, unless memory is a problem,
which with a properly tuned configuration it should not be). 

> 	4) Is your current configuration working? (i.e. are people
> 	   happy with response time)

Seems fine, but experience shows that few users ever complain about anything... 
I would say that both character-cell and X behaviour are sufficient for this
level of usage at this stage, given the distribution I've described. This of
course requires that the system and user quotas are properly tuned; otherwise
you can figure any system will look horrible. 

> 	5) Any relevant comments

Assuming that you will be running a lot of X windows in future (whether to run
GCG, Staden, AceDB or whatever), you should plan for say 3 MB of RAM per X
session on a 32-bit OS, maybe 5 on a 64-bit OS. Then work out what the OS needs
itself, then double it all and buy at least that much memory. It's worth being
careful how you buy memory so as not to lock yourself into configurations where
you have to dump a lot of expensive RAM (say in 64 MB chunks) to get any more.
Resale prices are trivial, and the large, faster RAM boards are non-linearly
priced, even at street pricing. 
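The sizing rule above can be sketched as a quick back-of-envelope calculation.
The per-session and doubling figures are the ones from the paragraph; the
session count and OS overhead in the example are illustrative assumptions, not
measurements from any real system:

```python
# Back-of-envelope RAM sizing for an X-session server, per the rule above:
# ~3 MB per X session on a 32-bit OS (~5 MB on 64-bit), plus what the OS
# itself needs, then double the total for headroom.

def ram_needed_mb(x_sessions, os_overhead_mb, per_session_mb=3):
    """Return a rough RAM recommendation in MB."""
    working_set = x_sessions * per_session_mb + os_overhead_mb
    return 2 * working_set  # "double it all and buy at least that much"

# Hypothetical example: 30 simultaneous X sessions, OS needing ~32 MB.
print(ram_needed_mb(30, 32))     # 32-bit OS -> 244 MB
print(ram_needed_mb(30, 32, 5))  # 64-bit OS -> 364 MB
```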

Otherwise it's hard to imagine getting a system with insufficient CPU power
from any vendor right now to satisfy your needs, but it's worth looking at what
the upgrade options are. 

Pretty much everything ships with Fast or Fast-Wide SCSI now, but look
carefully at how you are going to configure disks if you plan to run database
searching. It's getting increasingly hard to get DNA databases through the CPU
fast enough with GCG's version of fasta (pace Bill, I haven't tested native
fasta, but it probably needs data *faster* if anything) as CPU speeds increase,
and unless the bus bandwidth and disk speeds are sufficient this will become a
problem. For now, Fast-Wide SCSI with disks like the Seagate Barracuda should
be fine; if not, then RAID of some sort will obviously help, and after that you
are down to tuning files individually, which isn't worth it given the rate of
database updates. Fragmentation isn't an issue for Unix; for OpenVMS it is, and
fragmented database files equal mucho problemo... 
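The disk-versus-CPU point above amounts to a simple bottleneck calculation: a
database scan runs at the slower of the disk's sustained transfer rate and the
rate at which the search code can consume data. A minimal sketch, where all
the numbers (database size, disk and CPU throughput) are made-up illustrative
figures rather than benchmarks of fasta or any particular drive:

```python
# Rough check of whether disk bandwidth can keep a database scan CPU-bound.
# All throughput figures below are illustrative assumptions.

def scan_time_s(db_size_mb, disk_mb_per_s, cpu_mb_per_s):
    """Time to stream the whole database, limited by the slower stage."""
    effective_rate = min(disk_mb_per_s, cpu_mb_per_s)
    return db_size_mb / effective_rate

# Say the DNA database is 400 MB, the disk sustains ~5 MB/s, and the search
# code could consume ~8 MB/s: the scan is disk-bound at 80 seconds.
print(scan_time_s(400, 5, 8))   # -> 80.0
# Striping disks (RAID) to ~10 MB/s makes the same scan CPU-bound:
print(scan_time_s(400, 10, 8))  # -> 50.0
```

This is why faster CPUs alone stop helping once the disks become the slower
stage of the pipeline.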

My personal preference right now? Probably a DEC 2100 system... it's the best
price/performance on the market, you can get a variety of CPU options (up to 4,
with 275 MHz Alpha chips) and up to 2 GB of RAM, it has a PCI bus and (fast)
SCSI, supports a range of RAID options (sorry, I don't have details to hand),
and runs a 64-bit OS in OSF/1, version 3.0 of which supports SMP and clustering
(similar to VMS clustering) if you want to get bigger than a single machine in
future. All the other options are in the "obviously" range, but I can dig out
the options catalogue if you want. Add CSGL to that for software and Warranty
Plus for 3-5 years' warranty, and it's a pretty good deal. 

The only caveat would be that if you want high-power 3D graphics as well, then
you are looking at workstations, and then it's either SGI, HP or DEC, in
proportions depending on the availability of the software you want for the 3D
work. But that doesn't sound like what you are asking for at the start. 

> Keith Robison
> Harvard University
> Department of Cellular and Developmental Biology
> Department of Genetics / HHMI
> robison at mito.harvard.edu 

Disclaimer: I built the above, I ran it, now I'm back in the real world making
protein for crystals and taking vacations... :))

regards,    jasper

D Jasper G Rees  MA DPhil                  You are in a twisty little maze of
Wellcome Senior Research Fellow            cDNA clones, all different.
The Sir William Dunn School of Pathology   You are characterising the ones
Oxford University                          most like talin.....
South Parks Road
Oxford, OX1 3RE, United Kingdom

Tel:  +44 865 275567
FAX:  +44 865 275501               Email:  Jasper.Rees at Pathology.Oxford.Ac.UK
