The Amber Cluster

From 2004 to 2010, the mathematics and computer-science departments shared a 16-node parallel cluster called Amber.

The nodes communicated using special message-passing software implementing standards such as the Message Passing Interface (MPI) or Parallel Virtual Machine (PVM).

What Does It Do, and How Does It Work?

High-Performance Computing Clusters (HPCCs) are a way of using multiple relatively inexpensive (“commodity”) computers to emulate, and even surpass, the performance of far more expensive multiprocessor systems.

Our cluster was a “Class I” Beowulf cluster, meaning it was built entirely from commodity, off-the-shelf hardware: Dell PowerEdge 400SC computers, each with a 2.8 GHz Pentium 4 processor, 1 GB or 1.25 GB of RAM, and a gigabit Ethernet port.

The nodes communicated through a gigabit Ethernet switch using special “message-passing” software that ran on top of the regular network protocols.
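As a rough illustration of what that message passing looks like, here is a minimal MPI program in C in which one process sends an integer to another. This is a generic sketch, not code from Amber itself; it assumes an MPI implementation with the usual mpicc/mpirun tooling, and it needs at least two processes.

    /* Minimal MPI message passing: rank 0 sends an integer to rank 1.
     * Run with at least two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int value = 42;                      /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }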

To take advantage of the cluster's parallel-processing ability, a program's algorithms must be inherently parallelizable, and its code must be written against the message-passing libraries so that the nodes can communicate. You can't take just any program, run it on a cluster, and see parallel speedup.
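To make that concrete, here is a sketch of an inherently parallelizable job: each process sums a strided slice of the integers 1 through N, and MPI_Reduce combines the partial sums on one node. The problem size and decomposition are illustrative assumptions, not details of any program that ran on Amber.

    /* Parallel sum of 1..N: each rank handles a strided slice, and
     * MPI_Reduce gathers the partial sums onto rank 0. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        const long long N = 1000000;    /* illustrative problem size */
        int rank, size;
        long long local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums every size-th integer, starting at rank + 1. */
        for (long long i = rank + 1; i <= N; i += size)
            local += i;

        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 1..%lld = %lld\n", N, total);

        MPI_Finalize();
        return 0;
    }

With a typical MPI installation, this would be compiled with mpicc and launched across the nodes with something like mpirun -np 16 ./sum; the exact commands depend on the MPI implementation in use.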

For much more background on the Beowulf computing concept, see beowulf.org and the Beowulf FAQ.

Writing and Running Parallel Programs

More information about writing and running parallel programs is available in our parallel-software support section.