¡Rio! is part of the Baker Lab’s computational resource suite: a cluster of 132 enterprise-class servers manufactured by Hewlett-Packard. The system contains 1,296 cores operating at 2.7 GHz, giving ¡Rio! a peak theoretical double-precision performance of 14 teraFLOPS – a figure that would have made it the world’s most powerful supercomputer in 2002. The system also provides 6.8 terabytes of main memory (roughly 5.25 GB per core), a centralized file system able to store 144 terabytes of user data, and a fully non-blocking, link-aggregated 2 Gb/s Ethernet network.
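The quoted figures can be sanity-checked with a quick back-of-the-envelope calculation. The assumption of 4 double-precision FLOPs per core per cycle (an add and a multiply through each 128-bit SSE unit, typical of the K10 generation) is ours, not stated above:

```python
# Back-of-the-envelope check of ¡Rio!'s quoted specifications.
cores = 1296
clock_hz = 2.7e9          # 2.7 GHz per core
flops_per_cycle = 4       # assumption: 2-wide SSE add + 2-wide SSE multiply (K10)

# Peak theoretical double-precision throughput.
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Peak: {peak_tflops:.1f} TFLOPS")         # ~14.0 TFLOPS

# Main memory per core (6.8 TB total, expressed in GB).
memory_gb = 6.8e3
per_core_gb = memory_gb / cores
print(f"Memory per core: {per_core_gb:.2f} GB")  # ~5.25 GB
```

Both results match the figures quoted for the cluster, which suggests the 14-teraFLOPS number is indeed a theoretical peak (Rpeak) rather than a measured LINPACK result.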
Users of ¡Rio! have access to parallel versions of MATLAB, Mathematica, OpenFOAM, Delft3D, and ADCIRC. For users who require graphics rendering, a specialized visualization workstation is also available, containing eight quad-core 3.1 GHz AMD Opteron 8393 processors, 512 GB of memory, and dual NVIDIA Quadro 6000 graphics cards.
The cluster runs a stock installation of Debian 8 GNU/Linux with tuned builds of the Open MPI library and the TCP stack. Each node uses AMD’s K10 CPU architecture, which supports the SSE4a vectorization instruction set. This not only allows executables to be compiled and optimized cluster-wide, but also dramatically increases the performance of the floating-point mathematical operations the cluster undertakes.
The 1,296 CPU cores in ¡Rio! generate a peak theoretical performance of 14 teraFLOPS – enough to have made it the world’s most powerful computer in 2002, and to have kept it on the Top 500 list until 2009.
When its nodes are not otherwise in use, ¡Rio! donates CPU time to the Rosetta@Home project, which uses that time to determine the three-dimensional shapes of complex proteins at the molecular level. This knowledge may eventually help find cures for diseases such as HIV, Alzheimer’s disease, and malaria. ¡Rio! is routinely the number-one contributor of CPU time to this worldwide project.