Computer & Software Configuration

All compute, storage and software resources at VLSCI are available via the Resource Allocation Scheme.

PCF Hardware

IBM Blue Gene/Q - Avoca

  • Peak performance of 838.86 teraFLOPS (see the arithmetic after this list).
  • 65,536 PowerPC based 1.6GHz cores.
  • A total of 64TB RAM.
  • Interconnect between compute nodes forms a five-dimensional torus providing excellent nearest neighbour and bisection bandwidth.
  • Suitable for large-scale parallel processing.
  • Compute nodes run a custom lightweight operating system called the Compute Node Kernel (CNK), which is similar to Linux and mostly POSIX compliant.
  • The head node runs the RHEL 6 operating system, a variant of Linux.
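
The headline figure can be reproduced with the usual cores × clock × FLOPs-per-cycle arithmetic, taking the commonly quoted 8 double-precision floating-point operations per cycle for each Blue Gene/Q core (its 4-wide QPX unit with fused multiply-add):

  65,536 cores × 1.6 GHz × 8 FLOPs/cycle
    = 65,536 × 12.8 GFLOPS per core
    ≈ 838.86 teraFLOPS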

SGI Altix XE Cluster - Bruce

  • Peak performance of 11.6 teraFLOPS (see the arithmetic after this list).
  • 1088 Intel Nehalem compute cores (8 per node) running at 2.66GHz.
  • 110 nodes with 24GB RAM per node.
  • 20 nodes with 48GB RAM per node.
  • 6 nodes with 144GB RAM per node.
  • Connected to a high speed, low latency QDR Voltaire InfiniBand fabric for inter-process communications.
  • Connected to a 100TB Panasas file system.
  • The system runs the CentOS 5 operating system, a variant of Linux.
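
The same kind of arithmetic reproduces Bruce's figure, taking the commonly quoted 4 double-precision floating-point operations per cycle for a Nehalem core (128-bit SSE with separate add and multiply units):

  1,088 cores × 2.66 GHz × 4 FLOPs/cycle
    = 1,088 × 10.64 GFLOPS per core
    ≈ 11.6 teraFLOPS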

IBM iDataplex x86 system - Merri

  • Peak performance of 7.3 teraFLOPS (see the arithmetic after this list).
  • 688 Intel Nehalem compute cores running at 2.66GHz.
  • 36 nodes with 96GB RAM and 8 cores per node.
  • 44 nodes with 48GB RAM and 8 cores per node.
  • 3 nodes with 1024GB RAM and 16 cores per node.
  • Connected to a high speed, low latency QDR Voltaire InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a variant of Linux.
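
Merri uses the same 2.66GHz Nehalem cores as Bruce, so the same per-core figure of 10.64 GFLOPS applies:

  688 cores × 10.64 GFLOPS per core ≈ 7.3 teraFLOPS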

IBM iDataplex x86 system - Barcoo

  • Peak performance: the compute nodes currently deliver 20 teraFLOPS, with the Xeon Phi cards* nominally adding 1 teraFLOP each (see the offload sketch after this list).
  • 1120 Intel Sandy Bridge compute cores running at 2.7GHz.
  • 67 nodes with 256GB RAM and 16 cores per node.
  • 3 nodes with 512GB RAM and 16 cores per node.
  • 20 Xeon Phi 5110P cards installed across 10 nodes.
  • Connected to a high speed, low latency Mellanox FDR14 InfiniBand switch for inter-process communications.
  • The system runs the RHEL 6 operating system, a variant of Linux.
  • *Read more about the Xeon Phi cards in this November 2013 article.
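
The Xeon Phi cards are used as accelerators alongside the host Sandy Bridge cores. As a rough illustration of the offload programming model, the minimal sketch below marks a loop to run on the first card in a node. It assumes the Intel compiler toolchain is available on Barcoo; the file name, array size and values are made up for the example and it is not a VLSCI-supplied code.

  /* vecadd.c - minimal sketch of the Intel offload model for the
     Xeon Phi cards; illustrative only.
     Build with the Intel compiler, e.g. icc -openmp vecadd.c */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      const int n = 1000000;
      int i;
      double *a = malloc(n * sizeof *a);
      double *b = malloc(n * sizeof *b);
      double *c = malloc(n * sizeof *c);

      for (i = 0; i < n; i++) {
          a[i] = i;
          b[i] = 2.0 * i;
      }

      /* Run the loop on the first Xeon Phi card in the node; the
         in/out clauses copy a and b to the card and c back. */
      #pragma offload target(mic:0) in(a : length(n)) in(b : length(n)) out(c : length(n))
      #pragma omp parallel for
      for (i = 0; i < n; i++)
          c[i] = a[i] + b[i];

      printf("c[%d] = %.1f\n", n - 1, c[n - 1]);
      free(a);
      free(b);
      free(c);
      return 0;
  }

OpenMP is used inside the offloaded region so that the loop is spread across the card's cores rather than running on a single one.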

Computing resources on these supercomputers may be applied for under the VLSCI Resource Allocation Scheme.

Storage infrastructure

  • 100TB Panasas Parallel Data Store (attached to Bruce).
  • 700TB GPFS Parallel Data Store (shared by Barcoo, Merri and Avoca).
  • 1PB HSM tape system, made available through GPFS (shared by Barcoo, Merri and Avoca).

PCF Software

VLSCI makes available a range of open source and commercial software on its systems.

Software lists are available for the x86 HPC clusters and for the Blue Gene.

Generally, VLSCI provides all the necessary software at no cost to users or institutes. Users and potential users with particular software needs should make a request via the Help Tracking System, describing the software they have in mind and including a reference web site, vendor contact details or similar information. Sometimes, in the case of difficult applications, we can suggest an alternative that may be more readily available.

Similarly, if researchers have a particular computational need, the help desk may be able to advise them on appropriate software. Please ask!