Parallel computer systems

Salomon

SGI cluster, IT4I, Ostrava.

The cluster consists of 1008 compute nodes, with a total of 24192 compute cores, 129 TB of operating memory, and over 2 Pflop/s of theoretical peak performance:

  • Compute nodes:
    • 576x without accelerator, 2x 12-core processor Intel Xeon E5-2680 v3 (2.5 GHz), 128 GB of memory, no local disk.
    • 432x with MIC accelerator, 2x 12-core processor Intel Xeon E5-2680 v3 (2.5 GHz), 128 GB of memory, no local disk, 2x MIC accelerator Intel Xeon Phi 7120P (61 cores, 16 GB of memory).
    • 2x with GPU accelerator (for visualization purposes), 2x 14-core processor Intel Xeon E5-2695 v3 (2.3 GHz), 512 GB of memory, no local disk, GPU accelerator Nvidia Quadro K5000 (4 GB of memory).
    • 2x SMP/NUMA system SGI UV 2000 (for large memory computations), 14x 8-core processor Intel Xeon E5-4627 v2 (3.3 GHz), 3328 GB of memory, 2x local disk SSD 400 GB.
  • Interconnection:
InfiniBand FDR56 in a 7D enhanced hypercube topology.
    • Gigabit Ethernet.
  • Software: operating system CentOS Linux 6.6 (Red Hat clone), PBS Pro 12 for distributed resource management, and other tools.
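The quoted 2 Pflop/s figure can be reproduced from the node counts above with a back-of-envelope calculation. The per-cycle throughput numbers below are assumptions not stated in the source: a Haswell core retires 16 double-precision FLOPs per cycle (two AVX2 FMA units), and a Xeon Phi 7120P peaks at roughly 1.21 Tflop/s (61 cores × 1.238 GHz × 16 DP FLOPs/cycle).

```python
# Sketch: theoretical peak of Salomon from the spec list above.
# Assumed (not in the source): 16 DP FLOPs/cycle per Haswell core,
# Xeon Phi 7120P base clock 1.238 GHz.

CPU_NODES = 576 + 432            # nodes holding the 24192 Xeon cores
CORES_PER_NODE = 2 * 12          # 2x 12-core Intel Xeon E5-2680 v3
CPU_GHZ = 2.5
FLOPS_PER_CYCLE = 16             # AVX2 FMA, double precision

cpu_tflops = CPU_NODES * CORES_PER_NODE * CPU_GHZ * FLOPS_PER_CYCLE / 1000

PHIS = 432 * 2                   # two Xeon Phi 7120P per MIC node
PHI_TFLOPS = 61 * 1.238 * 16 / 1000   # ~1.21 Tflop/s per accelerator

mic_tflops = PHIS * PHI_TFLOPS
total_tflops = cpu_tflops + mic_tflops

print(f"CPUs: {cpu_tflops:.0f} Tflop/s, "
      f"MICs: {mic_tflops:.0f} Tflop/s, "
      f"total: {total_tflops:.0f} Tflop/s")
```

Roughly 968 Tflop/s from the CPUs plus about 1044 Tflop/s from the 864 Xeon Phi accelerators gives just over 2 Pflop/s, consistent with the stated peak.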

Anselm

Bull cluster, IT4I, Ostrava.

The cluster consists of 209 compute nodes, with a total of 3344 compute cores, 15 TB of operating memory, and over 94 Tflop/s of theoretical peak performance:

  • Compute nodes:
    • 180x Bullx B510 blade server without accelerator, 2x octa-core processor Intel Sandy Bridge E5-2665 (2.4 GHz), 64 GB of memory, local disk SATA III 500 GB.
    • 23x Bullx B515 blade server with GPU accelerator, 2x octa-core processor Intel Sandy Bridge E5-2470 (2.3 GHz), 96 GB of memory, local disk SATA III 500 GB, GPU accelerator Nvidia Tesla Kepler K20.
    • 4x Bullx B515 blade server with MIC accelerator, 2x octa-core processor Intel Sandy Bridge E5-2470 (2.3 GHz), 96 GB of memory, local disk SATA III 500 GB, MIC accelerator Intel Xeon Phi 5110P.
    • 2x Bullx R423-E3 server (FAT node), 2x octa-core processor Intel Sandy Bridge E5-2665 (2.4 GHz), 512 GB of memory, 2x local disk SATA III 300 GB and 2x local disk SSD 100 GB.
  • Interconnection:
    • High-bandwidth, low-latency InfiniBand QDR network (IB 4x QDR, 40 Gbps) with a fully non-blocking fat-tree topology; transfer rate 2170 MB/s via a TCP connection (single stream) and up to 3600 MB/s via the native InfiniBand protocol.
    • Gigabit Ethernet, transfer rate 114 MB/s.
  • Shared disk system:
    • HOME Lustre object storage, 1x disk array NetApp E5400, 227x disk NL-SAS 2 TB in RAID6.
    • SCRATCH Lustre object storage, 2x disk arrays NetApp E5400, 106x disk NL-SAS 2 TB in RAID6.
    • Lustre metadata storage, 1x disk array NetApp E2600, 12x disk SAS 300 GB in RAID5.
  • Software: operating system Bullx Linux Server 6.3 (Red Hat clone), PBS Pro 12 for distributed resource management, Intel Parallel Studio 13.1 and GNU compilers, MPI libraries (Bullx MPI 1.2.4, Intel MPI 4.1, OpenMPI 1.6 and 1.8, MPICH2 1.9), programming tools for numerical computations PETSc 3.4.4, Trilinos 11.2.3, ANSYS, COMSOL Multiphysics, and others.
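Both clusters use PBS Pro for resource management, so an MPI run on the nodes above is submitted as a batch job. The following is a minimal sketch of such a job script; the queue name, module name, and executable are illustrative assumptions, not taken from the source:

```shell
#!/bin/bash
#PBS -N mpi_test
#PBS -l select=4:ncpus=16:mpiprocs=16   # 4 Anselm nodes, 16 cores each
#PBS -l walltime=01:00:00
#PBS -q qprod                           # hypothetical queue name

module load impi                        # illustrative module name
cd "$PBS_O_WORKDIR"
mpirun ./my_mpi_app                     # 64 MPI ranks over InfiniBand
```

The `select=4:ncpus=16:mpiprocs=16` line requests four whole nodes (2x octa-core each) and places one MPI rank per core; the script would be submitted with `qsub`.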

Enna

Symmetric multiprocessor Supermicro, Institute of Geonics CAS, Ostrava:

  • Hardware: barebone system Supermicro SuperServer 5086B-TRF, 8x octa-core processor Intel Xeon E7-8837 (2.66 GHz, 32 nm, smart cache 24 MB), 512 GB of shared memory, disk subsystem with 2x hard disk SATA III 2 TB (software RAID-1) and 1x disk SSD SATA III 256 GB for the operating system, graphics accelerator Asus GTX650-DC-1GD5 (384 CUDA cores), IPMI card.
  • Software: operating system CentOS 6.3, development suites Intel Cluster Studio XE 2013 for Linux and CUDA 5.0, programming tools for numerical computations Matlab 2011b (including Parallel Computing Toolbox 5.2), Comsol Multiphysics 4.4 (with modules Structural Mechanics, Subsurface Flow and LiveLink for MATLAB) and others.

Hubert

Symmetric multiprocessor TYAN, Institute of Geonics CAS, Ostrava:

  • Hardware: barebone system Tyan Transport VX50-B104985, 8x quad-core processor AMD Opteron 8380 (2.5 GHz, L2 cache 4x 512 kB, L3 cache 6 MB), 128 GB of shared memory, disk subsystem with 8x SAS hard disk 450 GB, graphics accelerator Nvidia GeForce GTX 280, numerical accelerator Nvidia Tesla C1060 (GPU based), IPMI card, uninterruptible power supply (UPS) APC Smart-UPS 3000 VA.
  • Software: operating system CentOS 6.0, development suites Intel Cluster Studio XE 2013 for Linux and CUDA 4.1, programming tools for numerical computations Matlab 2011, Comsol 4.2, Ansys 13.0, Elmer, Trilinos and others.

Archive: Ra, Simba, Termit, Thea, Natan, Lomond