- ARIS
- Armcluster
- Avitohol
- BA-HPC
- Cy-Tera
- Gamma
- ICAM BlueGene/P
- IMAN1-Booster/King
- InfraGRID
- Leo
- MK-03-FINKI
- NIIFI SC
- PARADOX
- UPT-HPC
- Zaina
ARIS
ARIS (Advanced Research Information System) is an HPC cluster based on IBM's NeXtScale platform, incorporating Intel Xeon E5 v2 (Ivy Bridge) processors, with a theoretical peak performance (Rpeak) of 190.85 TFlops and a sustained performance (Rmax) of 179.73 TFlops on the Linpack benchmark. With a total of 426 compute nodes, each incorporating two 10-core CPUs (Intel Xeon E5-2680v2 "Ivy Bridge", 2.8 GHz), it offers more than 8,500 processor cores (CPU cores) interconnected through an FDR InfiniBand network, a technology offering very low latency and high bandwidth. Each compute node offers 64 GB of RAM. In addition, the system offers about 1 Petabyte of storage, based on the IBM General Parallel File System (GPFS). The system software allows developing and running scientific applications and provides several pre-installed compilers, scientific libraries and popular scientific application suites.
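As an illustration of using one of the pre-installed scientific libraries, a minimal C sketch based on FFTW (one of the modules listed below) is given here; the compile line is an assumption for a GCC-style environment and is not taken from the site documentation.

```c
/* Minimal 1-D complex FFT with FFTW, illustrating use of a pre-installed
 * scientific library. Illustrative compile line (assumed, not site-specific):
 *   gcc fft_demo.c -lfftw3 -lm -o fft_demo
 */
#include <stdio.h>
#include <fftw3.h>

int main(void)
{
    const int n = 8;
    fftw_complex *in  = fftw_alloc_complex(n);
    fftw_complex *out = fftw_alloc_complex(n);

    /* Create the plan first, then fill the input with a simple test signal. */
    fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    for (int i = 0; i < n; i++) {
        in[i][0] = (double)i;   /* real part */
        in[i][1] = 0.0;         /* imaginary part */
    }

    fftw_execute(plan);

    for (int i = 0; i < n; i++)
        printf("out[%d] = %g + %gi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```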
| Supported Software Modules | |
|---|---|
| Climate | FERRET, IDL, OPENFOAM, Paraview, R, RegCM, WRF, WRF-CHEM |
| Cultural Heritage | |
| Life Sciences | AMBER, Desmond, FFTW, GAMESS, GROMACS, JVM, Molekel, NAMD, OpenCV, TCL, VMD |
Armcluster
In 2004, the first high-performance computing cluster in Armenia (Armenian Cluster - Armcluster) was developed at the Institute for Informatics and Automation Problems of NAS RA (IIAP); it consists of 64 nodes with 128 Xeon 3.06 GHz processors. Myrinet high-bandwidth and Gigabit networks interconnect the nodes of the cluster: the Myrinet network is used for computation and the Gigabit network for task distribution and management. The cluster achieved 523.4 GFlops performance on the HPL (High Performance Linpack) test. Many software packages have been developed on it to support advances in the modeling and analysis of quantum systems, signal and image processing, the theory of radiation transfer, the calculation of time constants for bimolecular chemical reactions, as well as a system of mathematically proven methods, fast algorithms and programs for solving certain classes of problems in linear algebra, calculus, algebraic recognizability, and test-checkable design of built-in control circuits. In addition, user-friendly tools and scientific gateways have been developed for Armcluster.
| Supported Software Modules | |
|---|---|
| Climate | WRF |
| Cultural Heritage | |
| Life Sciences | GROMACS, JVM |
Avitohol
The supercomputer system Avitohol at IICT-BAS consists of 150 HP Cluster Platform SL250s Gen8 servers, each equipped with two Intel Xeon E5-2650 v2 8-core 2.6 GHz CPUs and two Intel Xeon Phi 7120P co-processors. Six management nodes control the cluster, four of them dedicated to providing access to the storage system through Fibre Channel. The storage system is an HP MSA 2040 SAN with a total of 96 TB of raw disk capacity. All the servers are interconnected with fully non-blocking FDR InfiniBand in a fat-tree topology. HP CMU is used for fabric management, together with the Torque/Moab combination for local resource management. Most of the computing capacity of the system comes from the Intel Xeon Phi 7120P co-processors, which use the Many Integrated Core (MIC) technology; for optimum use of these resources the Intel compilers and the Intel MKL are deployed. Since this supercomputer is relatively new, the software and libraries are still being deployed and the computational environment streamlined.
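Since the Intel compilers and MKL are named as the main route to exploiting the co-processors, a minimal hedged C sketch of calling an MKL BLAS routine is shown below. The compile line and the mention of MKL's Automatic Offload (controlled by the MKL_MIC_ENABLE environment variable) are assumptions about how such an environment might be used, not a description of the actual site configuration.

```c
/* Minimal sketch: double-precision matrix multiply via Intel MKL's CBLAS
 * interface. On a host with Xeon Phi (MIC) co-processors, MKL's Automatic
 * Offload can, if configured, move large DGEMMs to the co-processor, e.g.
 *   export MKL_MIC_ENABLE=1
 * Illustrative compile line (assumed): icc -mkl dgemm_demo.c -o dgemm_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <mkl.h>

int main(void)
{
    const int n = 2048;   /* matrix dimension, large enough to be worth offloading */
    double *A = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    double *B = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    double *C = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    if (!A || !B || !C) { fprintf(stderr, "allocation failed\n"); return 1; }

    for (size_t i = 0; i < (size_t)n * n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);

    printf("C[0] = %g (expected %g)\n", C[0], 2.0 * n);

    mkl_free(A); mkl_free(B); mkl_free(C);
    return 0;
}
```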
| Supported Software Modules | |
|---|---|
| Climate | |
| Cultural Heritage | |
| Life Sciences | |
BA-HPC
The Bibliotheca Alexandrina (BA) has been operating a High-Performance Computing (HPC) cluster since August 2009. The goal of this initiative is to provide the computational resources needed for modern scientific research in various domains as a merit-based service to researchers locally and regionally. The cluster consists of 130 compute nodes, providing a total of 1,040 CPU cores, each with access to 1 GB of RAM. Storage for input and output data is provided by a Lustre file system hosted on storage hardware with a total raw capacity of 36 TB. The cluster is wired with 10 Gbps DDR InfiniBand. The BA-HPC participated in the LinkSCEEM-2 project and continues to participate in joint calls with the Cy-Tera cluster operated by the Cyprus Institute. The majority of usage on the system comes from projects by researchers at Egyptian universities. In the VI-SEEM project, the BA is dedicating 20 percent of the system, i.e., approximately 1.8 million core hours yearly, for hosting projects that will be granted access to HPC resources through VI-SEEM. In addition, 100 TB on the BA large-scale storage cluster are being dedicated to the VI-SEEM project.
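For orientation, the quoted allocation is roughly consistent with dedicating 20 percent of the cluster's cores around the clock: 0.20 × 1,040 cores × 8,760 h/year ≈ 1.82 million core hours per year.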
| Supported Software Modules | |
|---|---|
| Climate | FERRET, NCL, OPENFOAM, Paraview, R, WRF |
| Cultural Heritage | |
| Life Sciences | FFTW, GAMESS, GROMACS, JVM, Molekel, NAMD, OpenCV, TCL |
Cy-Tera
Cy-Tera is a hybrid CPU/GPU HPC cluster composed of 116 iDataPlex dx360 M3 nodes, with a theoretical peak performance of 30.5 TFlops. Of these, 98 are twelve-core compute nodes and 18 are GPU nodes with dual NVIDIA M2070 GPUs. Each node has 48 GB of memory and a 4x QDR InfiniBand network for MPI and for I/O to the 300 TB GPFS filesystem.
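Since the same InfiniBand fabric carries both MPI traffic and I/O to the shared GPFS filesystem, parallel applications commonly write shared files through MPI-IO; a minimal hedged sketch in C is given below. The file path and the build/run commands are illustrative assumptions, not site documentation.

```c
/* Minimal sketch of parallel I/O with MPI-IO: every rank writes its own
 * block of integers into one shared file on the parallel filesystem.
 * Illustrative build/run (assumed):
 *   mpicc mpiio_demo.c -o mpiio_demo
 *   mpirun -np 4 ./mpiio_demo
 */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024   /* integers written by each rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int buf[COUNT];
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;                      /* simple, recognisable payload */

    /* "output.dat" is a placeholder; on a real system this would sit on the
     * shared (e.g. GPFS) filesystem. */
    MPI_File fh;
    int rc = MPI_File_open(MPI_COMM_WORLD, "output.dat",
                           MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "rank %d: could not open output file\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Each rank writes at its own offset; the collective call lets the MPI
     * library optimise access to the parallel filesystem. */
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```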
| Supported Software Modules | |
|---|---|
| Climate | EMAC, FERRET, IDL, NCL, OPENFOAM, R, WRF, WRF-CHEM |
| Cultural Heritage | |
| Life Sciences | AMBER, Desmond, FFTW, GROMACS, NAMD, TCL |
Gamma
IMAN1, the Jordanian supercomputing center, operates a diversity of HPC cluster architectures (Gamma, Zaina, Booster/King) which are made available to the VI-SEEM project. One of them is Gamma, an Intel Xeon based hybrid CPU/GPU computing node equipped with an NVIDIA Tesla K20 GPU card. This node is used for data visualization and pattern/object detection, mainly for academic and research purposes.
| Supported Software Modules | |
|---|---|
| Climate | |
| Cultural Heritage | |
| Life Sciences | |
ICAM BlueGene/P
ICAM BlueGene/P is a supercomputer designed for highly scalable applications. The infrastructure is based on one IBM BlueGene/P rack with 1,024 physical CPUs (4,096 cores) and more than 1 TB of RAM. The computing power is backed by a 19 TB (RAID 5) storage system delivered over a 10 Gbps Ethernet connection.
ICAM BlueGene/P is managed by two special nodes, one dedicated to users (the head node) and one dedicated to management and monitoring activities. The storage is exported to the supercomputer by two NSD nodes connected directly to the storage system, using the IBM GPFS solution. ICAM BlueGene/P is air-cooled by two dedicated cooling units. The cooling model splits the cooling zone into two areas: a hot zone above the technical floor and a cold zone under it. The two cooling units work in cluster mode to ensure high availability.
| Supported Software Modules | |
|---|---|
| Climate | WRF |
| Cultural Heritage | |
| Life Sciences | FFTW, NAMD |
IMAN1-Booster/King
Both the Booster (IMAN1-03-PS3) and King (IMAN1-04-PS3) clusters are HPC clusters based on the IBM Cell processor (8 SPEs + 1 PPE). The Booster cluster is a small-scale cluster consisting of five PlayStation 3 (PS3) consoles, used to run pilot projects and to prove the concept of porting code to the IBM Cell processor architecture. Successful jobs on the Booster cluster are then moved to the large-scale King cluster (IMAN1-04-PS3), which consists of 250 PlayStations.
| Supported Software Modules | |
|---|---|
| Climate | |
| Cultural Heritage | |
| Life Sciences | |
InfraGRID
The InfraGRID cluster consists of 50 compute nodes powered by IBM BladeCenter-H technology. The entire solution is built from 4 BladeCenter chassis, each with 14 HS21 blade servers. Each blade server has dual quad-core Intel Xeon E5504 CPUs (clocked at 2.00 GHz) and 10 GB of RAM. Connectivity is delivered by: (a) InfiniBand for interconnect and storage, (b) Fibre Channel for dedicated storage, and (c) Gigabit Ethernet for service networking. The cluster was initially set up in 2009. In 2011 it was updated with a new IBM BladeCenter-H chassis holding 7 dual CPU/GPU HS22 blade servers; these have Intel Xeon CPUs clocked at 3.46 GHz and 32 GB of RAM, and their GPU cards are NVIDIA Tesla M2070Q (448 GPU cores and 6 GB of GDDR5 memory).
The cluster is managed by two service nodes, one dedicated to user access (also called the head node) and one dedicated exclusively to service actions and cluster management. The storage is shared using the GPFS file system over InfiniBand, with two dedicated NSD nodes. Service management is conducted through the IBM BladeCenter Advanced Management Module (AMM) built into the chassis and the Integrated Management Module (IMM) built into the blade servers, which allows remote administration and monitoring of all the installed hardware. The InfraGRID cluster is air-cooled; the cooling units are deployed in-row, with a well-delimited hot/cold area. Currently three APC InRow cooling units are installed, working in a cluster configuration to provide high availability of the cooling.
| Supported Software Modules | |
|---|---|
| Climate | WRF |
| Cultural Heritage | |
| Life Sciences | FFTW, JVM, OpenCV, TCL |
Leo
Leo is the newest HPC machine, named after Leó Szilárd, the physicist and co-inventor of the nuclear reactor. This Top500-qualified, heavily accelerated machine has 252 NVIDIA GPUs assisting 168 8-core Sandy Bridge CPUs, enabling highly accelerated codes. The system is located in Debrecen, the second largest city in Hungary. The machine is integrated into the PRACE European HPC network.
| Supported Software Modules | |
|---|---|
| Climate | Paraview, R |
| Cultural Heritage | |
| Life Sciences | FFTW, OpenCV, VMD |
MK-03-FINKI
The MK-03-FINKI cluster consists of 84 compute nodes based on HP BladeSystem c7000 technology. The entire solution is built from 2 BladeSystem chassis, each with 32 HP BL2x220c G7 blade servers. Each blade server has dual six-core Intel Xeon L5640 CPUs (clocked at 2.267 GHz) and 24 GB of RAM. Connectivity is delivered by: (a) QDR InfiniBand for interconnect and storage and (b) Gigabit Ethernet for service networking. The cluster was initially set up in 2012.
The cluster is managed by six service nodes: one dedicated to user access (also called the head node), one dedicated exclusively to service actions and cluster management, and four dedicated to storage. The storage is shared using the Lustre file system over InfiniBand, with two dedicated MGS/MDS nodes and two dedicated OSS nodes, all in HA mode. Service management is conducted using the HP BladeSystem Onboard Administrator, which allows remote administration and monitoring of all the installed hardware. MK-03-FINKI is cooled using the HP Modular Cooling System, with the cooling unit placed between the self-cooled racks.
| Supported Software Modules | |
|---|---|
| Climate | R, WRF |
| Cultural Heritage | |
| Life Sciences | FFTW, GROMACS, JVM, NAMD |
NIIFI SC
This is one of the smaller systems NIIF has been operating since 2012, with 64 12-core Opteron CPUs. It is one of the two machines operated at the NIIF HQ in Budapest. The system is integrated into the PRACE European HPC network.
| Supported Software Modules | |
|---|---|
| Climate | Paraview, R |
| Cultural Heritage | |
| Life Sciences | acml, blender, cuda, FFTW, Gaussian, GROMACS, Hdf5, ImageMagic, JVM, Lammps, maple, NAMD, parallel, Paraview, scalapack, teseract, VMD |
PARADOX
The fourth major upgrade of the PARADOX installation (PARADOX IV) became operational in September 2013. This upgrade consists of 106 working nodes and 3 service nodes. Working nodes (HP ProLiant SL250s Gen8, 2U height) are configured with two 8-core Intel Xeon E5-2670 Sandy Bridge processors at a frequency of 2.6 GHz and 32 GB of RAM (2 GB per CPU core). The total number of new processor cores in the cluster is 1,696. Each working node also contains a GP-GPU card (NVIDIA Tesla M2090) with 6 GB of RAM. With a total of 106 NVIDIA Tesla M2090 graphics cards, PARADOX is a premier computing resource in the wider region, providing access to a large production GPU cluster and new technology. The peak computing power of PARADOX is 105 TFlops. One service node (HP DL380p Gen8), equipped with a 10 Gbps uplink, is dedicated to cluster management and user access (gateway machine). All cluster nodes are interconnected via InfiniBand QDR technology through a non-blocking 144-port Mellanox QDR InfiniBand switch. The communication speed of all nodes is 40 Gbps in both directions, a qualitative step forward over the previous (Gigabit Ethernet) PARADOX installation. Administration of the cluster is enabled by an independent network connection through the iLO (Integrated Lights-Out) interface integrated on the motherboards of all nodes.
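As a rough sanity check of the quoted peak (the per-device figures used here are general published specifications, not taken from this document): the CPU partition contributes about 1,696 cores × 2.6 GHz × 8 flops/cycle ≈ 35.3 TFlops, and the 106 Tesla M2090 cards (≈ 665 GFlops double precision each) contribute ≈ 70.5 TFlops, for a total of ≈ 105.8 TFlops, consistent with the stated 105 TFlops.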
The PARADOX cluster is installed in four water-cooled racks. The cooling system consists of 4 cooling modules (one within each rack), connected via a system of pipes to a large industrial chiller and configured so as to minimize power consumption.
| Supported Software Modules | |
|---|---|
| Climate | DREAM, IDL, OPENFOAM, R, WRF |
| Cultural Heritage | |
| Life Sciences | acml, apr, boost, cp2k, espresso, FFTW, GROMACS, Hdf5, JVM, maple, NAMD, netcdf, numactl, numpy, OpenCV, parallel, scalapack, scalasca, scipy, siesta, TCL |
UPT-HPC
The HPC resources of UPT were implemented through an initiative of the Albanian Ministry of Infrastructure in collaboration with the People's Republic of China. The system includes three blocks of 8 blade servers and two separate servers, interconnected through two 1 Gbps Ethernet switches. In total the system has 208 cores; only two of the blade blocks are active (the third is not in use due to technical problems), leaving 144 active cores. Two local Ethernet segments are configured: an internal one used for MPI exchanges and an external one for Internet access. Only the two separate servers may be accessed from outside using SSH, serving as front-end nodes for the system. Currently, only minimal software for parallel processing is installed: GCC and MPICH-2. Other open source software may be installed on request.
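Given that only GCC and MPICH-2 are installed, a minimal MPI program in C is about the simplest workload that can be built and run on this system; the build and launch commands below are the standard MPICH ones and are meant as an illustrative sketch rather than site-specific documentation.

```c
/* Minimal MPI "hello world", buildable with the GCC + MPICH-2 stack
 * mentioned above. Illustrative build/run (assumed):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpiexec -n 8 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &name_len);

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```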
| Supported Software Modules | |
|---|---|
| Climate | OPENFOAM, Paraview |
| Cultural Heritage | |
| Life Sciences | |
Zaina
Zaina is an Intel Xeon based computing cluster with a 1 Gbit Ethernet interconnect. The cluster is used for code development, code porting and Synchrotron Radiation applications. It is composed of two Dell PowerEdge R710 and five HP ProLiant DL140 G3 servers.
| Supported Software Modules | |
|---|---|
| Climate | |
| Cultural Heritage | |
| Life Sciences | |