Computing

SBD++ partners provide transnational access, allowing large-scale experiments to be executed on High-Performance Computing facilities, or virtual access, upon submission of a detailed project proposal to the mediators.

The main characteristics of the computing facilities are listed in the table below. For further information, interested users can contact the corresponding institution's mediators (see the Mediators page).

| Institution | Machine Name | Details |
| --- | --- | --- |
| CNR | D4Science Infrastructure | 4 sites, 3,650 cores, 14 TB RAM |
| CNR | Servers KDD | 4 x (1,024 GB RAM, 192 cores, 11 TB disk); 14 x (32 GB RAM, 4 cores, 4 TB disk); 2 x (32 GB RAM, 4 cores, 4 TB disk, NVIDIA T4 GPU); 1 x (32 GB RAM, 16 cores, 5 TB disk, NVIDIA Quadro RTX 6000 GPU) |
| UNIPI | | 1 cluster with 40 cores, 60 GB memory and 24 TB disk |
| UT | Rocket HPC | 2,700 cores, 400 TB disk space |
| BSC | MareNostrum4 | 3,456 nodes, 165,888 CPU cores, 390 TB main memory |
| BSC | Nord3-HPC | 756 compute nodes, 12,096 Intel Sandy Bridge-EP E5-2670 cores, 24.2 TB main memory |
| BSC | CTE-Power 9 | 52 compute nodes, each with 2 x IBM POWER9 8335-GTH @ 2.4 GHz, 4 x NVIDIA V100 (Volta) GPUs, 512 GB main memory, 2 x 1.9 TB SSD local storage and 2 x 3.2 TB NVMe |
| BSC | Nord3-Cloud | OpenStack nodes for VMs: 2 x 16 CPU cores and 32 GB RAM per node |
| ETHZ | | 1 NVIDIA TITAN RTX GPU, 1 TB storage |
| ETHZ | Leonhard Open | 5,128 CPU cores and 2,334,720 CUDA cores; single-core job memory limit of 3 TB, 100 TB for parallel jobs; up to 10 TB of storage shared with the group |
| LUH | Hadoop cluster | 240 nodes, 900 CPU cores, 0.4 PB HDFS storage, 6 TB main memory |
| AALTO | | 1 cluster with 2,900 cores, 12 TB memory (1,024 GB per job) and 420 TB disk space |
| AALTO | Grid | 500 CPU cores, including 7 GPU machines equipped with NVIDIA GTX 480; 2-6 GB memory per job |
| USFD | | 121 worker nodes, 2,024 CPU cores, 40 GPUs; total memory: 12,160 GiB; fast network filesystem (Lustre): 669 TiB |
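
Most of the clusters above are operated through a batch scheduler; BSC's MareNostrum4, for example, uses SLURM. Purely as an illustration of what submitting a large-scale experiment to such a facility can look like, the Python sketch below writes a SLURM batch script and hands it to `sbatch`. The job name, resource requests, and the `./run_experiment` executable are hypothetical, not taken from any listed system; actual partitions, accounts, and limits must be agreed with the corresponding mediator.

```python
import os
import subprocess
import tempfile

# Hypothetical SLURM batch script: the job name, resource values, and the
# ./run_experiment executable are illustrative placeholders only.
JOB_SCRIPT = """\
#!/bin/bash
#SBATCH --job-name=sbd-experiment
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=48
#SBATCH --time=02:00:00
#SBATCH --output=experiment_%j.out

srun ./run_experiment
"""


def submit(script_text: str) -> str:
    """Write the batch script to a temporary file and submit it with sbatch.

    Returns sbatch's stdout, normally 'Submitted batch job <id>'.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        result = subprocess.run(
            ["sbatch", path], capture_output=True, text=True, check=True
        )
    finally:
        os.unlink(path)  # remove the temporary script whether or not sbatch succeeded
    return result.stdout.strip()


if __name__ == "__main__":
    print(submit(JOB_SCRIPT))
```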