Computing

SBD++ partners provide transnational access for the execution of large-scale experiments on High-Performance Computing (HPC) facilities.

The main characteristics of the computing facilities are listed in the table below.

For further information, interested users can contact the corresponding institution's mediators (see the Mediators page).

| Institution | Machine name | Processors | Connection, memory, other info |
|---|---|---|---|
| CNR | D4Science Infrastructure | 4 sites, 3,650 cores | 14 TB RAM |
| CNR | Other resources | 16 servers, 64 cores | 8-32 GB memory and 24 TB disk |
| CNR | Other resources | 1 server with 64 cores @ 2.6 GHz | 125 GB memory, 15 TB disk |
| CNR | Other resources | 1 server with 40 cores @ 2.4 GHz, 1 NVIDIA Titan Xp GPU | 1.2 TB memory, 90+ TB HDD storage, 2 TB SSD storage |
| CNR | Other resources | 11 physical servers with 128 cores | 1.2 TB memory, 90+ TB HDD storage, 2 TB SSD storage, 10-gigabit NICs and Internet bandwidth |
| CNR | Other resources | Servers with 256 cores, 2 NVIDIA Titan Xp GPUs | 3 TB RAM and 20 TB HDD |
| UNIPI | | 1 cluster with 40 cores | 60 GB memory and 24 TB disk |
| UT | Rocket HPC | 2,700 cores | 400 TB disk space |
| BSC | MareNostrum4 | 165,888 CPU cores, 3,456 nodes | 390 TB of main memory |
| BSC | Nord3-HPC | 756 compute nodes, 12,096 Intel Sandy Bridge-EP E5-2670 cores | 24.2 TB of main memory |
| BSC | CTE-Power9 | 52 compute nodes, each with 2 x IBM POWER9 8335-GTH @ 2.4 GHz and 4 x NVIDIA V100 (Volta) GPUs | Per node: 512 GB main memory, 2 x 1.9 TB SSD local storage, 2 x 3.2 TB NVMe |
| BSC | Nord3-Cloud | Nodes with 16 x 2 CPU cores each, for VMs via OpenStack (see the OpenStack sketch after the table) | 32 GB RAM per node |
| ETHZ | | 1 NVIDIA TITAN RTX GPU | 1 TB storage |
| ETHZ | Leonhard Open | 5,128 CPU cores and 2,334,720 CUDA cores | Job memory limit: 3 TB single-core, 100 TB parallel; up to 10 TB of storage shared with the group |
| LUH | Hadoop cluster | 40 nodes, 900 CPU cores | 2.4 PB HDFS storage, 6 TB main memory (see the HDFS sketch after the table) |
| AALTO | | 1 cluster with 2,900 cores | 12 TB memory (1,024 GB per job) and 420 TB disk space |
| AALTO | Grid | 500 CPU cores, including 7 GPU machines with NVIDIA GTX 480 | 2-6 GB per job |
| USFD | | 121 worker nodes, 2,024 CPU cores, 40 GPUs | Total memory: 12,160 GiB; fast network filesystem (Lustre): 669 TiB |
| TUDelft | | | |
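
The Nord3-Cloud entry above provides virtual machines through OpenStack. As a rough illustration of what programmatic access to such an OpenStack partition can look like, here is a minimal sketch using the openstacksdk Python library. The cloud profile, image, flavor, and network names are hypothetical placeholders; the real values and the access procedure are defined by the hosting institution (BSC).

```python
# Minimal, illustrative sketch of launching a VM on an OpenStack-based
# partition such as Nord3-Cloud. All names below (cloud profile, image,
# flavor, network) are hypothetical placeholders.
import openstack

# Credentials are read from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="nord3-cloud")  # hypothetical profile name

image = conn.compute.find_image("ubuntu-22.04")  # hypothetical image
flavor = conn.compute.find_flavor("m1.large")    # hypothetical flavor
network = conn.network.find_network("private")   # hypothetical network

server = conn.compute.create_server(
    name="sbd-experiment-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.status)
```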
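
Similarly, the LUH entry lists 2.4 PB of HDFS storage. A minimal sketch of reading data from an HDFS deployment with pyarrow follows; the namenode host, port, and path are hypothetical placeholders, and the client additionally needs the Hadoop native libraries (libhdfs) plus whatever authentication LUH prescribes.

```python
# Minimal, illustrative sketch of reading a file from HDFS with pyarrow.
# Host, port, and path are hypothetical placeholders.
from pyarrow import fs

hdfs = fs.HadoopFileSystem(host="namenode.example.org", port=8020)
with hdfs.open_input_stream("/datasets/sample.csv") as stream:
    print(stream.read(1024))  # read the first 1 KiB of the file
```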