Data Infrastructure

The storage and computational backbone of the PRP@CERIC is ORFEO, the data center managed by the Laboratory of Data Engineering (LADE) of Area Science Park. Here, novel HPC and cloud-ready technologies for the general fields of life sciences and genomics are continuously developed, tested, and deployed at production grade for the project's partners. The services offered include:

Simple Linux Utility for Resource Management (SLURM)

A managed SLURM instance provides access to both standard and accelerated HPC resources. The facility comprises more than 1300 cores distributed over more than 20 nodes, with up to 1500 GiB of RAM. All nodes are interconnected via 100 Gb/s InfiniBand and a redundant 25 Gb/s Ethernet network. In addition, AI-ready nodes equipped with NVIDIA V100 GPUs and two DGX workstations with A100 GPUs are available.
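A GPU job on such a cluster is typically submitted through a batch script. The sketch below is purely illustrative: the partition name, module name, and container image are assumptions, not ORFEO's actual configuration.

```shell
#!/bin/bash
# Hypothetical SLURM batch script for an accelerated job.
# Partition, module, and image names below are assumed for illustration.
#SBATCH --job-name=alignment
#SBATCH --partition=gpu          # assumed name of the accelerated partition
#SBATCH --gres=gpu:1             # request one GPU (e.g. a V100)
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=04:00:00

module load singularity          # assumed module environment
srun singularity exec --nv my_pipeline.sif run_analysis.sh
```

The script would be submitted with `sbatch script.sh`, and the queue inspected with `squeue`.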


Kubernetes

A managed instance of Kubernetes is available for testing and running container-based cloud workloads. Containers are a lightweight, efficient, and portable way to package, distribute, and run applications, and they promote scalability and efficient resource utilization. By leveraging the Kubernetes orchestration platform, containerized applications can be managed and scaled on demand, optimizing resource allocation and enhancing overall efficiency.
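Scaling a containerized workload on demand looks like the following sketch, assuming access to the cluster via `kubectl`; the deployment name and image are hypothetical.

```shell
# Hypothetical workflow: deploy a containerized analysis and scale it.
# The image and deployment names are illustrative assumptions.
kubectl create deployment analysis --image=registry.example.org/genomics/aligner:1.0

# Scale out manually to four replicas when demand grows...
kubectl scale deployment analysis --replicas=4

# ...or let Kubernetes adjust the replica count automatically
# based on observed CPU utilization.
kubectl autoscale deployment analysis --min=2 --max=10 --cpu-percent=80
```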

Storage with Different Performance Levels

The facility also offers storage with different performance levels. Ceph is the primary storage technology, interfaced with both the HPC partition and the Kubernetes infrastructure. Through Ceph, we offer fast storage based on solid-state drives and regular storage on hard disks. In addition, long-term storage solutions on low-cost devices or tapes are available.
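From the Kubernetes side, Ceph-backed tiers are usually exposed as storage classes that a workload selects in its volume claim. The storage class name below ("ceph-ssd") is an assumption for illustration, not the cluster's actual class.

```shell
# Hypothetical request for the fast, SSD-backed Ceph tier from Kubernetes.
# The storage class name "ceph-ssd" is an assumed example.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-fast
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-ssd   # assumed SSD-backed class; an HDD class would serve bulk data
  resources:
    requests:
      storage: 100Gi
EOF
```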

Moreover, the HPC datacenter at the University of Salerno is available within the PRP@CERIC.
The datacenter offers three types of computing subsystems: thin nodes, fat nodes, and GPU nodes. It also features two storage subsystems: one with a high-performance parallel file system and one for cold data storage. Everything is interconnected through InfiniBand HDR 200 Gb/s switches and managed by four dedicated nodes.