Storage Hardware overview

Once you connect to the HPC DEVANA cluster, you have direct access to two storage locations from the login nodes, and to a third that is available only while your job is running. The /home and /scratch storage directories are reachable directly from the login nodes. The /work storage is the local storage of each compute node and can only be accessed while a job is running on that node.
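
To see which of these locations are actually visible from the node you are on, and how much free space they report, you can query the mount points with the Python standard library. The short sketch below is only an illustration using the paths listed above; it is not a site-provided tool.

```python
import shutil
from pathlib import Path

# Storage locations described above. /work exists only on a compute node
# while a job is running, so it is normally absent on the login nodes.
for mountpoint in ("/home", "/scratch", "/work"):
    path = Path(mountpoint)
    if not path.exists():
        print(f"{mountpoint:10s} not available on this node")
        continue
    usage = shutil.disk_usage(path)
    print(f"{mountpoint:10s} {usage.free / 1e12:8.1f} TB free "
          f"of {usage.total / 1e12:.1f} TB total")
```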

Home

The /home storage directory is where you are placed after logging in and where your user directory is located. Currently, a user is limited to storing up to 1 TB of data in total across their home and project directories. When your project expires, the data in your HOME directory is retained for 90 days; after this period, both the data and your user account are purged.

We do not provide data backup services for any of the directories (/home, /scratch, /work).

Access                   Mountpoint  Limit per user  Backup  Net capacity  Throughput                  Protocol
Login and Compute nodes  /home       1 TB            no      547 TB        3 GB/s write / 6 GB/s read  NFS
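
Because the 1 TB limit covers your home and project data together, it can be useful to estimate your current usage before copying large results back from /scratch. The sketch below sums file sizes under your home directory with the Python standard library; it is a rough, client-side estimate for orientation only, not an official quota report.

```python
import os
from pathlib import Path

QUOTA_BYTES = 1 * 1000**4  # 1 TB /home limit per user (see the table above)

def directory_size(root: Path) -> int:
    """Sum the sizes of all regular files below root (rough estimate)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # unreadable or vanished file; skip it
    return total

used = directory_size(Path.home())
print(f"Approximate /home usage: {used / 1000**4:.2f} TB of "
      f"{QUOTA_BYTES / 1000**4:.0f} TB")
```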

Configuration of the HOME storage:

2x ThinkSystem SR630 V2(1U):

  • 2x 3rd Gen. Intel® Xeon® Scalable Processors Gold 6338: 2.00GHz/32-core/205W
  • 16x 16GB DDR4-3200MHz 8 ECC bits
  • 2x 480GB M.2 SATA SSD (HW RAID 1)
  • 2x InfiniBand ThinkSystem Mellanox ConnectX-6 HDR/200GbE, QSFP56 1-port, PCIe4
  • 1x Ethernet adapter ConnectX-6 10/25 Gb/s Dual-port SFP28, PCIe4 x16, MCX631432AN-ADAB
  • 2x ThinkSystem 430-16e SAS/SATA 12Gb HBA
  • 2x 1100W Power Supply
  • OS CentOS Linux 7

Fig. 1. Top view of ThinkSystem SR630 V2(1U) unit

1x DE6000H 4U60 Controller SAS:

  • 32GB controller cache
  • 100x 3,5" 8TB 7200RPM 256MB SAS 12Gb/s
  • RAID level 6
  • 8x 12 Gb SAS host ports
  • 1x 1 GbE port (UTP, RJ-45) per controller for out-of-band management

Fig. 2. Back view of DE6000H 4U60 Controller SAS unit

Scratch

The Scratch directory is designed for temporary data generated during computations. All tasks with heavy I/O requirements should use the SCRATCH file system as their working directory, or alternatively use the local disk storage available on each compute node under the /work mountpoint. Users are therefore required to transfer essential data from the SCRATCH filesystem to their HOME directory after completing their computations and to remove any temporary files.

The SCRATCH filesystem is implemented as a parallel BeeGFS filesystem, accessible through the 100 Gb/s InfiniBand network from all login and compute nodes. The accessible capacity is 282 TB, shared among all users. There are no quotas on individual user accounts or on project directories. We also do not offer data backup services for the /scratch directory.

Access                   Mountpoint  Limit per user  Backup  Net capacity  Throughput  Protocol
Login and Compute nodes  /scratch    none            no      282 TB        XXX GB/s    BeeGFS
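
As described above, the usual pattern is to stage a job's working data on /scratch, then copy the results worth keeping back to /home and delete the temporary files. The Python sketch below outlines that post-job step; the per-user directory layout and the SLURM_JOB_ID environment variable are illustrative assumptions, not a prescribed convention.

```python
import os
import shutil
from pathlib import Path

# Hypothetical layout: a per-user scratch folder named after the job ID.
# SLURM_JOB_ID is assumed here; adjust to whatever your job script provides.
job_id = os.environ.get("SLURM_JOB_ID", "interactive")
scratch_dir = Path("/scratch") / os.environ["USER"] / f"job_{job_id}"
results_dir = Path.home() / "results" / f"job_{job_id}"

scratch_dir.mkdir(parents=True, exist_ok=True)

# ... run the I/O-heavy computation with scratch_dir as its working directory ...

# Afterwards, copy what you need back to /home and remove the scratch files.
results_dir.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(scratch_dir, results_dir, dirs_exist_ok=True)
shutil.rmtree(scratch_dir)
print(f"Results copied to {results_dir}; {scratch_dir} cleaned up.")
```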

Configuration of the SCRATCH storage:

4x ThinkSystem SR630 V2(1U):

  • 2x 3rd Gen. Intel® Xeon® Scalable Processors Gold 6338: 2.00GHz/32-core/205W
  • 16x 16GB DDR4-3200MHz 8 ECC bits
  • 2x 480GB M.2 SATA SSD (HW RAID 1)
  • 2x InfiniBand ThinkSystem Mellanox ConnectX-6 HDR/200GbE, QSFP56 1-port, PCIe4
  • 1x Ethernet adapter ConnectX-6 10/25 Gb/s Dual-port SFP28, PCIe4 x16, MCX631432AN-ADAB
  • 2x ThinkSystem 430-16e SAS/SATA 12Gb HBA
  • 2x 1100W Power Supply
  • OS CentOS Linux 7

Fig. 3. Top view of ThinkSystem SR630 V2(1U) unit

2x DE4000F 2U24 SFF All Flash Storage Array:

  • 64GB controller cache
  • 24x 2,5" 15.36TB PCIe 4.0 x4/dual port x2
  • RAID level 5
  • 8x 12 Gb SAS host ports (Mini-SAS HD, SFF-8644) (4 ports per controller)
  • 8x 10/25 Gb iSCSI SFP28 host ports (DAC or SW fiber optics [LC]) (4 ports per controller)
  • 8x 8/16/32 Gb FC SFP+ host ports (SW fiber optics [LC]) (4 ports per controller)
  • 1x 1 GbE port (UTP, RJ-45) per controller for out-of-band management

Fig. 4. Front view of DE4000F 2U24 unit

Fig. 5. Back view of DE4000F 2U24 unit

Work

Like the scratch directory, the /work directory serves as temporary data storage during computations, with the requirement that data must be moved off and deleted once the job is completed. The /work directory is the local storage of the compute node and is accessible only during an active job. We do not offer backup services for the /work directory. The capacity of the /work directory depends on the node number: nodes 001-049 and 141-146 have a 3.5 TB /work capacity, while the remaining nodes have a 1.8 TB capacity. There are no user or project quotas.

Access         Mountpoint  Limit per user  Backup  Net capacity  Throughput  Protocol
Compute nodes  /work       none            no      1.8 / 3.5 TB  XXX GB/s    XFS
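
Because /work exists only for the lifetime of a job and its size differs between node groups, a job may want to fall back to /scratch when the node-local disk is missing or too small. The sketch below shows one way to make that choice in Python; the 100 GB threshold and the per-user /scratch subdirectory are illustrative assumptions, not site policy.

```python
import os
import shutil
from pathlib import Path

def pick_fast_storage(min_free_gb: float = 100.0) -> Path:
    """Prefer the node-local /work disk when present and roomy enough,
    otherwise fall back to a per-user directory on the shared /scratch.

    The 100 GB default is an arbitrary example threshold.
    """
    work = Path("/work")
    if work.exists() and shutil.disk_usage(work).free / 1e9 >= min_free_gb:
        return work
    return Path("/scratch") / os.environ["USER"]

staging = pick_fast_storage()
staging.mkdir(parents=True, exist_ok=True)
print(f"Staging temporary files under: {staging}")
```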

The technical specifications of individual computing nodes can be found in the following table. For more detailed information about the compute nodes, please refer to the compute nodes section.

Feature      n[001-048]                              n[049-140]
Processor    2x Intel Xeon Gold 6338 @ 2.00 GHz      2x Intel Xeon Gold 6338 @ 2.00 GHz
RAM          256 GB DDR4 @ 3200 MHz                  256 GB DDR4 @ 3200 MHz
Disk         3.84 TB NVMe SSD @ 5.8 GB/s, 362 kIOPS  1.92 TB NVMe SSD @ 2.3 GB/s, 166 kIOPS
Network      100 Gb/s HDR InfiniBand                 100 Gb/s HDR InfiniBand
Performance  ????? GFLOP/s per compute node          ????? GFLOP/s per compute node

Last update: October 23, 2023
Created by: marek.steklac