Storage Overview¶
Effective data management is essential for ensuring high performance and productivity when working on the Devana HPC cluster. This guide outlines the available storage systems, their intended uses, and best practices for optimal usage.
No Backups Available
There are no backup services for any directory (/home, /projects, /scratch, /work). Users are responsible for safeguarding their data.
Storage Systems¶
Upon logging into the Devana cluster, multiple storage locations are available, each designed to support specific aspects of computational workflows:
Overview of Available Filesystems on Devana
- /home
    - A personal directory unique to each user.
    - Intended for storing personal results.
- /projects
    - A shared directory accessible to all project members.
    - Used for storing project-related results.
- /scratch
    - A shared directory designed for large files, accessible to all project members.
    - Intended for calculations involving files that exceed the local disk capacity.
- /work
    - Local storage on each compute and GPU node.
    - Suitable for calculations with files that fit within the local disk capacity.
    - Only accessible during an active job.
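As a quick orientation, the sketch below lists the filesystems backing these locations using standard Linux tools; /work is omitted because it exists only on a compute node during a job, and the exact output will vary.

```bash
# Show the filesystems and capacities behind each shared storage location.
# /work is node-local and only present on compute nodes while a job runs.
df -h /home /projects /scratch
```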
Home¶
The /home directory is the default storage location after login and contains each user's personal directory. A quota of 1 TB per user is enforced.
For details on storage quotas, refer to the home quotas section.
Data Retention Policy
When a project concludes, data in the home directory is retained for 3 months.
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /home | 1 TB | No | 547 TB | 3 GB/s write, 6 GB/s read | NFS |
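A minimal sketch for checking how much of the home quota you are currently using, relying only on standard Linux tools (the authoritative figures come from the home quotas section):

```bash
# Summarize current usage of your home directory against the 1 TB quota
# (du walks the whole directory tree, so this can take a while).
du -sh "$HOME"
```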
Projects¶
Each active project is allocated a directory under /projects, accessible from both login and compute nodes. Currently, there are no storage quotas for this directory.
Data Retention Policy
Data in /projects is preserved for 3 months after the project concludes.
To check your project’s directory within /projects, use the following command:

```
id
uid=187000083(user1) gid=187000083(user1) groups=187000083(user1),187000062(user1)
```
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /projects | None | No | 269 TB | XXX GB/s | NFS |
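Building on the id output above, the following sketch shows one way to identify your project directory; it assumes directories under /projects are owned by the corresponding project group, which may differ in practice.

```bash
# List your group memberships; project directories under /projects are
# typically owned by the corresponding project group (assumption).
id -Gn
# Compare against the group owner shown for each project directory.
ls -ld /projects/*
```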
Scratch¶
The /scratch directory provides temporary storage for computational data and is implemented as a BeeGFS parallel filesystem with 100 Gb/s InfiniBand connectivity.
User Responsibility
Users are required to transfer important data from /scratch to /home or /projects once calculations are complete, and to remove any temporary files.
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Login & Compute Nodes | /scratch | None | No | 269 TB | XXX GB/s | BeeGFS |
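A minimal sketch of this cleanup step, with <project_id> and the run directory names as placeholders:

```bash
# Copy important results to the shared project directory, then remove the
# temporary job data from scratch (paths are illustrative placeholders).
rsync -av /scratch/<project_id>/my_run/results/ /projects/<project_id>/my_run_results/
rm -rf /scratch/<project_id>/my_run
```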
Work¶
The /work directory, similar to /scratch, is a temporary storage space specifically for calculations. However, it consists of local storage on individual compute nodes, accessible only during an active job.
Node-Specific Capacity
- Nodes 001-048 and 141-148 offer 3.5 TB of /work storage.
- Other nodes (049-140) provide 1.5 TB.
| Access | Mountpoint | Per-User Limit | Backup | Total Capacity | Performance | Protocol |
|---|---|---|---|---|---|---|
| Compute Nodes | /work | None | No | 1.5/3.5 TB | XXX GB/s | XFS |
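To illustrate typical /work usage, below is a sketch of a Slurm batch script that stages data to the node-local directory and copies results back before the job ends. The /work/$SLURM_JOB_ID path follows the table in the next section; my_application and the project paths are placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=local-work-example
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Node-local working directory, valid only while this job is running.
WORKDIR=/work/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# Stage input data from the shared project directory (placeholder paths).
cp -r /projects/<project_id>/inputs .

# Run the calculation against the fast local disk
# (my_application is hypothetical, e.g. provided by a module).
my_application inputs/ > output.log

# Copy results back before the job ends; data in /work is not retained.
cp output.log /projects/<project_id>/
```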
For additional hardware details, visit the Storage Hardware Section.
Where to Store Data?¶
Storage locations are categorized based on their intended use.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /home/username/ | 1 TB | 3 months after project ends | NFS |

Details

A personal home directory. Check the path with echo $HOME.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /projects/<project_id> | Unlimited | 3 months after project ends | NFS |

Details

A shared project directory accessible to all project members.
| Path (Mountpoint) | Quota | Retention | Protocol |
|---|---|---|---|
| /scratch/<project_id> | Unlimited | 3 months after project ends | BeeGFS |
| /work/$SLURM_JOB_ID | Unlimited | Automatically deleted after job completion | XFS |

Details

Temporary storage directories for calculation data:

- /scratch/<project_id> – Shared scratch directory, available from all compute nodes.
- /work/$SLURM_JOB_ID – Local storage, specific to the allocated compute node and accessible only during a running job.
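For the shared scratch variant, a minimal sketch of setting up a per-job subdirectory; the layout is a suggested convention rather than a cluster requirement.

```bash
# Create a per-job subdirectory in shared scratch so that concurrent jobs
# do not interfere (<project_id> is a placeholder; any unique name works).
SCRATCHDIR=/scratch/<project_id>/$SLURM_JOB_ID
mkdir -p "$SCRATCHDIR"
cd "$SCRATCHDIR"
```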
Where to Run Calculations?¶
| Mountpoint | Capacity | Accessible From | Performance (Write/Read) |
|---|---|---|---|
| /home | 547 TB | Login & Compute Nodes | 3 GB/s / 6 GB/s |
| /projects | 269 TB | Login & Compute Nodes | XXX |
| /scratch | 269 TB | Login & Compute Nodes | 7 GB/s / 14 GB/s |
| /work | 3.5 TB | Compute/GPU (Nodes 001-048, 141-148) | 3.6 GB/s / 6.7 GB/s |
| /work | 1.5 TB | Compute (Nodes 049-140) | 1.9 GB/s / 3.0 GB/s |
Choosing the Right Filesystem
The optimal filesystem depends on data volume, access patterns, and whether the data must be shared across nodes. In general, /work provides the best performance for workloads whose data fits within the node-local disk capacity.
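As a sketch of that trade-off inside a job script, the snippet below falls back to shared /scratch when the expected data volume exceeds the node-local capacity; the threshold, estimate, and paths are illustrative.

```bash
# Pick a working directory based on the expected data volume (values in GB
# are illustrative; 1500 GB matches the smallest /work capacity listed above).
EXPECTED_GB=500
LOCAL_LIMIT_GB=1500

if [ "$EXPECTED_GB" -lt "$LOCAL_LIMIT_GB" ]; then
    WORKDIR=/work/$SLURM_JOB_ID                   # node-local disk: fastest
else
    WORKDIR=/scratch/<project_id>/$SLURM_JOB_ID   # shared parallel filesystem
fi
mkdir -p "$WORKDIR"
cd "$WORKDIR"
```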