Directory Structure
Where to store data?
You can use the following directories for your data:
Path (mounted at) | Quota | Purging |
---|---|---|
/home/username/ | 1 TB | 3 months after the end of the project |
Description
Personal home directory; you can check the path with the `echo $HOME` command.
Path (mounted at) | Quota | Purging |
---|---|---|
/projects/<project_id> | unlimited | 3 months after the end of the project |
Description
Shared project directory for all project participants.
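Since /projects is shared by all project participants, you may want files created there to remain accessible to the whole group. A minimal sketch, assuming (this is an assumption, not documented above) that each project has a Unix group named after its <project_id>:

```shell
# Sketch: keep a shared project subdirectory group-accessible.
# ASSUMPTION: <project_id> is also the name of the project's Unix group.
mkdir -p /projects/<project_id>/shared_data
chgrp <project_id> /projects/<project_id>/shared_data  # hand the directory to the project group
chmod g+rwxs /projects/<project_id>/shared_data        # setgid bit: new files inherit the group
umask 007                                              # new files become group-readable/writable
```

The setgid bit on the directory is what makes files created by any member land in the project group rather than the creator's primary group.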
Path (mounted at) | Quota | Purging |
---|---|---|
/scratch/<project_id> | unlimited | 3 months after the end of the project |
/work/$SLURM_JOB_ID | unlimited | automatically after job termination |
Description
Directories for temporary files created during calculations, accessible only from compute nodes while the job is running.
- /scratch/<project_id>: shared scratch directory accessible from all compute nodes
- /work/$SLURM_JOB_ID: local scratch directory, unique to each compute node (see more below)
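A typical pattern for the per-job /work directory is to stage input data in at job start, compute on the fast local scratch, and copy results back before the job ends, because /work/$SLURM_JOB_ID is purged automatically at termination. A minimal job-script sketch; the program name, file names, and time limit are illustrative placeholders, not values from this system:

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=01:00:00

# Sketch: use the node-local per-job scratch directory.
WORKDIR=/work/$SLURM_JOB_ID
cp /projects/<project_id>/input.dat "$WORKDIR"/   # stage input onto local scratch
cd "$WORKDIR"
./my_program input.dat > output.dat               # run the calculation locally
cp output.dat /projects/<project_id>/             # copy results back BEFORE the job ends;
                                                  # /work/$SLURM_JOB_ID is deleted at termination
```

Anything left in $WORKDIR when the job terminates is lost, so the final copy step is not optional.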
Determine quotas
Users can determine their quota usage by running the following command:
quota -s
User quota on /home:
Disk quotas for user <user> (uid <user_id>):
Filesystem                  space  quota  limit  grace  files   quota  limit  grace
store-nfs-ib:/storage/home   507G  1000G  1024G         2196k*  2048k  2098k
Alternatively, users can view the size of the respective folders by running the following command:
du -sh /home/<user>/<dir>
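When hunting for what is filling the quota, it can help to sort the contents of a directory by size. A small sketch using standard GNU tools (`<user>` is a placeholder for your username):

```shell
# List the ten largest items under your home directory, largest last.
# sort -h understands the human-readable sizes (K, M, G) that du -sh emits.
du -sh /home/<user>/* 2>/dev/null | sort -h | tail -n 10
```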
Exceeded quotas
If you exceed your storage quotas, you have several options:
- Remove any unneeded files and directories.
- Tar and compress files/directories. Since write permissions are suspended once the quota is exceeded, you can use your /scratch subdirectory to tar and gzip the files, remove them from /home, and then copy the archive back to /home or store it locally:

      tar czvf /scratch/<user>/data_backup.tgz /home/<user>/<Directories-files-to-tar-and-zip>

  Verify the archive and delete the original files:

      cd /scratch/<user>
      tar xzvf data_backup.tgz   # verify the content
      rm -r /home/<user>/<Directories-files-to-tar-and-zip>

  Move the data from scratch to avoid deletion:

      cp /scratch/<user>/data_backup.tgz /projects/<user>/data_backup.tgz
      rm /scratch/<user>/data_backup.tgz

  This procedure frees only a limited amount of space, so it should immediately be followed by downloading the data and storing it locally.
- Have the principal investigator (PI) justify the requirements for additional storage to be approved for the project.
Total capacity of mountpoints

Mountpoint | Capacity | Access nodes | Type |
---|---|---|---|
/home | 547 TB | Login and Compute/GPU | shared |
/projects | 269 TB | Login and Compute/GPU | shared |
/scratch | 269 TB | Login and Compute/GPU | shared |
/work | 3.5 TB | Compute/GPU: n[001-048], n[141-148] | local capacity on specified nodes |
/work | 1.5 TB | Compute: n[049-140] | local capacity on specified nodes |