High Performance Computing

Disk Storage on Jupiter

All jobs to be run on Jupiter are queued and run via your clone: you never log in to Jupiter itself. Jupiter mounts its user disks on the clones as listed in the table that follows. It is vital that users understand the disk layout AND the associated block sizes on Jupiter, so that jobs run efficiently and do not have an unfair impact on other users.

The disk layout is as follows, with notes and explanations after the table:

Label      File System   Disk Size   Block Size   Notes
/scratch   ReiserFS      44 GB       4 KB         Local disk on the slave node. Use for high I/O
/ufs       UFS           2.75 TB     8 KB         Used for small files, log files, scripts etc.
/qfs1      QFS           11 TB       128 KB       Striped across 4 storage servers
/qfs2      QFS           8.2 TB      64 KB        Striped across 3 storage servers
/qfs3      QFS           5.5 TB      64 KB        Striped across 4 storage servers
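
To check how much space is free on a given file system, or to confirm its block size, you can query the mount point directly from your clone. A minimal sketch, assuming a GNU/Linux-style df and stat (the exact flags may differ on other systems):

    df -h /ufs /qfs1 /qfs2 /qfs3    # free space on each mount
    stat -f /qfs1                   # file system type and block size of one mount
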
  • /ufs is a UFS file system with a small block size. It is a central file system mounted everywhere in the facility; however, its performance is very poor. One person running jobs with high I/O can very easily choke the storage network and affect EVERYBODY, so only low-intensity jobs should be run on /ufs. Also, /ufs is NOT a central file repository: when you have finished running, move your data off /ufs and onto /data or /home on your clone.

  • The QFS file systems are fast and have a much bigger block size. They are especially suitable for jobs which
    1. generate large files
    2. generate a lot of files per second
    3. generate large files AND generate them at a high rate

    Again, the QFS file systems are NOT a central file repository: when you have finished running, move your data off QFS and onto /data or /home on your clone (see the sketch below). These file systems are not backed up, and if the storage systems have problems there is the potential for data loss. If you have particularly large amounts of data that are not suitable for storage on the clone, please contact hpc AT nottingham.ac.uk for assistance.
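
    For example, a typical clean-up after a run might look like the following. This is an illustrative sketch only: the username and run directory names are placeholders, not real paths on the system.

        cp -r /qfs1/username/run01 /home/username/   # copy finished results back to your clone
        rm -r /qfs1/username/run01                   # then clear your working area on QFS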

  • /scratch is to be used for high I/O rates where users are generating up to about 20GB. To use /scratch, write out to /scratch/username (created for you). In your submission script you must include a line at the bottom to move all your data onto /ufs, for example, so that you can see it (a sketch is given below). You must also remember to delete all your data in /scratch at the end of every run so that the next user can use the /scratch space. /scratch is also cleaned automatically every 60 minutes: users who do not have jobs on a particular slave node will have their /scratch/username directories emptied. MOLPRO users MUST use /scratch.
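
    The tail of a submission script that uses /scratch might end with lines like the following. This is a sketch under assumptions: output.dat and the /ufs destination are illustrative only, and $USER stands for your username.

        # ... job commands above write their output into /scratch/$USER ...
        cp /scratch/$USER/output.dat /ufs/$USER/   # move results somewhere visible, e.g. /ufs
        rm -rf /scratch/$USER/*                    # clean /scratch for the next user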

Note: Device Filling

There are two maximums on a file system: physical disk space (bytes) and number of files (inodes). On /ufs there is 2.7TB of disk space and 3 million inodes. As soon as either limit is reached, the 'device is full'. If you have 10 files, for example, that's 10 inodes; if you tar them up, that's just one file and therefore 1 inode. If you can tar up your files you will immediately free a large number of inodes.
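
For example, assuming a directory of many small output files called results (an illustrative name only), the following would reduce it to a single file and a single inode; df -i is the GNU df flag for inode usage, if available on your system:

    tar czf results.tar.gz results/   # bundle many small files into one archive (one inode)
    rm -r results/                    # remove the originals to release their inodes
    df -i /ufs                        # check how many inodes are in use and free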