
Storage

Euler offers several storage options, for different needs:

| Name | Filesystem | Storage Technology | Location |
|------|------------|--------------------|----------|
| Home | NFS v3 | SSD (NVMe) | attached to the network |
| Project | NFS v3 | HDD with NVMe and SAS SSD caching | attached to the network |
| Scratch | Lustre | SSD (NVMe) | attached to the network |
| Work | Lustre | HDD | attached to the network |
| Tmp | XFS | SSD (NVMe) | attached to the compute node |

| Name | Path | Environment Variable holding path | Access |
|------|------|-----------------------------------|--------|
| Home | /cluster/home/<username> | $HOME | personal |
| Project | /cluster/project/<group> | - | shared |
| Scratch | /cluster/scratch/<username> | $SCRATCH | personal |
| Work | /cluster/work/<group> | - | shared |
| Tmp | /tmp | $TMPDIR | shared |

| Name | Capacity | File/folder limit | Snapshots | Backups | Automatic deletion |
|------|----------|-------------------|-----------|---------|--------------------|
| Home | 50 GB per user | 500'000 per user | hourly and daily | nightly | when user's ETHZ account is deleted |
| Project | depends on shareholder group | depends on shareholder group | hourly and daily | multiple times per week | never |
| Scratch | 2.5 TB per user | 1'000'000 per user | no | no | after 2 weeks |
| Work | depends on shareholder group | depends on shareholder group | no | multiple times per week | never |
| Tmp | depends on compute node | no | no | no | when Slurm job terminates |

| Name | Best used for |
|------|---------------|
| Home | private long-term storage of important files |
| Project | long-term group storage of critical data |
| Scratch | short-term storage |
| Work | high-performance, medium-term group storage of large files |
| Tmp | low latency, node-local storage for the duration of a Slurm job |

Project and Work

Only members of a shareholder group that invested in the respective storage system have access to Project and/or Work.

Quotas

Quotas restrict how much storage can be used. Each storage type has a capacity limit and a limit on the number of files and folders. Both limits are enforced by a soft quota and a hard quota.

If the soft quota is exceeded, a warning is issued, but files can still be created. Exceeding the soft quota starts a one-week grace period in which to reduce usage. If usage remains above the soft quota after the grace period, you cannot create new files until usage falls below the limit.

If the hard quota is reached (typically 10% above the soft quota), you cannot write any new data until usage falls below the limits.

You can get an overview of all your storage usage with the command:

lquota

Without a path argument, it shows the quotas for your home and scratch directories. For project and work, you need to specify the path to the corresponding storage share.
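For example, assuming lquota accepts the path of a storage share as an argument (the group names below are placeholders):

# show home and scratch quotas
lquota
# show the quota of a specific group share
lquota /cluster/project/<group>
lquota /cluster/work/<group>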

Snapshots

Snapshots can be accessed in every subdirectory with the command

cd .snapshot

where users can then copy back files from any of the snapshots. These folders are only visible/mounted when accessed and therefore do not show up in a listing of their parent directory.
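A minimal sketch of restoring a file from a snapshot; the snapshot and file names are placeholders to replace with the ones listed on your system:

# enter the hidden snapshot directory of the current folder
cd .snapshot
# list the available snapshots
ls
# copy a file back from one of the snapshots into the original directory
cp <snapshot_name>/<filename> ..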

Backups

To restore files from a backup, contact cluster support.

Personal storage

Home

/cluster/home/<username>
Every user has a home directory. Its content is private. It is limited to 50 GB and 500'000 files/directories. Snapshots are created hourly and daily, and backups are made every night. Snapshots can be accessed in every subdirectory with the command
cd .snapshot
where users can then copy back files from any of the snapshots. Restoring from tape backup requires a request to cluster support. The home directory can be used for safe long-term storage. Its path is stored in the environment variable $HOME.

Scratch

/cluster/scratch/<username>
Every user has a personal scratch directory. It is limited to 2.5 TB and 1'000'000 files/directories. Files are automatically deleted after 2 weeks and there is no backup. It can be used for short-term storage of large datasets. Its path is stored in the environment variable $SCRATCH. The directory is only visible/mounted when accessed and therefore does not show up in a listing of /cluster/scratch/.

Carefully read the usage rules before using scratch, to avoid misunderstandings.

Usage Rules
--------------------------------------------------------------------------------
 U S A G E   R U L E S   F O R   P E R S O N A L   G L O B A L   S C R A T C H
--------------------------------------------------------------------------------

Your personal global scratch directory on Euler, `/cluster/scratch/username`, can be used if you need more disk space than available in your home directory. You can store at most **2.5 TB** in at most **1 million files**. This storage system is optimized for large parallel HPC applications. The storage is non-permanent and is not backed up. As described below, older files are continuously removed.

If you want to use it you **MUST** read and respect the following **RULES**:

## Rules

1. **Out of respect for other users** in a shared environment, please **CLEAN UP** your personal scratch directory and promptly remove the files that are no longer needed for your computations.

2. **Files older than 15 days will be automatically DELETED** without prior notice.

3. Any attempt to change the time stamp of files or directories to prevent them from being purged automatically is subject to technical or administrative action, up to and including removing your access to this storage system or suspending your account.

4. The **GLOBAL scratch file systems are optimized for LARGE data files**. If you are reading or writing many small files, or large files by small increments, you will achieve better performance by using a LOCAL scratch file system or (space permitting) your home directory.

5. Like all work directories, global scratch file systems are **NOT BACKED UP**. The safekeeping of your data is **YOUR responsibility**.

---

**By using the global scratch file systems of Euler, you are implicitly accepting these rules. People who do not respect these rules, or try to circumvent them, will face administrative actions such as the suspension of their Euler account.**

---

*2014-10-29 / Cluster Support*
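As a practical aid for rule 1, you can list files in your scratch directory that are approaching the deletion age. This is a minimal sketch, assuming the purge is based on the file modification time:

# list files in scratch last modified more than 10 days ago
find $SCRATCH -type f -mtime +10 -ls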

Group storage (shareholders only)

Shareholders can purchase any amount of additional storage inside Euler (contact cluster support for more info). Access rights and restrictions are managed by the shareholder group.

Project

/cluster/project/<groupname>
The project file system is designed for safe, long-term storage of critical data. Snapshots are created hourly and daily, and backups are stored multiple times per week with a retention time of 90 days.

Work

/cluster/work/<groupname>
The work file system is a Lustre file system. It is optimized for I/O performance and can be used for short-, medium- and long-term storage of large files. Backups are made multiple times per week. Recovering data from backup requires a request to cluster support.

Data stored in a (sub)directory named nobackup is excluded from the backup. Such a directory can be located on any level of the directory hierarchy:

/cluster/work/YOUR_STORAGE_SHARE/nobackup
/cluster/work/YOUR_STORAGE_SHARE/project101/nobackup
/cluster/work/YOUR_STORAGE_SHARE/project101/data/nobackup/filename
/cluster/work/YOUR_STORAGE_SHARE/project101/data/nobackup/subdir/filename
Backing up large, frequently changing datasets can significantly increase backup size and slow down both backup and restore operations. To ensure that critical data can be restored quickly when needed, please exclude data that does not require backup. This helps keep backups efficient and focused on important data.
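For example, large intermediate results can be kept in a nobackup subdirectory of your share; the project and data directory names below are hypothetical placeholders:

# create a directory that is excluded from the backup
mkdir -p /cluster/work/YOUR_STORAGE_SHARE/project101/nobackup
# move data that does not need to be backed up into it
mv /cluster/work/YOUR_STORAGE_SHARE/project101/intermediate_results /cluster/work/YOUR_STORAGE_SHARE/project101/nobackup/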

The work directory is only visible/mounted when accessed and therefore does not show up in a listing of /cluster/work/.

Tmp

Euler compute nodes are equipped with local storage, providing fast, low-latency storage for I/O-intensive applications during Slurm jobs. When you request scratch space via Slurm, a unique directory is automatically created for your job to prevent conflicts with other users. The path to this directory is stored in the environment variable $TMPDIR. After your job finishes, Slurm automatically deletes the directory and its content. Note that node-local scratch is temporary and not backed up.

Example jobscript to use local scratch:

#!/usr/bin/bash
#SBATCH -n 1
#SBATCH --time=01:00:00 
#SBATCH --mem-per-cpu=2g
#SBATCH --tmp=100g 

# Copy files to local scratch
rsync -aq ./ ${TMPDIR}
# Run commands
cd $TMPDIR
# Command to run the job that processes the data
do_my_calculation
# Copy new and changed files back.
# Slurm saves the path of the directory from which the job was submitted in $SLURM_SUBMIT_DIR
rsync -auq ${TMPDIR}/ $SLURM_SUBMIT_DIR
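Such a script is typically saved to a file and submitted with sbatch; the file name below is a placeholder:

sbatch job_with_local_scratch.sh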

External storage

Using external storage on Euler

Please note that external storage is convenient for bringing data into the cluster or for storing data over a longer period. However, we recommend not processing data directly from external storage systems in batch jobs on Euler, as this can be very slow and can put a high load on the external storage system, potentially resulting in a denial of service (DoS).

Central NAS/CDS

Groups who purchased storage on the central NAS of ETH or CDS can request the IT Services storage group to export it to Euler. To use central NAS/CDS shares on Euler, the following requirements must be met:

  • The NAS/CDS share must be mountable via NFS (shares supporting only CIFS cannot be mounted).
  • The NAS/CDS share must be exported to the subnet of the HPC clusters (contact ID Systemdienste for an NFS export).
  • Set file and directory permissions carefully if you do not want other cluster users to have read/write access.

NAS/CDS shares are mounted automatically when accessed at /nfs/<servername>/<sharename>. A typical NFS export entry for Euler:

# cat /etc/exports
/export 129.132.93.64/26(rw,root_squash,secure) 10.205.0.0/16(rw,root_squash,secure) 10.204.0.0/16(rw,root_squash,secure)

If your NAS share is on IBM Spectrum Scale, also request these options:

PrivilegedPort=TRUE
Manage_Gids=TRUE

These options should only apply to the Euler subnet. For subnet and IP address details, see the network page. Once mounted, NAS shares are accessible from all compute nodes.
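For example, simply accessing the mount path triggers the automount; the server and share names are placeholders:

# accessing the path mounts the NAS share on demand
ls /nfs/<servername>/<sharename>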

Local NAS

Groups who operate their own NAS can export a shared file system via NFSv3 to Euler. Requirements:

  • NAS must support NFSv3 (currently the only supported version).
  • User and group IDs on the NAS must be consistent with ETH usernames and groups.
  • The NAS must be exported to the subnet of the HPC clusters.
  • Set permissions carefully if you do not want other cluster users to have read/write access.

The mount point is:

/nfs/<servername>/<sharename>

Example NFS export entry:

# cat /etc/exports
/export 129.132.93.64/26(rw,root_squash,secure) 10.205.0.0/16(rw,root_squash,secure) 10.204.0.0/16(rw,root_squash,secure)

For subnet and IP address details, see the network page.

The share is mounted automatically when accessed.

Central LTS (Euler)

Groups with storage on the central LTS of ETH can request the ITS SD backup group to export it to the LTS nodes of Euler. Requirements:

  • The LTS share must be mountable via NFS (CIFS-only shares are not supported).
  • The LTS share must be exported to the LTS nodes of the HPC clusters (contact ITS SD Backup for an NFS export).
  • Set file and directory permissions carefully if you do not want other cluster users to have read/write access.

Export the LTS share to:

129.132.93.70(rw,root_squash,secure)
129.132.93.71(rw,root_squash,secure)

To access your LTS share, log in to the LTS nodes with ssh <username>@lts.euler.ethz.ch. LTS shares are mounted automatically at /nfs/lts11.ethz.ch/shares/<sharename(_repl)> or /nfs/lts21.ethz.ch/shares/<sharename(_repl)>, depending on whether your share is on lts11.ethz.ch or lts21.ethz.ch.
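For example (user and share names are placeholders):

# log in to an LTS node
ssh <username>@lts.euler.ethz.ch
# accessing the path mounts the LTS share on demand
ls /nfs/lts11.ethz.ch/shares/<sharename>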