Queue management system Slurm #
We use the Slurm resource manager to schedule all user jobs on our cluster. A job is an ordinary shell script which is submitted to the queue with the `sbatch` command. This script contains the information about the requested resources (number of nodes, amount of RAM, required time). You can use the usual bash commands inside this script. The system executes the script only on the first reserved core; the user should take care of launching their programs on the other cores, e.g. with the `mpirun` tool.
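For instance, a skeleton of an MPI job script could look like the sketch below (it assumes an MPI library integrated with Slurm; `./my_app` is a placeholder for your own program):

```bash
#!/bin/bash
#SBATCH --nodes=2             # two nodes
#SBATCH --ntasks-per-node=4   # four processes per node

# This part of the script runs on the first reserved core only;
# mpirun starts ./my_app on all 8 allocated cores.
mpirun ./my_app               # placeholder for your program
```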
Let’s start with a simple example: a `sleep` job using just one core, with the maximum execution time equal to 5 minutes. Create the file `sleep.sh` with the following content:
```bash
#!/bin/bash
#SBATCH --job-name sleep
#SBATCH --nodes=1
#SBATCH --time=05:00

echo "Start date: $(date)"
sleep 60
echo " End date: $(date)"
```
You can enqueue this job with `sbatch sleep.sh`.
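If the submission succeeds, `sbatch` prints the ID assigned to the job (the ID and dates below are just examples):

```bash
$ sbatch sleep.sh
Submitted batch job 123456
$ cat slurm-123456.out    # after the job finishes, about a minute later
Start date: Mon Jan  1 12:00:00 2024
 End date: Mon Jan  1 12:01:00 2024
```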
Lines starting with the `#SBATCH` prefix define the parameters for the `sbatch` command. You can also specify these parameters explicitly on the command line, e.g.:
```bash
sbatch --job-name sleep --nodes=1 --time=5:00 sleep.sh
```
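Note that options given on the command line take precedence over the `#SBATCH` lines in the script, so you can, for example, rerun the same script with a different time limit without editing it:

```bash
# --time here overrides the "#SBATCH --time=05:00" line inside sleep.sh
sbatch --time=10:00 sleep.sh
```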
The main parameters for `sbatch`:

- `-D, --chdir=<dir>` – the working dir of the job; by default the current dir is used.
- `-e, --error=<file>` and `-o, --output=<file>` – the file names for the standard error (`stderr`) and standard output (`stdout`) streams; by default both streams are joined in one file `slurm-<job_id>.out` in the current dir.
- `--mail-type=<type>` – select the types of events for notifying the user via e-mail: `NONE` – no notifications, `FAIL` – in case of job abortion, `BEGIN` – job start, `END` – job finish; `ALL` and others are also available. You can select several events; more details in the manual (`man sbatch`).
- `--mail-user=<e-mail>` – the e-mail address for notifications; by default the owner of the job.
- `-J, --job-name=<name>` – the name of the job.
- `-p, --partition=<queue>` – the execution queue for the job. Several queues are available (see “Available queues” below); if none is specified, the default one is used.
- `--ntasks-per-node=M` – run `M` processes on each node.
- `--cpus-per-task=N` – allocate `N` CPU cores for each process (e.g. for MPI+OpenMP hybrid jobs; see the sketch after this list). The default is 1 CPU core per process.
- `--mem=<size>` – the required memory size per node. The size is a number with one of the suffixes `K`, `M`, `G` or `T`; e.g. `--mem=16G` will request 16 GB of memory on each node.
- `-t, --time=<time>` – the maximum execution time for the job; the job will be terminated after this time is exceeded. The default time limit is 6 hours, and the maximum time you can request is 24 hours. E.g., `--time=1:45:00` will stop the execution of the job after 1 hour and 45 minutes.
- `-C, --constraint=<list>` – a comma-delimited list of extra constraints for nodes. Current constraints are: `ib` – working InfiniBand; `avx`, `avx2`, `avx512` – CPUs with support for the AVX, AVX2 and AVX-512 instruction set extensions.
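As an illustration of combining these parameters, here is a sketch of a hybrid MPI+OpenMP job file; the binary name `./my_hybrid_app` and the chosen resource values are assumptions, so adapt them to your program:

```bash
#!/bin/bash
#SBATCH --job-name hybrid-test
#SBATCH --nodes=2               # two nodes
#SBATCH --ntasks-per-node=2     # two MPI processes per node
#SBATCH --cpus-per-task=6       # six OpenMP threads per MPI process
#SBATCH --mem=16G               # 16 GB of memory on each node
#SBATCH --time=2:00:00          # terminate after two hours
#SBATCH --constraint=ib,avx2    # only nodes with InfiniBand and AVX2
#SBATCH --mail-type=END,FAIL    # e-mail on finish or abortion

# Tell OpenMP how many cores each MPI process owns.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun ./my_hybrid_app          # placeholder for your hybrid binary
```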
Some useful environment variables defined by the Slurm management system:
- `SLURM_SUBMIT_DIR` – the directory from which the user submitted the job.
- `SLURM_JOB_ID` – the unique ID of the job.
- `SLURMD_NODENAME` – the hostname of the current node.
- `SLURM_NTASKS` – the number of allocated tasks (processes) for the job.
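A minimal sketch of how these variables can be used inside a job script, assuming the variable names listed above:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=05:00

cd "$SLURM_SUBMIT_DIR"    # run in the directory the job was submitted from
echo "Job $SLURM_JOB_ID started on $SLURMD_NODENAME"
echo "Allocated tasks: $SLURM_NTASKS"
```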
You can find more information in the manual (`man sbatch`).
You can check the status of the jobs with the `squeue` command. You will get the information for a specific user by running `squeue -u <user>`. The current state of each job is shown in the ST column.
- PD — the job is waiting in the queue for requested resources.
- R — the job is running.
- Other states are documented in the manual (`man squeue`).
You can get the list of nodes allocated for your job in the NODELIST column. `squeue -l` will also show the requested time for each job, and `squeue --start` will show an estimated start time for pending jobs.
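For example, a `squeue` session for a hypothetical user `alice` might look like this (the job IDs, names and node are made up; the columns are the default `squeue` output):

```bash
$ squeue -u alice
  JOBID PARTITION     NAME     USER ST   TIME  NODES NODELIST(REASON)
 123457   x12core    relax    alice PD   0:00      2 (Resources)
 123456   x12core    sleep    alice  R   0:41      1 cl1n022
```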
You can remove your job from the queue, e.g. in case you requested too many cores, with the command `scancel <job_id>`. You can also remove a running job the same way; in this case the job will be terminated first.
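For example (the job ID below is hypothetical):

```bash
scancel 123456      # remove the job with ID 123456 from the queue
scancel -u $USER    # remove all jobs of the current user at once
```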
You can monitor the general statistics of cluster usage with the `slurmtop` program. Press the `q` key to exit.
Available queues #
We use several queues on the cluster. All compute nodes are divided into groups, and each queue can use nodes only from specific groups. The maximum execution time may be reduced in some queues.
|Queue|017–020|021–030|031–050|Total cores|Max walltime|
|---|---|---|---|---|---|
* Only 12 cores from `cl1n022` are available in this queue.
In order to submit the job to e.g. the `x12core` queue, you should use the parameter `-p x12core` for `sbatch`:
```bash
sbatch -p x12core -J big-problem --nodes=2 --ntasks-per-node=24 qbig.sh
```
Or you can add the following line to your job file:
```bash
#SBATCH -p x12core
```
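With that line in place, the header of `qbig.sh` from the example above could look like this (a sketch mirroring the command-line version):

```bash
#!/bin/bash
#SBATCH -p x12core              # target queue
#SBATCH -J big-problem          # job name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24

# ... the commands of the job itself go here
```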
If you do not specify the queue explicitly, the job is submitted to the default one.