Slurm migration

Slurm migration quick reference

Job script parameters

| Torque                   | Slurm                                  |
| ------------------------ | -------------------------------------- |
| #PBS -N myjob            | #SBATCH --job-name=myjob               |
| #PBS -l walltime=1:00:00 | #SBATCH --time=1:00:00                 |
| #PBS -l nodes=N:ppn=M    | #SBATCH --nodes=N --ntasks-per-node=M  |
| #PBS -l mem=Xgb          | #SBATCH --mem=Xgb                      |
| #PBS -l pmem=Xgb         | #SBATCH --mem-per-cpu=Xgb              |
| #PBS -q queue            | #SBATCH --partition=queue              |
| mpiexec                  | mpirun                                 |
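
As a sketch of how these directives combine, here is a minimal Slurm job script built from the table above; the job name, resource values, partition name, and application binary are placeholders, not site defaults:

```bash
#!/bin/bash
#SBATCH --job-name=myjob      # Torque: #PBS -N myjob
#SBATCH --time=1:00:00        # Torque: #PBS -l walltime=1:00:00
#SBATCH --nodes=2             # Torque: #PBS -l nodes=2:ppn=8
#SBATCH --ntasks-per-node=8
#SBATCH --mem=4gb             # Torque: #PBS -l mem=4gb
#SBATCH --partition=queue     # Torque: #PBS -q queue

# mpirun replaces mpiexec; ./my_app is a placeholder binary
mpirun ./my_app
```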

Submit and manage commands

| Torque           | Slurm                    |
| ---------------- | ------------------------ |
| qsub <jobscript> | sbatch <jobscript>       |
| qdel <jobid>     | scancel <jobid>          |
| qhold <jobid>    | scontrol hold <jobid>    |
| qrls <jobid>     | scontrol release <jobid> |
| qstat -u <user>  | squeue -u <user>         |
| pbstop           | slurmtop                 |
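
For example, a typical submit-and-manage session using the Slurm equivalents might look as follows; the job ID 12345 is illustrative:

```bash
sbatch myjob.sh          # submit the job script (Torque: qsub)
squeue -u $USER          # list your jobs (Torque: qstat -u)
scontrol hold 12345      # hold a pending job (Torque: qhold)
scontrol release 12345   # release the hold (Torque: qrls)
scancel 12345            # cancel the job (Torque: qdel)
```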

Notable differences

The first line of the job script must be a shebang line, #!<shell>, e.g.:

#!/bin/bash

Slurm jobs start in the submission directory rather than in the home directory.
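
Torque scripts therefore often begin with cd $PBS_O_WORKDIR; under Slurm that line is unnecessary. If you need a different working directory, sbatch accepts --chdir (the path below is a placeholder):

```bash
#SBATCH --chdir=/scratch/myproject   # start the job here instead of the submission directory
```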

Slurm jobs write stdout and stderr to a single combined output file by default.
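
To keep them separate, as under Torque, name the files explicitly with the --output and --error options (the file names below are placeholders):

```bash
#SBATCH --output=myjob.out   # stdout
#SBATCH --error=myjob.err    # stderr; without this line stderr goes to the --output file
```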

Use mpirun in Slurm jobs in place of mpiexec.
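
With a Slurm-aware MPI library, mpirun reads the task count and node list from the allocation, so Torque-style launch arguments are typically unnecessary; a sketch, with ./my_app as a placeholder:

```bash
# Torque: mpiexec -np 16 -machinefile $PBS_NODEFILE ./my_app
mpirun ./my_app
```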

For interactive jobs you can use the salloc command, e.g.:

salloc --ntasks=8 --time=1:00:00
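
Once the allocation is granted, salloc typically drops you into a shell; from there you can launch parallel steps with srun and release the allocation by exiting (./my_app is a placeholder):

```bash
salloc --ntasks=8 --time=1:00:00   # blocks until the allocation is granted, then opens a shell
srun ./my_app                      # run the 8 tasks inside the allocation
exit                               # end the shell and release the allocation
```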

Torque commands support

Running old Torque commands has not been supported since January 2023. Please use Slurm commands and parameters instead.