Slurm migration quick reference

Job script parameters

| Torque | Slurm |
|---|---|
| `#PBS -N myjob` | `#SBATCH --job-name=myjob` |
| `#PBS -l walltime=1:00:00` | `#SBATCH --time=1:00:00` |
| `#PBS -l nodes=N:ppn=M` | `#SBATCH --nodes=N --ntasks-per-node=M` |
| `#PBS -l mem=Xgb` | `#SBATCH --mem=Xgb` |
| `#PBS -l pmem=Xgb` | `#SBATCH --mem-per-cpu=Xgb` |
| `#PBS -q queue` | `#SBATCH --partition=queue` |
| `mpiexec` | `mpirun` |
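
For example, here is a minimal sketch of a complete Slurm job script assembled from the parameters above; the job name, resource values, partition name, and the `./myprog` executable are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=myjob      # was: #PBS -N myjob
#SBATCH --time=1:00:00        # was: #PBS -l walltime=1:00:00
#SBATCH --nodes=2             # was: #PBS -l nodes=2:ppn=8
#SBATCH --ntasks-per-node=8
#SBATCH --mem=4gb             # was: #PBS -l mem=4gb
#SBATCH --partition=queue     # was: #PBS -q queue

# Launch the MPI program on all allocated tasks (was: mpiexec ./myprog).
mpirun ./myprog
```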

Submit and manage commands

| Torque | Slurm |
|---|---|
| `qsub <jobscript>` | `sbatch <jobscript>` |
| `qdel <jobid>` | `scancel <jobid>` |
| `qhold <jobid>` | `scontrol hold <jobid>` |
| `qrls <jobid>` | `scontrol release <jobid>` |
| `qstat -u <user>` | `squeue -u <user>` |
| `pbstop` | `slurmtop` |
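
A typical submit-and-manage session might look like this; the job ID `12345` is illustrative:

```bash
sbatch myjob.sh          # submit; prints "Submitted batch job 12345"
squeue -u $USER          # list your pending and running jobs
scontrol hold 12345      # hold the job while it is still pending
scontrol release 12345   # release the hold
scancel 12345            # cancel the job
```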

Notable differences

The first line of a job script must be `#!<shell>`, e.g.:

```bash
#!/bin/bash
```

Slurm jobs start in the submission directory rather than in the home directory.
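
Torque scripts therefore often begin with `cd $PBS_O_WORKDIR`; under Slurm this line is unnecessary, but if you want to be explicit, the equivalent environment variable exists:

```bash
# Optional under Slurm; equivalent to Torque's $PBS_O_WORKDIR.
cd "$SLURM_SUBMIT_DIR"
```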

Slurm jobs combine stdout and stderr into a single output log file by default.
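
If you prefer separate files, the standard `--output` and `--error` options split them again; `%j` in the file name expands to the job ID:

```bash
#SBATCH --output=myjob.%j.out   # stdout only
#SBATCH --error=myjob.%j.err    # stderr only
```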

Use `mpirun` in Slurm jobs in place of `mpiexec`.
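
For example, inside a batch script `mpirun` can take its rank count from Slurm's environment; `$SLURM_NTASKS` holds the total number of allocated tasks, and `./myprog` is a placeholder:

```bash
# Start one MPI rank per allocated task.
mpirun -np "$SLURM_NTASKS" ./myprog
```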

For interactive jobs you can use the `srun --pty` command, e.g.:

```bash
srun --pty -p mix --ntasks=8 --time=1:00:00 /bin/bash
```

Torque command support

Compatibility tools such as `qsub`, `qinfo`, and others are installed for your convenience on the headnode h1.

These commands have limited functionality; we recommend migrating to the native Slurm commands and parameters.