The major software upgrade in January 2023 may affect some users. If you encounter any problems, review the change list below and the other pages on this site. If you do not find a solution, contact the administrator for advice.
- The operating system on all nodes was upgraded to SUSE Linux Enterprise High Performance Computing 15 SP4 (previously the head node ran 15 SP3 and the compute nodes ran 15 SP2).
- The Slurm workload manager was updated from version 20.02 to 22.05. The most notable changes include:
  - For interactive jobs, instead of `srun --pty ... /bin/bash` please use `salloc ...`, for example: `salloc --ntasks=8 --time=1:00:00`.
  - Jobs can be scheduled to run regularly with `scrontab`.
  - The new `--prefer` option lets a job state preferred node features (a soft version of `--constraint`); the job runs on matching nodes when possible, but is still allowed to run on any idle nodes if they are busy.
- Compute nodes were renamed and are now simply named `n01` to `n36`.
- New Intel oneAPI compilers are installed. They are used by default when the `intel` module is loaded. To use the older compilers, specify the version explicitly, for example: `module load intel/19.1.3.304 impi/2019.9.304`.
- The `x20core`, `mix`, and `long` queues were removed. Only the `normal` queue is available.
- The maximum job walltime limit now depends on the number of nodes used. For example, a job using a single node can run for 7 days, but a job using 36 nodes can only run for 12 hours. More details are given on the corresponding page.
- Legacy Torque commands (`qsub`, `qstat`, etc.) are no longer available.
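To illustrate the new Slurm workflow, the commands below sketch an interactive allocation, a recurring job via `scrontab`, and the `--prefer` option. The resource amounts are examples only, and `nightly.sh`, `job.sh`, and the `bigmem` feature are hypothetical names, not part of this cluster's configuration.

```shell
# Interactive job: request an allocation and get a shell in it
# (replaces the old `srun --pty ... /bin/bash`):
salloc --ntasks=8 --time=1:00:00

# Recurring jobs: edit your Slurm crontab. Entries combine crontab
# timing syntax with #SCRON option lines, for example (illustrative):
#   #SCRON --time=0:30:00
#   0 2 * * * $HOME/nightly.sh
scrontab -e

# Soft placement preference: `bigmem` is a hypothetical node feature;
# the job falls back to any idle node if no preferred node is free.
sbatch --prefer=bigmem job.sh
```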
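Putting the queue, walltime, and compiler changes together, a batch script might now look like the following sketch. The task count and the program name `my_app` are placeholders; the module versions are the ones quoted above.

```shell
#!/bin/bash
#SBATCH --partition=normal       # `normal` is the only remaining queue
#SBATCH --nodes=1                # a single-node job may run up to 7 days
#SBATCH --time=7-00:00:00
#SBATCH --ntasks-per-node=8      # illustrative task count

# Plain `module load intel` would give the new oneAPI compilers;
# the older compilers must be requested with explicit versions:
module load intel/19.1.3.304 impi/2019.9.304

srun ./my_app                    # my_app is a placeholder executable
```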