Minerva can be reached via ssh to one of two login nodes, which are known by DNS as minerva01.aei.mpg.de and minerva02.aei.mpg.de. Internally, their names are login01 and login02.
Once logged in, you can access the other login node, and the compute nodes (node[001-594]) by passwordless ssh - a special keypair without passphrase has been created and installed for you. Do not use this key outside of Minerva.
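For example (USER and node001 below are placeholders; use your own account and a node you actually have access to):

    ssh USER@minerva01.aei.mpg.de      # from your own machine, to a login node
    ssh node001                        # from login01/login02: passwordless, uses the installed keypair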
salloc -p devel …
    will submit a job to the selected partition

env | grep SLURM
    shows the environment variables Slurm sets for the allocation, for example:
[stefgru@login01 ~]$ env | grep SLURM
SLURM_NODELIST=node579
SLURM_JOB_NAME=bash
SLURM_NODE_ALIASES=(null)
SLURM_NNODES=1
SLURM_JOBID=39809
SLURM_TASKS_PER_NODE=16
SLURM_JOB_ID=39809
SLURM_SUBMIT_DIR=/home/stefgru
SLURM_JOB_NODELIST=node579
SLURM_CLUSTER_NAME=minerva
SLURM_JOB_CPUS_PER_NODE=16
SLURM_SUBMIT_HOST=login01.cluster
SLURM_JOB_PARTITION=devel
SLURM_JOB_NUM_NODES=1
srun
    will use the allocated node(s)

screen
    (to have access to the node(s) in parallel)

srun …

exit
    will close the session, and return the allocated node(s) to the pool

It is recommended to use IntelMPI (IMPI) with Slurm (that's what PIK people say). Slurm and IMPI will do the correct binding for you; with OpenMPI you may see strange effects.
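Putting these pieces together, a minimal interactive session might look as follows. This is only a sketch: the resource options (-N 1 -n 16), the module name (impi) and the program name (./my_mpi_program) are assumptions and not part of the Minerva setup described above.

    salloc -p devel -N 1 -n 16     # request an interactive allocation: 1 node, 16 tasks
    env | grep SLURM               # inspect what was granted (output as shown above)
    module load impi               # module name and the modules setup are assumptions; check what is available
    srun ./my_mpi_program          # Slurm + IMPI take care of process placement and binding
    exit                           # end the session; the node(s) return to the pool

As described above, screen can be started inside the allocation to run several srun commands on the node(s) in parallel.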
(to be extended)