HPC Maintenance: May 26, 2026

All systems are expected to return to service by 6 a.m. Wednesday, May 27.

IMPORTANT MAINTENANCE NOTES

As part of this maintenance, we will:

  • Upgrade Open OnDemand and the Slurm scheduler

  • Introduce significant updates to compiler and container toolchains

  • Add a new RTX 6000 Pro GPU node to the gpu-mig partition as part of our ongoing efforts to improve GPU resource availability and reduce wait times

What to Expect During Maintenance

Access:

You will not be able to log in to or use the HPC systems, including Open OnDemand and FastX, from 6 a.m. Tuesday, May 26, to 6 a.m. Wednesday, May 27.

Jobs:

You can submit jobs until maintenance begins. If the scheduler determines that a job cannot finish before maintenance starts, it will be held in the queue and will automatically start once the system is back online. No jobs will run during maintenance.
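For example, if your job can finish before the 6 a.m. May 26 cutoff, requesting a correspondingly short walltime lets the scheduler start it right away instead of holding it. A minimal sketch (the script name is a placeholder):

  # Request 4 hours of walltime so the job fits before maintenance begins
  sbatch --time=04:00:00 my_job.slurm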

Storage:

Research Standard and Research Project storage will remain accessible via Globus, Server Message Block (SMB), and Network File System (NFS) mounts. However, access to Research Project storage may be temporarily disrupted during the maintenance window. In general, take note of the following:

  • You can continue to access files from other systems (e.g., your computer) if mounts are set up.

  • You cannot access files through the HPC interface during maintenance.

  • The Data Transfer Nodes (DTNs) stay online for ongoing data transfers through Globus.

Improving GPU Resource Availability

To improve access to GPU resources, RC previously introduced the gpu-mig partition using NVIDIA MIG technology to divide A100 80GB GPUs into smaller instances. Since the existing 10 GB GPU slices are insufficient for many workloads, RC has added a new RTX 6000 Pro GPU node that provides 32 GPU instances, each with 24 GB of GPU memory. Following maintenance, the RTX node will replace the current A100 node in the gpu-mig partition, and the A100 node will be returned to the main gpu partition.

What this means for you: you can run smaller GPU jobs with shorter queue delays, and without using any Service Units (SUs).

See how to submit a job to the gpu-mig partition.
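As a rough sketch, a batch script for the gpu-mig partition could look like the one below; the --gres syntax for a MIG slice may differ on our cluster, so follow the linked instructions for the exact string.

  #!/bin/bash
  #SBATCH --partition=gpu-mig   # MIG partition described above
  #SBATCH --gres=gpu:1          # one GPU instance (MIG slice syntax may differ)
  #SBATCH --time=01:00:00
  #SBATCH --job-name=mig-test

  # Show which GPU instance was assigned to the job
  nvidia-smi -L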

System

The NVIDIA driver will be upgraded to 595.45.04 to support CUDA 13.1. See the NVHPC section below.

Open OnDemand will be upgraded to 4.1.4.

  • Code Server will be upgraded to 4.115.0.
  • JupyterLab will be upgraded to 4.5.6-py3.13.
  • RStudio Server will be upgraded to 2026.04.0 (R 4.6.0).

Modules

There are major changes to the compiler and container toolchains (GCC, Intel, NVHPC, Apptainer). If you use any of these, please read the corresponding sections carefully.

GCC

GCC 14.2.0 will become the new default. Most modules under 11.4.0 will be migrated to 14.2.0. Some of the most important upgrades include:

  • bcftools, htslib, samtools/1.23
  • python/3.13.11
  • R/4.6.0
  • rust/1.92.0

If a module is only found under GCC 11.4.0, it should be considered deprecated: it will be removed along with the entire GCC 11.4.0 toolchain in a future maintenance, but not during this one. Deprecated modules are listed at the end of this page. If you still need any of them, please let us know as soon as possible.

Some specifics:

  • A few modules will be moved under openmpi (e.g. hisat2/2.2.1) or apptainer (e.g. grass/8.4.2). Please use module spider <NAME> to check the load command (see the example after this list).
  • [R] Apart from the removal of 4.3.1, existing R versions will remain under GCC 11.4.0, so that users won’t have to reinstall their R libraries. Starting from 4.6.0, R will be built under 14.2.0. (We have prepared a script to help you transition to 4.6. See the section below for more information.)
  • [Berkeley DB] berkeley_db will be renamed to db.
  • [Boost] Starting from version 1.88.0, the MPI-enabled module will be named boost.mpi. The non-MPI module will be named boost.
  • [CUDA] Modules that depend on CUDA 12.x will be consolidated to 12.8.0. cuda/12.2.2 will be removed. If you built your own CUDA code against these versions, check whether it runs correctly under 12.8.0; you may not need to rebuild.
  • [SRA Toolkit] sratoolkit will be renamed to sra-toolkit.
  • [wigToBigWig/Kent Tools] wigtobigwig will be absorbed into kent-tools/487. Note the change in the version format of the latter.
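For instance, to find the new load command for a module that moved under openmpi, run module spider as described above; the chain it reports is authoritative, and the lines below are only an illustrative sketch:

  module spider hisat2/2.2.1
  # Then load the chain that module spider reports, for example:
  module load gcc/14.2.0 openmpi hisat2/2.2.1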

GCC 12.4.0 and all modules under it will be removed.

Intel

Intel 2025.3 will become the new default. Most modules under 2023.1 and 2025.0 will be migrated to 2025.3. Some of the most important upgrades include:

  • cp2k/2026.1
  • elpa/2025.06.002
  • libxc/7.0.0
  • siesta/5.4.2

A module that is not migrated may exist under GCC (e.g. spglib) or NVHPC (e.g. gromacs). If not, it should be considered deprecated: it will be removed along with the Intel 2023.1 and 2025.0 toolchains in a future maintenance, but not during this one. Deprecated modules are listed at the end of this page. If you still need any of them, please let us know as soon as possible.
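To see what is available under the new default Intel toolchain after maintenance, loading the compiler and listing the module hierarchy is a quick check; a minimal sketch:

  module load intel/2025.3
  module avail            # lists modules built under the loaded toolchain
  module spider gromacs   # shows where a non-migrated module now lives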

NVHPC

Starting with 26.1, nvhpc will be a complete toolchain (compiler + MPI + math libraries). Instead of building our own OpenMPI with the NVIDIA compilers, we will switch to the bundled HPC-X (NVIDIA’s modified OpenMPI). Simply run module load nvhpc/26.1; do not load nvompi or openmpi. In Slurm scripts, replace srun with mpirun.
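A minimal Slurm script sketch under these assumptions (node and task counts and the binary name are placeholders):

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=4
  #SBATCH --time=01:00:00

  module load nvhpc/26.1   # complete toolchain: compiler + HPC-X MPI + math libraries
  # Use mpirun (from HPC-X) rather than srun, as noted above
  mpirun ./my_mpi_app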

Users who build MPI code with the NVIDIA compilers should test the new toolchain as soon as possible. We recommend against cross-compiling (e.g. building on the frontend), as it can often lead to “illegal instruction” errors at runtime. We recommend building on an Ampere GPU node (A40, A6000, A100).
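For example, you could request an interactive session on an A100 node and build there; the gres string and resource amounts below are placeholders, so adjust them to your allocation:

  salloc -p gpu --gres=gpu:a100:1 -c 8 -t 2:00:00
  module load nvhpc/26.1
  mpicc -o my_mpi_app my_mpi_app.c   # MPI wrapper around the NVIDIA compilers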

Users who only need the compilers but not NVIDIA’s MPI or math libraries should load nvidia-compilers/26.1.

During this maintenance, only 24.5 will be removed. 25.x are deprecated and will be removed in the future.

Apptainer

Apptainer will be upgraded to 1.4.5. Existing containers do not need to be rebuilt.

Transitioning to a new R version

If you want to use the newer R 4.6.0, you will need to update your R packages. Instead of reinstalling them manually, you can run the updateRlib script, which is located under /share/resources/HPCtools.

To use the script, provide two arguments: the current R version and the newer version. For example, to rebuild the packages you currently use with R 4.3.1 for R 4.6.0, type:

/share/resources/HPCtools/updateRlib 4.3.1 4.6.0

The script will prompt you:

Packages from
~/R/goolf/4.3 
will be built in 
~/R/goolf/4.6 
Is this what you want to do? <Y or N>

Respond with Y to re-install your packages and make them compatible with the newer R version.

Removed modules

The modules to be removed during this maintenance are listed below.

Module | Removed version(s) | Replace with
amber | 24-CUDA-12.2.2 | 26-CUDA-12.8.0
apptainer | 1.3.4 | 1.4.5
blender | 3.6.17 | 5.0.1
busco | 5.8.2 | 6.0.0
cellassign | 0.99.2 | -
cellranger | 9.0.1 | 10.0.0
cellpose | 3.0.10 | 4.0.5+
chemps2 | 1.8.12 | -
clara-parabricks | 4.2.0 | 4.6.0
cmake | 4.0.0, 4.1.2 | 4.2.2
code-server | 4.99.1 | 4.115.0
cuda | 12.2.2 | 12.4.1+
cudnn | 8.9.4.25 | 9+
cumulus_feature_barcoding | 0.10.0 | -
danpos | 2.2.2 | -
gcc | 12.4.0 | 14.2.0 (default), 11.4.0 (legacy)
gdb | 13.1-py3.11 | 16.3
go | 1.23.6 | 1.26.2
gromacs | 2023.2, 2025.1 | 2025.3 (cpu), 2026.0 (gpu)
grass | 8.2.1, 8.4.1 | 8.4.2
gsea | 4.3.3 | 4.4.0
itk | 5.3.0 | -
jupyterlab | 4.4.6-py3.12 | 4.5.6-py3.13
libtorch | 2.1.1, 2.4.1-precxx11 | 2.7.1
matlab | R2017a, R2023a, R2023b, R2024a, R2024b | R2025a+
metamorpheus | 0.0.320 | -
nanopolish | 0.13.2 | -
nccl | 2.18.3, 2.21.5 | 2.22.3+
nextflow | 25.04.6 | 25.10.4
nibabies | 22.1.3 | -
nvhpc | 24.5 | 25.3+
openmm | 7.5.0 | -
peer | 1.3 | -
R | 4.3.1 | 4.4.1+
rapidsai | 25.06 | 26.02
rmats-turbo | 4.1.1 | -
skopeo | 1.13.1 | -
snakemake | 9.8.1 | 9.19.0
subversion | 1.14.0 | -
thermorawfileparser | 1.3.4 | -
tensorrt | 10.0.1 | -
tmux | 3.4, 3.5a | 3.6a
ucc-cuda | 1.2.0-CUDA-12.2.2 | 1.3.0-CUDA-12.8.0
ucx-cuda | 1.15.0 | 1.18.0+
unrar | 6.0.2 | 7.21
vg | 1.22.0 | -

Deprecated modules

Deprecated GCC and Intel modules are listed alphabetically below. They will not be removed during this maintenance.

  • gcc/11.4.0
    • Non-MPI

      abseil bart-mri bedops canu circos eigensoft
      g2clib g2lib gd gemma grackle
      lapack libibmad libibumad libmatheval libxml++
      mm-common mrtrix3tissue nlopt
      pasapipeline protobuf protobuf-python qwt
      seqoutbias shapeit4 shapelib trinity
      viennarna xxdiff
      
    • MPI

      attrdict3 cloudcompare finestructure gildas
      mafft regtools rsem sundials wxpython
      
  • intel/2023.1, 2025.0
    • Non-MPI
      superlu_mt 
      
    • MPI
      abinit cesm
      

If you have any questions about the maintenance, please contact our user services team.