Rivanna is the University of Virginia’s High-Performance Computing (HPC) system. As a centralized resource, it offers hundreds of pre-installed software packages for computational research across many disciplines. The Rivanna supercomputer currently comprises 575 nodes with over 20,476 cores and 8PB of storage. All UVA faculty, staff, and postdoctoral associates are eligible to use Rivanna; students are eligible when participating in faculty research.
The sections below contain important information for new and existing Rivanna users. Please read each carefully.
New users are invited to attend one of our free orientation sessions ("Introduction to Rivanna") held throughout the year.
A high performance computing cluster is typically made up of at least four service layers:
- Login nodes - Where you log in, interact with data and code, and submit jobs.
- Compute nodes - Where production jobs are run. On Rivanna these nodes are heterogeneous; some have more memory, some have GPU devices, and so forth. Each partition is homogeneous, so you can select specialty hardware through your partition request, sometimes combined with a generic resource request (gres).
- Storage - Where files are stored, accessible by all nodes in the cluster.
- Resource Manager - A software system that accepts job requests, schedules the jobs on a node or set of nodes, then manages their execution.
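As a sketch of how these layers fit together, the following minimal batch script shows a job submitted from a login node to the resource manager, which schedules it onto a compute node. It assumes the cluster uses Slurm (as the partition and gres terminology above suggests); the script name and account are hypothetical placeholders.

```shell
#!/bin/bash
# hello.slurm - minimal single-core batch job (illustrative values)
#SBATCH --ntasks=1              # one task on one core
#SBATCH --time=00:10:00         # ten-minute wall-time limit
#SBATCH --partition=standard    # run on the standard partition
#SBATCH --account=myallocation  # hypothetical allocation name

echo "Running on $(hostname)"
```

Submit the script with `sbatch hello.slurm` from a login node and monitor it with `squeue -u $USER`; output is written to a `slurm-<jobid>.out` file in the submission directory.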
|Partition|Max time / job|Max nodes / job|Max cores / job|Max cores / node|Max memory / core|Max memory / node / job|SU Charge Rate|
|---|---|---|---|---|---|---|---|
| |3 days|4|10|10|32GB|375GB|3.00 *|
| |3 days|8|512 cores / 2048 threads|512|3GB (per physical core)|192GB|1.00|
* GPU charge rate = number of cores + 2 * number of GPU devices.
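As a worked example of that formula, a GPU job using 4 cores and 1 GPU device is charged at a rate of 4 + 2 × 1 = 6 SUs; the core and GPU counts below are illustrative values.

```shell
# GPU charge rate = number of cores + 2 * number of GPU devices
cores=4
gpus=1
echo $(( cores + 2 * gpus ))   # prints 6
```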
|Cores/Node|Memory/Node|Specialty Hardware|GPU memory/Device|GPU devices/Node|# of Nodes|
|---|---|---|---|---|---|
|40|383GB|GPU: RTX 2080 Ti|11GB|10|2|
Research computing resources at the University of Virginia are for use by faculty, staff, and students of the University and their collaborators in academic research projects. Personal use is not permitted. Users must comply with all University policies for access and security to University resources. The HPC system has additional usage policies to ensure that this shared environment is managed fairly to all users. UVA’s Research Computing (RC) group reserves the right to enact policy changes at any time without prior notice.
Exceeding the limits on the login nodes (frontend) will result in the user’s process(es) being killed. Repeated violations will result in a warning; users who ignore warnings risk losing access privileges.
Each job in the standard partition is restricted to a single node. Users may submit multiple jobs or job arrays, but the maximum aggregate number of CPU cores allowed for a single user’s running jobs is 1000.
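A request that fits those limits might use Slurm directives like the following (assuming Slurm is the resource manager; the executable name is a placeholder):

```shell
#!/bin/bash
# standard_job.slurm - single-node job on the standard partition (illustrative)
#SBATCH --partition=standard
#SBATCH --nodes=1            # standard jobs may not span nodes
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8    # 8 cores on one node, well under the per-user cap
#SBATCH --time=24:00:00      # one day, within the 3-day limit

./my_program                 # placeholder for your executable
```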
Users must request a minimum of two nodes and four CPU cores (and no more than 900 CPU cores) when submitting a job to the parallel partition.
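A multi-node MPI job satisfying those minimums might look like the following sketch (assuming Slurm; the module name and executable are assumptions):

```shell
#!/bin/bash
# parallel_job.slurm - multi-node MPI job on the parallel partition (illustrative)
#SBATCH --partition=parallel
#SBATCH --nodes=2             # minimum of two nodes
#SBATCH --ntasks-per-node=20  # 40 tasks total, within the 900-core cap
#SBATCH --time=12:00:00

module load openmpi           # module name is an assumption
srun ./my_mpi_program         # placeholder for your MPI executable
```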
The gpu partition is dedicated to jobs that can utilize a general purpose graphics processing unit (GPGPU). Any job submitted to the gpu partition must request at least one GPU device through the gres option; jobs that do not utilize any GPUs are not allowed in this partition. Users may submit multiple jobs or job arrays, but the maximum aggregate number of GPU devices allowed for a single user’s running jobs is 16.
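A conforming GPU job request, requesting one device through the gres option, could be sketched as follows (assuming Slurm; resource values are illustrative):

```shell
#!/bin/bash
# gpu_job.slurm - GPU job requesting one device via gres (illustrative)
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1          # at least one GPU device is required
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=04:00:00

nvidia-smi                    # confirm the allocated GPU is visible to the job
```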
Rivanna’s scratch file system has a limit of 10TB and 350,000 files per user. This policy is in place to guarantee the stability and performance of the scratch file system. Scratch is intended as a temporary work directory. It is not backed up and files that have not been accessed for more than 90 days are marked for deletion. Users are encouraged to back up their important data. Home directories and leased storage are not subject to this policy.
Excessive consumption of licenses for commercial software, either in time or number, if determined by system and/or RC staff to be interfering with other users’ fair use of the software, will subject the violator’s processes or jobs to termination without warning. Staff will attempt to issue a warning before terminating processes or jobs but inadequate response from the violator will not be grounds for permitting the processes/jobs to continue.
Any violation of the University’s security policies, or any behavior that is considered criminal in nature or a legal threat to the University, will result in the immediate termination of access privileges without warning.