
  • Access to HPC Resources

    Compute time on Rivanna/Afton is available through two service models.

    Service Unit (SU) Allocations. One SU corresponds to one core-hour of compute time. SUs are granted in blocks called allocations (e.g., a new allocation = 1M SUs).

    Dedicated Computing. This model allows researchers to lease hardware managed by Research Computing (RC) as an alternative to purchasing their own equipment. It provides dedicated access to HPC resources with no wait times.

  • Pricing

    Below is a schedule of prices for Research Computing resources.
    HPC Service Unit Allocations

    Type           SU Limits   Cost           SU Expiration
    Standard       None        Free           12 months
    Purchased      None        $0.01 per SU   Never
    Instructional  100,000     Free           2 weeks after last training session

    A service unit (SU) represents usage of a trackable hardware resource for a specified amount of time. The SU charge rate can vary based on the specific hardware used. Resources like GPUs and memory may incur additional SU charges. See About Allocations for more information.
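    As a quick worked example of the core-hour arithmetic (GPU and memory surcharges, where they apply, come on top of this):

      # One SU equals one core-hour, so a CPU-only job costs (cores requested) x (hours run).
      cores=10
      hours=24
      echo "$(( cores * hours )) SUs"    # prints: 240 SUs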
    HPC Dedicated Computing

    Hardware      Cores/Node   Memory/Node   GPU Memory   Yearly Rate
    Rivanna-375   40           375GB         -            $1,046

  • Using UVA’s High-Performance Computing Systems

    Afton is the University of Virginia’s newest High-Performance Computing system. The Afton supercomputer comprises 300 compute nodes, each with 96 compute cores based on the AMD EPYC 9454 architecture, for a total of 28,800 cores. The increase in core count is augmented by a significant increase in memory per node compared to Rivanna. Each Afton node provides a minimum of 750 gigabytes of memory, with some supporting up to 1.5 terabytes of RAM. The large amount of memory per node allows researchers to work efficiently with the ever-expanding datasets seen across diverse research disciplines. The Afton and Rivanna systems provide access to 55 nodes with NVIDIA general-purpose GPU accelerators (RTX2080, RTX3090, A6000, V100, A40, and A100), including an NVIDIA BasePOD.
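    As an illustration of how one of these GPU nodes could be requested through Slurm, here is a minimal sketch; the partition name and resource amounts are placeholders and should be checked against the current GPU documentation:

      #!/bin/bash
      #SBATCH --partition=gpu          # placeholder name for a GPU partition
      #SBATCH --gres=gpu:1             # request one GPU on the node
      #SBATCH --cpus-per-task=8        # CPU cores to accompany the GPU
      #SBATCH --mem=64G                # memory request, well within a node's capacity
      #SBATCH --time=02:00:00          # two hours of wall time

      nvidia-smi                       # report which GPU was assigned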
  • ACCORD: Jupyter Lab

    Jupyter Lab allows for interactive, notebook-based analysis of data. It is a good choice for pulling quick results or refining your code in numerous languages, including Python, R, Julia, bash, and others.
    Learn more about Jupyter Lab

  • ACCORD: RStudio

    RStudio is the standard IDE for research using the R programming language.
    Learn more about RStudio

  • ACCORD: Theia IDE

    Theia Python is a rich IDE that allows researchers to manage their files and data, write code with an intelligent editor, and execute code within a terminal session.
    Learn more about the Theia Python IDE

  • FastX Web Portal

    Overview
    FastX is a commercial solution that enables users to start an X11 desktop environment on a remote system. It is available on the UVA HPC frontends. Using it is equivalent to logging in at the console of the frontend.

    Using FastX for the Web
    We recommend that most users access FastX through its web interface. To connect, point a browser to:
    https://fastx.hpc.virginia.edu
    Off Campus?
    Connecting to the Rivanna and Afton HPC systems from off Grounds via Secure Shell (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN can be used if the UVA More Secure Network is not available.
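    Once connected to the VPN, an SSH session to a login node typically looks like the following sketch; the hostname and the computing ID mst3k are illustrative and should be verified against the current login documentation:

      # SSH to a UVA HPC login node (requires the VPN when off Grounds).
      # Replace mst3k with your own computing ID.
      ssh -Y mst3k@login.hpc.virginia.edu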

  • Open OnDemand

    Overview
    Open OnDemand is a graphical user interface that allows access to UVA HPC via a web browser. Within the Open OnDemand environment users have access to a file explorer; interactive applications like JupyterLab, RStudio Server & FastX Web; a command line interface; and a job composer and job monitor.

    Logging in to UVA HPC
    The HPC system is accessible through the Open OnDemand web client at https://ood.hpc.virginia.edu. Your login is your UVA computing ID and your password is your Netbadge password. Some services, such as FastX Web, require the Eservices password. If you do not know your Eservices password you must change it through ITS by changing your Netbadge password (see instructions).

  • Open OnDemand: File Explorer

    Open OnDemand provides an integrated file explorer to browse and manage small files. Rivanna and Afton have multiple locations to store your files with different limits and policies. Specifically, each user has a relatively small amount of permanent storage in his/her home directory and a large amount of temporary storage (/scratch) where large data sets can be staged for job processing. Researchers can also lease storage that is accessible on Rivanna. Contact Research Computing or visit the storage website for more information.
    The file explorer provides these basic functions:
    - Renaming files
    - Viewing text and small image files
    - Editing text files
    - Downloading & uploading small files

    To see the storage locations that you have access to from within Open OnDemand, click on the Files menu.
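    The same storage areas are also reachable from a terminal session; the paths below follow the common home and scratch layout described above and are a sketch to be checked against the storage documentation:

      # Permanent (small) home directory and temporary (large) scratch space.
      ls ~                     # home directory
      ls /scratch/$USER        # scratch area for staging large job data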

  • Open OnDemand: Job Composer

    Open OnDemand allows you to submit Slurm jobs to the cluster without using shell commands.
    The job composer simplifies the process of:
    - Creating a script
    - Submitting a job
    - Downloading results

    Submitting Jobs
    We will describe creating a job from a template provided by the system.
    Open the Job Composer tab from the Open OnDemand Dashboard.
    Go to the New Job tab and from the dropdown, select From Template. You can choose the default template or you can select from the list.
    Click on Create New Job. You will need to edit the file that pops up, so click the light blue Open Editor button at the bottom.
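    For orientation, the file the editor opens is an ordinary Slurm batch script. The sketch below is a minimal example with placeholder values (the partition, allocation, and program names are assumptions, not the contents of any particular template):

      #!/bin/bash
      #SBATCH --job-name=example        # name shown in the queue
      #SBATCH --partition=standard      # placeholder partition name
      #SBATCH --account=my_allocation   # placeholder SU allocation name
      #SBATCH --ntasks=1
      #SBATCH --cpus-per-task=4         # four cores
      #SBATCH --time=01:00:00           # one hour of wall time
      #SBATCH --mem=8G

      module load python                # load software provided on the cluster
      python my_script.py               # replace with your own program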

  • Slurm Job Manager

    An interactive SLURM quiz is available at /quiz/slurm/.
    Overview
    UVA HPC is a multi-user, managed environment. It is divided into login nodes (also called frontends), which are directly accessible by users, and compute nodes, which must be accessed through the resource manager.
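    To make the split concrete, work is handed from a login node to the compute nodes with standard Slurm commands; the script name below is a placeholder:

      sbatch job.slurm        # submit a batch script to the scheduler
      squeue -u $USER         # check your queued and running jobs
      scancel <jobid>         # cancel a job if needed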