/tag/gpu

  • Rivanna Queues

    Several queues (or “partitions”) are available to users for different types of jobs. One queue is restricted to single-node (serial or threaded) jobs; another is for multinode parallel programs; and others provide access to specialty hardware such as large-memory nodes or nodes offering GPUs.

        Partition   Max time / job   Max nodes / job   Max cores / job   Max cores / node   Max memory / core   Max memory / node / job   SU Charge Rate
        standard    7 days           1                 40                40                 9GB                 375GB                     1.00
        parallel    3 days           45                900               40                 9GB                 375GB                     1.
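    Because this tag collects GPU-related pages, a minimal sketch of requesting a GPU node through SLURM may be useful. The partition name gpu and the --gres syntax below are assumptions based on common SLURM setups; the excerpt above shows only the standard and parallel partitions.

        #!/bin/bash
        # Sketch of a SLURM request for a GPU node (partition name and GRES
        # syntax are assumptions; check the full queue documentation).
        #SBATCH --partition=gpu    # GPU partition (assumed name)
        #SBATCH --gres=gpu:1       # request one GPU
        #SBATCH --ntasks=1
        #SBATCH --time=12:00:00    # stay within the partition's time limit
        #SBATCH --mem=32G

        module purge
        nvidia-smi                 # confirm which GPU the job received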
  • FastX Web Portal

    Overview: FastX is a commercial solution that enables users to start an X11 desktop environment on a remote system. It is available on the Rivanna frontends. Using it is equivalent to logging in at the console of the frontend.

    Using FastX for the Web: We recommend that most users access FastX through its web interface. To connect, point a browser to: https://rivanna-desktop.hpc.virginia.edu

    Off Campus? Connecting to Rivanna from off Grounds via Secure Shell (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN should only be used if the UVA More Secure Network is not available.
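    For reference, a typical SSH login to a Rivanna frontend might look like the sketch below. The hostname and the -Y flag for X11 forwarding are assumptions rather than details given in this excerpt, and mst3k stands in for your UVA computing ID.

        # Sketch of an SSH login to a Rivanna frontend (hostname assumed).
        # From off Grounds, connect to the VPN first.
        ssh -Y mst3k@rivanna.hpc.virginia.edu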
  • Open OnDemand

    Overview: Open OnDemand is a graphical user interface that allows access to Rivanna via a web browser. Within the Open OnDemand environment, users have access to a file explorer; interactive applications like JupyterLab, RStudio Server & FastX Web; a command line interface; and a job composer and job monitor.

    Logging in to Rivanna: Rivanna is accessible through the Open OnDemand web client at https://rivanna-portal.hpc.virginia.edu. Your login is your UVA computing ID and your password is your Netbadge password. Some services, such as FastX Web, require the Eservices password. If you do not know your Eservices password, you must reset it through ITS by changing your Netbadge password (see instructions).
  • Open OnDemand: File Explorer

    Open OnDemand provides an integrated file explorer to browse and manage small files. Rivanna has multiple locations to store your files with different limits and policies. Specifically, each user has a relatively small amount of permanent storage in his/her home directory and a large amount of temporary storage (/scratch) where large data sets can be staged for job processing. Researchers can also lease storage that is accessible on Rivanna. Contact Research Computing or visit the storage website for more information.

    The file explorer provides these basic functions:
      • Renaming of files
      • Viewing of text and small image files
      • Editing text files
      • Downloading & uploading small files

    To see the storage locations that you have access to from within Open OnDemand, click on the Files menu.
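    As a rough illustration of the two storage areas described above, the commands below show how one might inspect them from a terminal. The scratch path is an assumption, since this excerpt does not spell it out; consult the storage website for the authoritative locations and limits.

        # Sketch: inspect the home and scratch areas described above.
        # /scratch/$USER is an assumed path; Rivanna's actual layout may differ.
        ls -lh ~                # permanent home directory (small quota)
        du -sh /scratch/$USER   # temporary scratch space for staging job data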
  • Open OnDemand: Job Composer

    Open OnDemand allows you to submit SLURM jobs to the cluster without using shell commands. The job composer simplifies the process of:
      • Creating a script
      • Submitting a job
      • Downloading results

    Submitting Jobs: We will describe creating a job from a template provided by the system. Open the Job Composer tab from the Open OnDemand Dashboard. Go to the New Job tab and from the dropdown, select From Template. You can choose the default template or you can select from the list. Click on Create New Job. You will need to edit the file that pops up, so click the light blue Open Editor button at the bottom.
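    For orientation, a minimal SLURM batch script of the kind a template might produce is sketched below. The partition, module, and program names are placeholders rather than the contents of any actual Open OnDemand template.

        #!/bin/bash
        # Minimal SLURM script sketch (placeholder values; a real template
        # supplies its own directives).
        #SBATCH --job-name=example
        #SBATCH --partition=standard   # single-node partition from the queue table above
        #SBATCH --ntasks=1
        #SBATCH --time=01:00:00
        #SBATCH --mem=4G

        module purge
        module load gcc/9.2.0          # toolchain version shown elsewhere on this page
        ./my_program                   # placeholder executable

    Once saved, such a script would normally be submitted with sbatch; the job composer performs that step through the web interface.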
  • Compiling GPU Applications on Rivanna

    Available NVIDIA CUDA Compilers

        Module   Version     Module Load Command
        cuda     11.0.228    module load gcc/9.2.0 cuda/11.0.228
        cuda     10.1.168    module load cuda/10.1.168
        cuda     10.2.89     module load cuda/10.2.89
        cuda     11.0.228    module load cuda/11.0.228
        cuda     9.2.148.1   module load cuda/9.2.148.1

        Module   Version     Module Load Command
        nvhpc    20.9        module load nvhpc/20.9

    GPU architecture (-arch): According to the CUDA documentation, “in the CUDA naming scheme, GPUs are named sm_xy, where x denotes the GPU generation number, and y the version in that generation.” The documentation contains details about the architecture and the corresponding xy value. On Rivanna, the GPU nodes are K80, P100, V100, and RTX 2080 Ti, which are Kepler, Pascal, Volta, and Turing, respectively.
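    To make the -arch flag concrete, the lines below sketch compile commands for the GPU generations named above. The sm_xy values follow NVIDIA's compute-capability numbering (Kepler 3.7, Pascal 6.0, Volta 7.0, Turing 7.5) and should be verified against the CUDA documentation; saxpy.cu is a placeholder source file.

        # Sketch: selecting a GPU architecture at compile time with nvcc.
        # saxpy.cu is a placeholder; check sm_xy values against the CUDA docs.
        module load gcc/9.2.0 cuda/11.0.228
        nvcc -arch=sm_37 -o saxpy_k80  saxpy.cu   # K80 (Kepler)
        nvcc -arch=sm_60 -o saxpy_p100 saxpy.cu   # P100 (Pascal)
        nvcc -arch=sm_70 -o saxpy_v100 saxpy.cu   # V100 (Volta)
        nvcc -arch=sm_75 -o saxpy_rtx  saxpy.cu   # RTX 2080 Ti (Turing)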
  • Compiling GPU Applications on Rivanna

    Compiling for a GPU: Using a GPU can accelerate a code, but it requires special programming and compiling. Several options are available for GPU-enabled programs.

    OpenACC: OpenACC is a standard for directive-based programming of accelerators such as GPUs. The available NVIDIA CUDA compiler modules and the GPU architecture (-arch) guidance are listed in the preceding entry.
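    Since this entry mentions OpenACC, a compile sketch using the nvhpc module listed above may help. The -acc, -gpu, and -Minfo flags are standard NVIDIA HPC SDK compiler options, but the target compute capability and the source file name are placeholders, not values taken from this page.

        # Sketch: building an OpenACC program with the NVIDIA HPC SDK module above.
        # jacobi.c is a placeholder source file; cc70 targets the V100 (Volta) nodes.
        module load nvhpc/20.9
        nvc -acc -gpu=cc70 -Minfo=accel -o jacobi jacobi.c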