Container-based architecture, also known as "microservices," is an approach to designing and running applications as a distributed set of components or layers. Such applications are typically run within containers, made popular in the last few years by Docker.
Containers are portable, efficient, reusable, and contain code and any dependencies in a single package.
Containerized services typically run a single process, rather than an entire stack within the same environment.
This allows developers to replace, scale, or troubleshoot individual pieces of an application independently, rather than working on the entire application at once.
General Availability (GA) of Kubernetes - Research Computing now manages microservice orchestration
with Kubernetes, the open-source orchestration system originally developed by Google. New deployments are launched directly within Kubernetes.
» Read about Kubernetes and user deployments.
Microservices at UVA
Research Computing runs microservices in a clustered orchestration environment that makes the deployment and
management of many containers easy and scalable. This cluster has >1000 cores and ~1TB of memory allocated to
running containerized services. It also has over 300TB of cluster storage and can attach to
Project and Value storage.
UVA's microservices platform is hosted in the standard security zone. It is suitable for processing public or internal use data. Sensitive or highly sensitive data are not permitted on this platform.
1. Microservice architecture is a design approach, or a way of building things. Microservices can be considered the opposite of "monolithic" designs.
A few guiding design principles:
- Separate components and services
- Availability and resilience
- Replaceable elements
- Easily distributable
- Reusable components
- Decentralized elements
- Easy deployment
Martin Fowler has given a talk explaining the idea.
2. The easiest and most common way to run microservices is inside of containers.
- We teach workshops on containers and how to use them. Browse the course overview for Building Containers for Rivanna at your own pace.
- Docker provides an excellent Getting Started tutorial.
- Users may inject environment variables and encrypted secrets into containers at runtime. This means sensitive information does not need to be written into your container image.
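For example, a containerized service can read its injected settings when it starts up instead of hard-coding them. A minimal Python sketch (the variable names `DB_HOST` and `DB_PASSWORD` are hypothetical, not part of any UVA convention):

```python
import os

def runtime_config():
    """Collect settings injected into the container at runtime.

    Values come from environment variables (e.g. `docker run -e DB_HOST=...`
    or an orchestrator's secret mechanism); the names here are hypothetical.
    """
    return {
        # Non-sensitive setting: a safe default is acceptable.
        "db_host": os.environ.get("DB_HOST", "localhost"),
        # Secret: no default, so it is None unless injected at runtime.
        "db_password": os.environ.get("DB_PASSWORD"),
    }

cfg = runtime_config()
print("DB host:", cfg["db_host"])
print("password injected:", cfg["db_password"] is not None)
```

The key point is that the image itself contains no credentials; the same image can be promoted between environments with different injected values.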
Uses for Research
Microservices are typically used in computational research in one of two ways:
- Standalone microservices or small stacks - such as interactive or data-driven web applications and APIs, small databases (<100GB), or scheduled task containers. Some examples:
  - A simple web container to serve Project files to the research community or as part of a publication.
  - Reference APIs that handle requests based either on static reference data or databases.
  - Shiny Server, which presents users with interactive plots to engage with your datasets.
  - A scheduled job to retrieve remote datasets, perform initial ETL processing, and stage them for analysis.
- Microservices in support of HPC jobs - some workflows in HPC jobs require supplemental services in order to run, such as relational databases, key-value stores, or reference APIs. Some examples:
  - Cromwell/WDL pipelines that rely on a MySQL database to track job status and state, so that pipelines can resume if a portion fails.
  - Key-value stores in Redis that track an index of values or a running count updated as part of a job.
  - A scheduled job to refresh a library of reference data from an external source, such as reference genomes or public datasets.
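A scheduled staging job like the ones above can be sketched in a few lines of Python (the file names and the cleaning step are hypothetical; a real scheduled container would fetch the raw data from a remote source first):

```python
import csv
import io
import pathlib
import tempfile

def stage_dataset(raw_csv: str, staging_dir: str) -> pathlib.Path:
    """Minimal ETL sketch: parse a raw CSV, keep only complete rows,
    and write the cleaned copy to a staging directory for analysis."""
    rows = [r for r in csv.DictReader(io.StringIO(raw_csv)) if all(r.values())]
    out = pathlib.Path(staging_dir)
    out.mkdir(parents=True, exist_ok=True)
    target = out / "staged.csv"
    with target.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return target

# Hypothetical raw extract: row "B" is incomplete and gets dropped.
raw = "sample,value\nA,1\nB,\nC,3\n"
staged = stage_dataset(raw, tempfile.mkdtemp())
print("staged file:", staged)
```

In practice the staging directory would be a mounted Project or cluster storage path so that downstream HPC jobs can read the result.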
Browse a list of recent UVA projects employing microservices.
| Service | Description |
| --- | --- |
| NGINX Web Server | A fast web server that can run static HTML (demo), Flask or Django apps (demo), or RESTful APIs (demo), or expose Project storage (demo). |
| Apache Web Server | An extremely popular web server that can run your static HTML, Flask or Django apps, RESTful APIs, or expose files stored in Project storage. |
| Shiny Server | Runs R-based web applications and offers a dynamic, data-driven user interface. See a demo or try using LOLAweb. |
| MySQL | A stable, easy-to-use relational database. Run MySQL in support of your HPC projects in Rivanna or web containers. |
| MongoDB | A popular NoSQL database. Use Mongo in support of your Rivanna jobs or web containers. Try Mongo. |
| Redis | An extremely fast, durable key-value database. Use Redis in support of Rivanna jobs or other processes you run. Try Redis. |
| Cron jobs | Schedule or automate tasks or data staging using the language of your choice (bash, Python, R, C, Ruby). |
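For instance, a scheduled-task container is often driven by a crontab entry such as the following (the script path, schedule, and log location are hypothetical):

```crontab
# Hypothetical entry: fetch and stage remote data every night at 02:00
0 2 * * * /opt/scripts/stage_data.sh >> /var/log/stage_data.log 2>&1
```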
Service Eligibility & Limitations
To be eligible to run your microservice on our infrastructure, you must meet the following requirements:
- Microservices and custom containers must be for research purposes only. We do not run production systems outside the scope
of academic research support.
- Your container(s) must pass basic security checks. Containers may not contain passwords, SSH keys, API keys, or
other sensitive information. There are secure methods for passing sensitive information into containers.
- If bringing your own custom container, it must be ready to go! We cannot create custom containers for you
unless container development is part of a funded project.
Microservices are not a good fit for every use case. Scenarios that cannot run successfully on this platform include:
- Large (over 100GB) database collections.
- Services (apart from web-based services over HTTP/HTTPS) that need to be accessed from outside the HPC network.
- Services that require licensing, such as Microsoft SQL Server, MATLAB, etc.
- Services that require a GPU to run.
Container services hosted by UVA Research Computing fall under this pricing structure:
| Containers | Price |
| --- | --- |
| 1 - 5 | $5 / month |
| 6 - 15 | $10 / month |
| > 15 | $48 / month |
No charges will be incurred for stopped containers or any cluster storage.
Want to run your container within an HPC environment? It can be done using Singularity!
Singularity is a container application targeted to multi-user, high-performance computing systems. It interoperates well with Slurm and with the Lmod modules system. Singularity can be used to create and run its own containers, or it can import Docker containers.
Learn more about Singularity.
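As an illustrative session (the module name and image tag are examples and may not match the cluster's configuration), importing and running a Docker image under Singularity looks like this:

```
# Load Singularity on the cluster (module name is an example)
module load singularity

# Import a Docker Hub image into a Singularity image file
singularity pull docker://ubuntu:18.04

# Run a command inside the resulting container
singularity exec ubuntu_18.04.sif echo "hello from the container"
```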
Have a containerized application ready for launch? Or want a consultation to discuss your microservice implementation?