
Kubernetes Storage on Compute Canada: A Case for Longhorn

4 min read
Pieter Botha
LINCS Technical Manager


As a Cyberinfrastructure project funded by the Canada Foundation for Innovation, LINCS is building its infrastructure on the resources of Compute Canada, which operates a cloud service run on OpenStack.

Cinder is the default volume storage provider for virtual machines provisioned on Compute Canada's OpenStack platform. It is the last layer of a complex multi-tenant system that abstracts Ceph storage and hypervisors on top of hard disk drives. It is, however, the closest you can get to Compute Canada's network-attached storage (NAS) system.

One way to run Kubernetes and attach storage is to create virtual machines in OpenStack, mount volumes on the host systems, install and configure Kubernetes on each host, and expose the mounted volumes to Kubernetes through the local-path storage provisioner. Although workable, this solution isn't ideal: managing the storage becomes a host-level task, and performance suffers if a container is deployed on a different host than its storage volume.
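For illustration, here is roughly what a claim against the local-path provisioner looks like. This is a minimal sketch; the storage class name `local-path` is the provisioner's default and may differ depending on how it was installed.

```yaml
# Minimal PVC bound to the local-path provisioner's storage class.
# "local-path" is the default class name the provisioner installs;
# adjust it if your installation names the class differently.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # local-path volumes are node-local, single-writer
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
```

The resulting volume is just a directory on the node's file system, which is why the storage ends up tied to a specific host.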

Rancher is a software solution that provides Kubernetes as a service and can also be configured with OpenStack and Cinder drivers to enable a very integrated experience through the Rancher web UI. You can create a new cluster and manage all of its resources right from the Rancher UI. Rancher will create the virtual machines on OpenStack, install Docker, install Rancher Kubernetes Engine (RKE), and configure all basic services automatically in the background. And if you configure a Cinder storage provisioner, every new deployment to your cluster that has a Persistent Volume Claim (PVC) will automatically get a new Cinder volume mounted directly to the container running the workload. Magic!
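As a rough sketch of that setup, the dynamic provisioning side boils down to a StorageClass that points at the Cinder provisioner and PVCs that reference it. The provisioner name depends on whether the in-tree OpenStack cloud provider or the Cinder CSI driver is configured; the names below are illustrative.

```yaml
# StorageClass backed by Cinder (illustrative names).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
provisioner: kubernetes.io/cinder   # or cinder.csi.openstack.org with the CSI driver
reclaimPolicy: Delete
---
# Any workload's PVC that names this class gets a fresh Cinder
# volume created and attached to the node running the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workload-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder
  resources:
    requests:
      storage: 20Gi
```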

This solution is much more maintainable. Volumes can be managed right from Rancher, and only the storage space requested by the PVCs is reserved on the Cinder backend. Since Cinder volumes are mounted directly onto containers, there are no extra dependencies and no additional layers that might introduce latency and other performance problems. Sounds perfect? Turns out, not so much.

Challenges Encountered

Compute Canada does not guarantee that the Cinder API will be available whenever the hosts are running. During upgrades and maintenance, the Cinder API can be taken offline while the VMs are still running, which prevents Kubernetes from connecting workloads to their storage and creates all sorts of havoc. Another problem was that we could not back up Cinder volumes in our Compute Canada environment. Compute Canada only supports file-based backups, which would require logging into the containers of every individual deployment to access the file system and back up the files. To top it off, directly mounted Cinder volumes on Compute Canada performed horrendously slowly with our typical workloads.

So, what could we do? As a Compute Canada client, we don't have access to the raw Ceph storage backend, since Compute Canada can't expose that storage in a multi-tenant environment. Thus, provisioning Ceph to Kubernetes with, for example, Rook was not an option. We had to do something on top of Cinder volumes.

Enter Longhorn

Longhorn is an open-source, cloud-native container storage solution that was originally developed by Rancher and is now an official CNCF sandbox project. Longhorn uses volumes mounted on the nodes and provisions that storage to Kubernetes as persistent volumes.
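From a workload's point of view, consuming Longhorn storage is just another PVC. As a minimal sketch, assuming a standard Longhorn installation that ships its default `longhorn` storage class:

```yaml
# PVC backed by Longhorn's default storage class.
# Longhorn carves the volume out of the disks (here, Cinder volumes)
# mounted on the nodes and replicates it across them.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lincs-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
```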

Wait a minute! This just adds another layer compared to the local-path provisioner that I mentioned at the beginning of this article. How, you might ask, would it solve any of the Cinder issues?

  • All Cinder volumes are mounted directly on the VM hosts. Even during maintenance, when the Cinder API goes offline, the volumes stay mounted and accessible as long as we can access the VMs.
  • Longhorn provides an easy-to-use UI and the ability to detach entire volumes, create snapshots, and back them up to S3 storage with the click of a button (or automated if you like; see the sketch after this list).
  • Due to the quality-of-service limits at the various layers of the Compute Canada NAS setup, creating stripe sets of Cinder volumes actually increases storage bandwidth. That is what we did: we mounted four identical volumes on each VM that served storage to the cluster.
  • Longhorn implements some queueing and caching that reduces latency and increases bandwidth compared to direct Cinder mounts. On average, performance was doubled for the LINCS infrastructure with our typical workloads.
  • Volume expansion works consistently with the Longhorn provisioner. No need to over-provision storage for a volume.
  • Longhorn only measures actual data usage and allows you to have, for example, five 1TB volumes on a 2TB drive as long as the combined usage does not go over 2TB.
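The automated backups mentioned above can be declared as Kubernetes resources. This is a sketch only, assuming a Longhorn version that ships the RecurringJob CRD (1.2 or later) and an S3 backup target already configured in Longhorn's settings; the names and schedule are illustrative.

```yaml
# Nightly backup job for all volumes in the "default" Longhorn group.
# Requires the backup target (e.g. an S3 bucket) to be set in
# Longhorn's settings beforehand.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-backup
  namespace: longhorn-system
spec:
  cron: "0 2 * * *"    # every night at 02:00
  task: backup         # "snapshot" would keep local snapshots instead
  groups:
    - default
  retain: 7            # keep the last seven backups
  concurrency: 2
```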

Longhorn truly saved the day for the LINCS architecture and at the very least made my life as a system administrator much, much easier.