Storage types
Storage is found in many parts of the OpenStack cloud environment. It is important to understand the distinction between ephemeral storage and persistent storage:
- Ephemeral storage – If you only deploy OpenStack Compute service (nova), by default your users do not have access to any form of persistent storage. The disks associated with VMs are ephemeral, meaning that from the user’s point of view they disappear when a virtual machine is terminated.
- Persistent storage – Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance.
OpenStack clouds explicitly support three types of persistent storage: Object Storage, Block Storage, and File-based storage.
Block storage
Block storage is implemented in OpenStack by the Block Storage service (cinder). Because these volumes are persistent, they can be detached from one instance and re-attached to another, and the data remains intact.
The Block Storage service supports multiple back ends in the form of drivers. Your choice of a storage back end must be supported by a block storage driver.
Most block storage drivers allow the instance to have direct access to the underlying storage hardware’s block device. This helps increase overall read/write I/O performance. However, support for utilizing files as volumes is also well established, with full support for NFS, GlusterFS and others.
These drivers work a little differently than a traditional block storage driver. On an NFS or GlusterFS file system, a single file is created and then mapped as a virtual volume into the instance. This mapping and translation is similar to how OpenStack utilizes QEMU’s file-based virtual machines stored in /var/lib/nova/instances.
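To make the attach/detach workflow concrete, here is a minimal sketch using the openstacksdk Python library. It assumes a cloud named mycloud is defined in clouds.yaml and that a server called my-instance already exists; the names and sizes are placeholders, not part of any real deployment.

```python
# Minimal sketch using openstacksdk (pip install openstacksdk). It assumes a
# cloud entry named "mycloud" in clouds.yaml and an existing server called
# "my-instance"; names and sizes are illustrative only.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a 10 GB persistent volume in the Block Storage service (cinder).
volume = conn.create_volume(size=10, name="data-vol", wait=True)

# Attach the volume to a running instance; inside the guest it appears as a
# block device (for example /dev/vdc) that can be partitioned and mounted.
server = conn.get_server("my-instance")
conn.attach_volume(server, volume, wait=True)

# Because the volume is persistent, it can later be detached and re-attached
# to a different instance with its data intact.
conn.detach_volume(server, volume, wait=True)
```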
Object Storage
Object storage is implemented in OpenStack by the Object Storage service (swift). Users access binary objects through a REST API. If your intended users need to archive or manage large datasets, you should provide them with Object Storage service. Additional benefits include:
- OpenStack can store your virtual machine (VM) images inside of an Object Storage system, as an alternative to storing the images on a file system.
- It integrates with OpenStack Identity and works with the OpenStack Dashboard.
- Better support for distributed deployments across multiple datacenters through support for asynchronous eventual consistency replication.
You should consider using the OpenStack Object Storage service if you eventually plan on distributing your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to control your object storage with the OpenStack Dashboard. For more information, see the Swift project page.
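As a rough illustration of the object workflow, the following openstacksdk sketch creates a container, uploads a binary object, and fetches it back. The cloud, container, object, and local file names are assumptions used only for the example.

```python
# Minimal object storage sketch using openstacksdk; the cloud, container,
# object, and local file names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Containers group objects, much like top-level folders.
conn.create_container("backups")

# Store a binary object; under the hood this is a PUT against the
# Object Storage (swift) REST API.
conn.create_object("backups", "db-dump.tar.gz", filename="db-dump.tar.gz")

# Retrieve it later from anywhere that can reach the API endpoint.
data = conn.get_object("backups", "db-dump.tar.gz")
```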
File-based storage
In a multi-tenant OpenStack cloud environment, the Shared File Systems service (manila) provides a set of services for managing shared file systems. File-based storage supports multiple back ends in the form of drivers, and can be configured to provision shares from one or more back ends. Share servers are virtual machines that export file shares using different file system protocols such as NFS, CIFS, GlusterFS, or HDFS.
The Shared File Systems service provides persistent storage that can be mounted on any number of client machines. File-based storage can also be detached from one instance and attached to another without data loss; throughout this process the data remains safe unless the Shared File Systems service itself is changed or removed.
Users interact with the Shared File Systems service by mounting remote file systems on their instances and using them to store and exchange files. The Shared File Systems service provides shares, which are remote, mountable file systems. A share can be mounted on and accessed from several hosts by several users at a time. With shares, you can also do the following (a short code sketch follows this list):
- Create a share, specifying its size, shared file system protocol, and visibility level.
- Create a share on either a share server or standalone, depending on the selected back-end mode, with or without using a shared network.
- Specify access rules and security services for existing shares.
- Combine several shares into groups to preserve data consistency within the group, enabling safe group operations.
- Create a snapshot of a selected share or a share group for storing the existing shares consistently or creating new shares from that snapshot in a consistent way.
- Create a share from a snapshot.
- Set rate limits and quotas for specific shares and snapshots.
- View usage of share resources.
- Remove shares.
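The sketch below illustrates the first couple of these operations with the openstacksdk Python library. It assumes an SDK release that exposes the shared_file_system (manila) proxy; the method and parameter names (create_share, share_protocol, create_access_rule) follow recent SDK documentation but should be verified against your installed version, and all names, sizes, and addresses are placeholders.

```python
# Minimal sketch using openstacksdk; assumes an SDK release that exposes the
# shared_file_system (manila) proxy and a cloud entry named "mycloud".
# All names, sizes, and the access CIDR are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a 1 GB share using the NFS protocol.
share = conn.shared_file_system.create_share(
    name="project-share", size=1, share_protocol="NFS")

# Grant read/write access to clients in a subnet; instances can then mount
# the share's export location like any other NFS file system.
# (Method and parameter names follow recent SDK docs; verify for your version.)
conn.shared_file_system.create_access_rule(
    share, access_level="rw", access_type="ip", access_to="10.0.0.0/24")
```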
Differences between storage types
| | Ephemeral storage | Block storage | Object storage | File-based storage |
|---|---|---|---|---|
| Application | Run operating system and scratch space | Add additional persistent storage to a virtual machine (VM) | Store data, including VM images | Add additional persistent storage to a virtual machine |
| Accessed through… | A file system | A block device that can be partitioned, formatted, and mounted (such as /dev/vdc) | The REST API | A Shared File Systems service share (either manila-managed or an external one registered in manila) that can be partitioned, formatted, and mounted (such as /dev/vdc) |
| Accessible from… | Within a VM | Within a VM | Anywhere | Within a VM |
| Managed by… | OpenStack Compute (nova) | OpenStack Block Storage (cinder) | OpenStack Object Storage (swift) | OpenStack Shared File System Storage (manila) |
| Persists until… | VM is terminated | Deleted by user | Deleted by user | Deleted by user |
| Sizing determined by… | Administrator configuration of size settings, known as flavors | User specification in initial request | Amount of available physical storage | User specification in initial request; requests for extension; available user-level quotas; limitations applied by administrator |
| Encryption configuration | Parameter in nova.conf | Admin establishing encrypted volume type, then user selecting encrypted volume | Not yet available | Shared File Systems service does not apply any additional encryption above what the share’s back-end storage provides |
| Example of typical usage… | 10 GB first disk, 30 GB second disk | 1 TB disk | 10s of TBs of dataset storage | Depends completely on the size of back-end storage specified when the share was created; with thin provisioning it can be partial space reservation |
File-level storage for live migration
With file-level storage, users access stored data using the operating system’s file system interface. Most users who have used a network storage solution before have encountered this form of networked storage. The most common file system protocol for Unix is NFS, and for Windows, CIFS (previously, SMB).
OpenStack clouds do not present file-level storage to end users. However, it is important to consider file-level storage for storing instances under /var/lib/nova/instances when designing your cloud, since you must have a shared file system if you want to support live migration. For more information on OpenStack migrations, check out GEMINI: A workload migration engine.
Commodity storage technologies
There are various commodity storage back end technologies available. Depending on your cloud user’s needs, you can implement one or many of these technologies in different combinations.
Ceph
Ceph is a scalable storage solution that replicates data across commodity storage nodes.
Ceph utilises an object storage mechanism for data storage and exposes the data to the end user via different types of storage interfaces. It supports interfaces for:
- Object storage
- Block storage
- File-system interfaces
Ceph provides support for the same Object Storage API as swift and can be used as a back end for the Block Storage service (cinder) as well as back-end storage for glance images.
Ceph supports thin provisioning implemented using copy-on-write. This can be useful when booting from volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a seamless swap in for the default OpenStack swift implementation.
Ceph’s advantages include:
- The administrator has more fine-grained control over data distribution and replication strategies.
- Consolidation of object storage and block storage.
- Fast provisioning of boot-from-volume instances using thin provisioning.
- Support for the distributed file-system interface CephFS.
You should consider Ceph if you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume.
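To illustrate the fast boot-from-volume pattern described above, the following openstacksdk sketch creates a bootable volume from an image and boots an instance from it. The image, flavor, and network names are placeholders, and the speed benefit assumes a copy-on-write capable back end such as Ceph RBD.

```python
# Sketch of booting an instance from a volume with openstacksdk. The image,
# flavor, and network names are placeholders; with a copy-on-write capable
# back end such as Ceph RBD, the volume is cloned from the image almost
# instantly (thin provisioning).
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a bootable volume from an existing image.
boot_vol = conn.create_volume(size=20, image="ubuntu-22.04",
                              bootable=True, wait=True)

# Boot an instance directly from that volume instead of an ephemeral disk.
server = conn.create_server(
    name="bfv-instance",
    flavor="m1.small",
    network="private",
    boot_volume=boot_vol.id,
    wait=True,
)
```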
LVM
The Logical Volume Manager (LVM) is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back-end implements block storage as LVM logical partitions.
On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes.
Note: LVM does not provide any replication. Typically, administrators configure RAID on nodes that use LVM as block storage to protect against failures of individual hard drives. However, RAID does not protect against a failure of the entire host.
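Operationally, a back end such as LVM is usually exposed to users through a Block Storage volume type. The following sketch shows how an administrator might do this with openstacksdk; the type name and the volume_backend_name value are assumptions here and must match the back-end section configured in cinder.conf.

```python
# Administrative sketch: expose the LVM driver as a selectable volume type
# with openstacksdk. The type name and volume_backend_name value are
# assumptions and must match the back-end section configured in cinder.conf.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a volume type that maps to the LVM back end via its back-end name.
lvm_type = conn.block_storage.create_type(
    name="lvm",
    extra_specs={"volume_backend_name": "lvm-1"},
)

# Users can now request that back end explicitly when creating a volume.
vol = conn.create_volume(size=10, volume_type="lvm", wait=True)
```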
ZFS
The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike on a Linux system, where there is a separation of volume manager (LVM) and file system (such as ext3, ext4, XFS, or Btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking.
The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures.
Gluster
Gluster is a distributed shared file system. As of Gluster version 3.3, you can use Gluster to consolidate your object storage and file storage into one unified file and object storage solution, which is called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the back-end storage.
The main reason to use GFO rather than swift is if you also want to support a distributed file system, either to support shared storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO.
Sheepdog
Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshot, cloning, rollback and thin provisioning.
It is essentially an object storage system that manages disks and aggregates the space and performance of disks linearly at hyperscale on commodity hardware in a smart way. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Because it runs in userspace, Sheepdog does not require a specific kernel version, and it works with xattr-supported file systems.
Choosing storage back ends
Users will indicate different needs for their cloud architecture. Some may need fast access to many objects that do not change often, or want to set a time-to-live (TTL) value on a file. Others may access only storage that is mounted with the file system itself, but want it to be replicated instantly when starting a new instance. For other systems, ephemeral storage is the preferred choice. When you select storage back ends, consider the following questions from the user's perspective:
- Do I need block storage?
- Do I need object storage?
- Do I need to support live migration?
- Should my persistent storage drives be contained in my compute nodes, or should I use external storage?
- What is the platter count I can achieve? Do more spindles result in better I/O despite network access?
- Which one results in the best cost-performance scenario I’m aiming for?
- How do I manage the storage operationally?
- How redundant and distributed is the storage? What happens if a storage node fails? To what extent can it mitigate my data-loss disaster scenarios?
A wide variety of operator-specific requirements dictates the nature of the storage back end. Examples of such requirements are as follows:
- Public, private or a hybrid cloud, and associated SLA requirements
- The need for encryption-at-rest, for data on storage nodes
- Whether live migration will be offered
We recommend that data be encrypted both in transit and at-rest. If you plan to use live migration, a shared storage configuration is highly recommended.
To deploy your storage by using only commodity hardware, you can use a number of open-source packages, as shown in Persistent file-based storage support.
| | Object Storage | Block Storage | File-based Storage |
|---|---|---|---|
| Swift | ✓ | | |
| LVM | | ✓ | |
| Ceph | ✓ | ✓ | Experimental |
| Gluster | ✓ | ✓ | ✓ |
| NFS | | ✓ | ✓ |
| ZFS | | ✓ | |
| Sheepdog | ✓ | ✓ | |
This list of open source file-level shared storage solutions is not exhaustive. Your organization may already have deployed a file-level shared storage solution that you can use.
Note: Storage driver support. In addition to the open source technologies, there are a number of proprietary solutions that are officially supported by OpenStack Block Storage. You can find a matrix of the functionality provided by all of the supported Block Storage drivers on the CinderSupportMatrix wiki.
Software Defined Storage and Disaster Recovery with Aptira provide peace of mind for your data storage. Our storage solutions easily integrate with a wide range of enterprise storage platforms. We can also support live migrations and secure your data as part of a Managed Cloud strategy.
You can also learn more about storage with Aptira training. Our 2 day online Ceph Training covers the main concepts and architecture of Ceph storage, its installation and daily operation as well as using Ceph storage in OpenStack environments.
References
- Swift: https://www.openstack.org/software/releases/ocata/components/swift
- CephFS: http://ceph.com/docs/master/cephfs/
- Support Matrix: https://wiki.openstack.org/wiki/CinderSupportMatrix
Note: Swift now supports at-rest encryption, though only with a basic feature set: you specify a key in the configuration. Work to integrate key management with Barbican (via Castellan) is ongoing.