
Swinburne Ceph Deployment

09/07/2019

Swinburne University of Technology is an Australian public university based in Melbourne, Victoria. They needed to set up a massive (think petabytes) storage system in which their researchers could store valuable research data.


The Challenge

This storage system needed to be dynamic, scalable, reliable, and easy to manage and resize, and it had to support commodity hardware. Large storage solutions are challenging to design and set up, partly because of the level of detail that must be addressed in the design and configuration process.

For example, configuration data such as the addresses and device names of the individual servers must be gathered; for a distributed storage system such as Ceph, those details can run into the hundreds. Collecting this information and entering it manually into the design, and later into a configuration management tool, is prohibitively slow and error-prone.


The Aptira Solution

Ceph was chosen as the storage solution because it meets all the functional requirements, as demonstrated in an evaluation phase conducted before Aptira became involved. Ceph is an open source software storage platform that implements object storage on a single distributed computer cluster. Ceph supports object, block and file-level storage. For file-level storage, Ceph provides a POSIX-compliant network file system that aims for high performance, large data storage and maximum compatibility with legacy applications. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.

Several companies offer their own commercial products based on Ceph. After some evaluation, SUSE was chosen because of its competitive price and good service. Aptira partnered with SUSE in the deployment of SUSE Enterprise Storage (a SUSE commercial product based on Ceph), from the OS level (SLES) up to Ceph.

Two Ceph clusters were deployed, one in each of two data centres. Each cluster consists of 3 monitor (MON) nodes, 2 gateway nodes and 12 object storage (OSD) nodes. Each OSD node has 160 TB of storage, so each cluster provides 1.92 PB of raw space. All Ceph nodes are managed by SUSE Manager, which acts as a local package repository and installs the base OS on all Ceph nodes. Once the nodes were installed, Puppet applied site-specific customisation such as network configuration, and then DeepSea was used to deploy Ceph.
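The sizing above can be sanity-checked with a few lines of arithmetic; every figure below comes straight from the numbers quoted in this case study:

```python
# Cluster sizing from the case study: 12 OSD nodes per cluster,
# 160 TB per OSD node, two clusters (one per data centre).
OSD_NODES_PER_CLUSTER = 12
TB_PER_OSD_NODE = 160
CLUSTERS = 2

raw_tb_per_cluster = OSD_NODES_PER_CLUSTER * TB_PER_OSD_NODE
raw_pb_per_cluster = raw_tb_per_cluster / 1000  # decimal TB -> PB

print(raw_pb_per_cluster)               # 1.92 PB raw per cluster
print(CLUSTERS * raw_pb_per_cluster)    # 3.84 PB raw across both sites
```

Note this is raw capacity; usable capacity depends on the replication or erasure-coding scheme configured on each pool.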

DeepSea is an open source tool specifically designed to make deploying Ceph easier. Its goal is to save administrators time and let them perform complex operations on a Ceph cluster with confidence. The traditional deployment method, ceph-deploy, is functional and has low overhead, but it requires administrators to gather configuration data manually and make numerous configuration decisions outside the deployment script. DeepSea automates much of this work and organises the Ceph deployment process into six structured stages:

  • Stage 0 – Provisioning
  • Stage 1 – Discovery
  • Stage 2 – Configure
  • Stage 3 – Deploy
  • Stage 4 – Services
  • Stage 5 – Removal

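On SUSE Enterprise Storage, these stages are driven from the Salt master as DeepSea orchestration states. The sketch below lists the standard `salt-run` invocations for an initial deployment (stage 5 handles removal, so it is not part of a fresh install); treat the exact stage names as an assumption to verify against your DeepSea version:

```shell
#!/usr/bin/env bash
# Sketch only: each command is printed rather than executed, since the real
# invocations must run on the cluster's Salt master.
DEPLOY_STAGES=(0 1 2 3 4)  # stage 5 (removal) is excluded from initial deployment

for stage in "${DEPLOY_STAGES[@]}"; do
    # On the Salt master this line would be run directly, one stage at a time,
    # checking that each stage completes cleanly before starting the next.
    echo "salt-run state.orch ceph.stage.${stage}"
done
```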
The Result

The deployment was completed in less than two months, ahead of the schedule Swinburne had planned. Aptira then ran a series of performance tests to validate that the system was in a healthy state and met its performance benchmarks.
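The source does not name the specific tests, but a typical validation pass on a freshly deployed Ceph cluster uses the standard Ceph CLI tools. The commands below are an illustrative sketch, printed rather than executed here (the pool name `bench` is hypothetical):

```shell
#!/usr/bin/env bash
# Illustrative only: the exact benchmarks Aptira ran are not described in the
# source. These are the standard Ceph checks such a validation pass typically
# includes; "bench" is a hypothetical scratch pool.
VALIDATION=(
  "ceph status"                                 # overall health: expect HEALTH_OK
  "ceph osd df tree"                            # per-OSD utilisation and balance
  "rados bench -p bench 60 write --no-cleanup"  # 60-second sequential write test
  "rados bench -p bench 60 seq"                 # sequential read of the same objects
  "rados -p bench cleanup"                      # remove the benchmark objects
)
printf '%s\n' "${VALIDATION[@]}"
```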

An “as-built” document was written and handed to Swinburne at the conclusion of the project. After deployment, the system went straight into production: Swinburne uses it as the backend storage for their OpenStack cloud, their Commvault backup system (via the RADOS gateway) and Windows file storage (via the iSCSI gateway).


