The University of Melbourne is the premier higher education institution of Victoria, with a strong track record as a technology leader: it is the first and primary node in the national NeCTAR Research Cloud (RC) project, running OpenStack.
Southern Cross Computer Systems (SCCS) is an IT solution delivery organisation with thirty years of experience in providing end-to-end infrastructure and business solutions to large organisations throughout Australia.
NetApp is one of the world's largest computer storage and data management companies.
The Challenge
The University of Melbourne, acting as a node of the Research Data Storage Infrastructure (RDSI) project, issued a public Request for Proposals (RFP) tender to procure new storage infrastructure that would form a large component of the overall VicNode deployment of RDSI.
SCCS and NetApp approached Aptira for assistance with the archival component of their proposed solution, which was to consist of NetApp hardware delivered by SCCS and configured by Aptira to work with OpenStack.
The scope of the archival component of the solution defined in the RFP was quite large, with requirements including multi-site configuration, efficient data replication, cost-effectiveness, state-of-the-art reporting, flexible support for dataset metadata, and more than a petabyte of usable storage space.
The Aptira Solution
Aptira, together with SCCS and NetApp, proposed an in-depth solution that would plug NetApp FAS into the existing NeCTAR RC OpenStack Cinder installation, alongside several NetApp E-Series storage appliances paired with commodity compute servers to provide an OpenStack Swift object storage layer for the archival component of the solution.
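On the block storage side, wiring NetApp FAS into Cinder amounts to enabling the NetApp unified driver as a storage backend. Below is a minimal sketch of such a cinder.conf stanza, assuming clustered Data ONTAP over NFS; the hostname, credentials and backend name are placeholders, not the production values:

```ini
# cinder.conf -- illustrative NetApp FAS backend (values are placeholders)
[DEFAULT]
enabled_backends = netapp_fas

[netapp_fas]
volume_backend_name = netapp_fas
# NetApp unified driver, clustered Data ONTAP over NFS
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = filer.example.edu.au
netapp_login = admin
netapp_password = secret
nfs_shares_config = /etc/cinder/nfs_shares.conf
```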
OpenStack Swift is an extremely robust, flexible and operator-friendly object storage/Software Defined Storage solution.
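From a researcher's point of view, that archival layer is consumed through Swift's simple HTTP object API. The following sketch uses python-swiftclient; the Keystone endpoint, credentials and names are illustrative, and the X-Object-Meta-* headers shown are the standard Swift mechanism behind the flexible dataset metadata requirement:

```python
# Minimal python-swiftclient sketch: store and fetch an archival object
# with per-dataset metadata. Endpoint, credentials and names are illustrative.
from swiftclient import client

conn = client.Connection(
    authurl='https://keystone.example.edu.au:5000/v2.0/',  # placeholder
    user='archive_user',
    key='secret',
    tenant_name='vicnode',
    auth_version='2.0',
)

conn.put_container('datasets')

# Custom X-Object-Meta-* headers attach arbitrary dataset metadata in Swift.
with open('survey-2013.tar', 'rb') as f:
    conn.put_object(
        'datasets', 'survey-2013.tar',
        contents=f,
        content_type='application/x-tar',
        headers={'X-Object-Meta-Project': 'RDSI',
                 'X-Object-Meta-Owner': 'unimelb'},
    )

# Retrieval returns the response headers (metadata) and the object body.
headers, body = conn.get_object('datasets', 'survey-2013.tar')
print(headers['x-object-meta-project'])  # -> 'RDSI'
```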
Alongside a series of hardware provisioning scripts customised for this unique solution, Aptira made several modifications to the existing NeCTAR RC puppet-swift module, ranging from small fixes to larger feature additions: multi-daemon configuration for dedicated replication network support, support for OpenStack Ceilometer, support for a converged node deployment style, and more.
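To give a feel for the multi-daemon replication setup: Swift can run a second object-server instance bound to a dedicated replication network and marked replication_server = true, keeping replication traffic off the client-facing network. The sketch below assumes two object-server configs per storage node; the addresses, ports and file paths are illustrative, not the production layout:

```ini
# /etc/swift/object-server/1.conf -- client-facing object server
# (addresses and ports are illustrative)
[DEFAULT]
bind_ip = 10.0.1.10        # frontend storage network
bind_port = 6000

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object
replication_server = false

# /etc/swift/object-server/2.conf -- replication-only daemon on the
# dedicated replication network
[DEFAULT]
bind_ip = 10.0.2.10        # dedicated replication network
bind_port = 6010

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object
replication_server = true
```

The replication endpoints themselves are recorded in the ring, for example via swift-ring-builder's --replication-ip and --replication-port options when adding devices.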
The Result
The end solution is extremely fast, and flexible enough to handle the University of Melbourne's reporting requirements, which would not have been possible without OpenStack integration. Burn-in testing of the solution, fronted by the University's existing F5 load balancers, showed it operating at speeds equivalent to (or better than) the University's other OpenStack Swift clusters, while providing considerable hardware density and energy efficiency gains in the data centres where it is deployed.