Aptira Blog

A while ago I had a little rant about Ceilometer and the dangers of overreaching. A similar discussion has popped up on the OpenStack Operators list, during which it became clear that some operators had taken the bits they wanted from Ceilometer and gotten rid of the rest. Others had simply stopped using it.

A little while ago, a discussion about the deprecation of the EC2 API in Nova cropped up on the OpenStack Operators list. It covered a variety of topics, notably how the overheads experienced when developing an OpenStack project stifle the creativity and motivation of developers.

Yesterday, I read the new project requirements for OpenStack, where one of the criteria is:

Where it makes sense, the project cooperates with existing projects rather than gratuitously competing or reinventing the wheel

This sounds great, except when you consider a partially successful case like Ceilometer.

When it rolled around to today, and I thought “what the hell am I going to write now?” these thoughts glommed together.

What would happen if OpenStack minimised its scope? What if it delivered a framework to fulfil its mission, but stopped short of providing the entire solution?

  • The focus of core development would change to providing all the basic hygiene items users require: smooth upgrades, stable branches that mean stable software, and that sort of thing.
  • Almost all feature-based innovation moves outside of the core (into Stackforge? elsewhere? Does it actually matter?)
    • Software vendors are free to pursue their own agendas, and it becomes obvious whether they are contributing to OpenStack or building their own features. Vendors use this information to market themselves as they please.
    • Developers have frictionless mechanisms to contribute as they see fit.
    • Requirements for changes in the core are delivered no more slowly than they are today.
  • Core components that get their architecture “right” create much less maintenance work, freeing resources that can be applied to creating new user value. Similarly, consistent architectures across components can deliver efficiency gains that can be spent on creating user value.
  • The number and size of official OpenStack project teams will probably shrink. Possibly by a great deal.

Now, I’m not an expert on the architecture of any of the components, so I can’t really comment on whether this is feasible for all projects, or perhaps has already happened in many of them. My point is more about whether this approach should be an explicit goal for OpenStack. My view is that it must.

OpenStack is already showing signs of stress from the difficulty of providing an integrated release across a large number of projects. The response from the TC has the same intent as what I have described above: improve focus on the core, lessen the friction for developers. However, the methods for achieving this are markedly different:

  • The barrier to entry for a project is lower
  • The concept of an integrated release is removed over time
  • Metadata is created to describe each project so that OpenStack users can determine the qualities of a given project

It seems that the reason for this approach is to ensure community engagement: to allow a larger number of developers to work on projects that are sanctioned as part of OpenStack. This is laudable, but I’m not sure that the cost of maintaining an ever-growing list of projects, even without an integrated release, is manageable. The quality of projects must still be assessed regularly in order for users to make accurate decisions, and this assessment must be made over a larger and larger set of projects.

The other benefit is that it allows easy reporting to the board for trademark-related concerns, which is entirely secondary to anything the users might experience, and so probably quite irrelevant to increasing the quality of OpenStack.

It also does nothing to address the issue of troubled projects. Where a project has tried and only partially succeeded in delivering on its mission, we are faced with a problem. It seems we must convince the TC delegates that a competing project is worthwhile.

Reducing the scope of an OpenStack-sanctioned project to a high-quality framework for innovation strongly limits the size of the QA task, thus increasing users’ confidence in the attributes of a particular version of the code. Whilst fewer developers would be working under an OpenStack-sanctioned umbrella, the overall developer population should be working with a better OpenStack, and those not in an OpenStack project can build whatever they want, however they want.

Yes, the number of ATCs will shrink, with a corresponding impact on the size of the electorate for the TC. I don’t think this is a problem. The core devs have a tightly focussed remit on a relatively small code base and can concentrate on that without the overhead of getting caught up in a vendor’s product delivery cycle.

This isn’t a proposal for the democratisation of the core of OpenStack. Quite the opposite: it’s a proposal to focus the control of the OpenStack Foundation to the very core of the ecosystem and to allow, or actually foster, innovation and the creation of value to occur unimpeded. Make of it what you will.

The OpenStack Foundation has 13,490 eligible voters and our 25% quorum line to allow us to make changes to our outdated bylaws is 3,373. We’ve had a little over 900 voters complete their ballots in the first day. Good progress, but we still have a long way to go.

I know many of my LinkedIn connections are Foundation members and are eligible to vote. Please vote now to help our project bylaws move forward and while you are voting, please consider casting your 8 votes for Kavit Munshi. http://www.openstack.org/community/members/profile/139 #Kavit2015

The OpenStack Foundation has 13,490 eligible voters and our 25% quorum line to allow us to make changes to our outdated bylaws is 3,373. Today's number: 2,000 votes have been cast, with about 1,400 to go.

I know many of my LinkedIn connections are Foundation members and are eligible to vote. Please vote now to help our project bylaws move forward and while you are voting, please consider casting your 8 votes for Kavit Munshi. http://www.openstack.org/community/members/profile/139 #Kavit2015

The OpenStack Individual Director election is on right now from the 12th to the 16th of January. This is who I’d like to see seated at the table at the first 2015 meeting on January 29:


Tim Bell - If the board ever decides to create permanent Board seats, Tim is the reason for it. Say no more.

Randy Bias - Randy gets cloud, gets business and gets everything, and he isn’t afraid to share his wisdom. His observation that OpenStack has a mission but lacks vision, which led to the forming of the Product Workgroup, is astute, and I predict this Workgroup will become a critical focal point for the community in 2015.

Kavit Munshi - The Indian OpenStack community is MASSIVE and in need of representation. The 3000 people in the meetup group is just a fraction of the actual number working on OpenStack country-wide. This makes India's voice crucial to bringing much needed diversity.

Rob Hirschfeld - Rob works hard for OpenStack, especially so for DefCore this past year or so. Rob’s a details guy, and the Board needs him as their details guy.

Monty Taylor - I love the way I can emphatically agree and disagree with Monty, all at once. Monty holds the flag for the dyed in the (organic of course) wool FOSS community, and we’d all dearly miss his theatrical interactions around the Board table.

Alex Freedland - Hard not to have a guy that anchors the most successful OpenStack business ever on the Board. Alex is a consummate businessman, and that is always needed on any Board.

Haiying Wang - As well as bringing a significant voice in the telco industry, Haiying adds to the diversity of the board, representing the vastness of Asia alongside Kavit.

????????? - I’ll leave the 8th spot open because there are several good candidates that can fill it! GO Vish, Jesse, Egle, Peter, John, Kyle, Richard, Andrew, Kyle, Mark and more!

My journey with the OpenStack project has been a very rewarding one. I have had the good fortune of meeting some wonderful people and being part of a very dynamic and vibrant community. I am the founder and the lead organiser of the OpenStack Indian User Group. I am also an OpenStack Ambassador, one of 10 people around the world nominated into the post by the OpenStack Foundation. I take these roles very seriously and do my best to live up to responsibilities that come with them.

The community has been kind enough to nominate me as a candidate for this year's Board of Directors election. This is a great honour for me and a very humbling experience. I would welcome the opportunity to represent my community and my region on the Board of Directors. I feel that the Asian region generally, and India specifically, are very under-represented. More than 30% of the use cases for OpenStack and almost 50% of the users come from Asia. India has the third largest and the fastest growing user community in the OpenStack ecosystem. I think it is high time the region got represented on the OpenStack Board of Directors. This would increase the diversity of the Board and make it more representative of the community, thereby bringing balance to its perspective.

I would also welcome the opportunity to represent Asia and India. If elected, I would encourage the growth of OpenStack in our region. I have helped and mentored the growth of several user groups. The events and meetups I have organised in the past have led to many professionals and students getting the exposure they need to accelerate their careers. I would like to bring the limelight to the burgeoning startup scene in India and help create an environment of creativity and innovation. I have a vision of India as a centre of innovation, developing and commercialising intellectual property, rather than an offshoring destination. I feel that the voice of the users, developers and operators from outside of the North American region needs to be heard on the Board. I would welcome the opportunity to be that voice, and through my candidacy I aim to increase the voice of OpenStack customers, developers and operators worldwide, particularly in Asia.

Finally, I would also like to remind the voters that this is an election of directors selected from amongst the individual members. Most of the companies that support the OpenStack Foundation have already voted in, or are in the process of voting in, Board members along company lines to represent their interests. I would urge the voters not to vote for their employers but to vote in true democratic fashion: for whoever they think would best represent the interests of the individual members and their region.

Kavit Munshi

This was originally posted to the product working group mailing list, which consists of people in influencing positions within their employers' OpenStack organisations. The group originally formed around the idea of Hidden Influencers, which made me think of us lying dormant somewhere, like Cthulhu lies dreaming at R'lyeh. Although with fewer tentacles. Anyway, on with the post, which I've modified slightly to add appropriate context:
There's been a bit of traffic on the operators list about a proposal for an Operations project that would aim to curate a collection of assets to support the operator community. To generate a set of "Frequently Required Solutions", if you will.
Whilst there is some disagreement on the thread about how to proceed, it is clear there's definitely a need amongst operators for quite a bit of supporting code. As a group of product managers it behooves us to consider larger questions like this and drive/enable/influence the community toward answering the questions that the community considers a priority.
In just saying that, some more big questions are raised:
  • What are the goals of this group?
  • What strategy are we going to employ to achieve those goals?
  • How do we gain acceptance from the wider community for this effort?
  • How do we find an effective mechanism for product management to occur?
and no doubt there are more.
Answering these questions is not as "easy" as starting a project, but it's necessary if this group wants to contribute in any collective or concerted fashion.
If we're going to have a mid-cycle meetup to bootstrap a product management group or forum, let's set some targets for what we want to achieve there, and let's set them well in advance. For me, the priority is establishing the basic existential motivations for this group:
  • Why are we here? (Purpose, Goals)
  • What are we going to do about it? (Strategy)
Anything more than that is a bonus, and if we can agree on these prior to the meetup, so much the better. However, without these basics, any work in the detail doesn't validate the existence of this group as anything more than just a mailing list.
For those who aren't on, or aren't aware of, the product group mailing list: if you're in a position of influence or control over your employer's OpenStack direction, think about joining up. We're going to be looking at some pretty fundamental questions about how OpenStack will move forward as it progresses further along the hype cycle.

I believe the basic goals of Ceilometer are laudable. It attempts to gather data about the state of an OpenStack instance, which is a useful goal. It then attempts to solve a bunch of problems related to that data, which is also useful.

However, many of the problems it tries to solve work at cross purposes to each other, and so choices that are made to accommodate solutions to these problems prevent the project from solving any of them satisfactorily. Solutions to some require high resolution data, some do not. Solutions to other problems require large amounts of historical data, others do not.

Ceilometer attempts to address these problems in a single database (OK a few databases, but one for each class of data: meters, events, alarms). This can never meet the requirements of every solution. Instead Ceilometer should focus on gathering the information, getting it to consuming systems in a fast, reliable and easy to consume manner, and then it should stop. There's plenty of work to do to reach this smaller set of goals, and there's an enormous amount of work to be done creating consumer systems that deliver real customer value.

Let other projects start to solve the problems we need to solve.

- Let's build a usage correlation system that takes the data stream and emits simple rateable usage records. Let's build another system that rates the records according to rules defined in a product catalogue.

- Let's build a policy engine that can pull the levers on various APIs based on a stream of information coming from Ceilometer.

- Let's build a repository for long-term storage of data, with accompanying analysis tools.

and so on. But let's make them separate projects, not attempt to solve all the problems from a single DB.

The projects can focus on their own specific needs, and because they are loosely coupled to metric collection and completely decoupled from one another, they can ignore the needs of others that might be destructive to their own value. They can adopt an attitude of excellence rather than one of compromise.

More generally, let's examine our approach to adding features to OpenStack and see whether continually adding features to existing projects is necessarily the right way to go.

Welcome to part two of the series of Aptira blogs where we explore the potential of running OpenStack on FreeBSD! At the end of part one, I mentioned that the next section would be on the OpenStack Image (Glance) service, but after considering it some more I thought we would try for something a bit harder and go for OpenStack Object Storage (Swift) instead. Swift is a fully fledged user-facing service (actually my favourite of all the OpenStack services) which can also be used as an active-active, highly available, horizontally scalable backend for Glance.

So, grab your popcorn and settle in, because this job will be a bit more involved than the last one! If you aren't familiar with object storage as a concept, or with how OpenStack Swift is designed, it's definitely worth doing a bit of reading before proceeding, as this blog is intended more as an operator's guide than an introduction to either object storage or Swift itself.

To start with, we are going to create a new Vagrant environment building on the work we did in part one, but with some modifications to the OpenStack Identity service Keystone definition (you can tear down your Vagrant environment from part one as we will be starting this environment from scratch).

Here is the Vagrantfile we will be using for this guide (please see part one for the command to download the vagrant box we will be using below, hfm4/freebsd-10.0):


# API version constant (assumed to be "2", as in standard Vagrantfiles)
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "keystonebsd" do |keystonebsd|
    keystonebsd.vm.box = "hfm4/freebsd-10.0"
    keystonebsd.vm.hostname = "keystonebsd.sy3.aptira.com"
    keystonebsd.vm.network "private_network", ip: ""
    keystonebsd.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
    end
  end

  config.vm.define "swiftproxybsd01" do |swiftproxybsd01|
    swiftproxybsd01.vm.box = "hfm4/freebsd-10.0"
    swiftproxybsd01.vm.hostname = "swiftproxybsd01.sy3.aptira.com"
    swiftproxybsd01.vm.network "private_network", ip: ""
    swiftproxybsd01.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
    end
  end

  config.vm.define "swiftstoragebsd01" do |swiftstoragebsd01|
    swiftstoragebsd01.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd01.vm.hostname = "swiftstoragebsd01.sy3.aptira.com"
    swiftstoragebsd01.vm.network "private_network", ip: ""
    swiftstoragebsd01.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd01.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd01.vdi']
    end
  end

  config.vm.define "swiftstoragebsd02" do |swiftstoragebsd02|
    swiftstoragebsd02.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd02.vm.hostname = "swiftstoragebsd02.sy3.aptira.com"
    swiftstoragebsd02.vm.network "private_network", ip: ""
    swiftstoragebsd02.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd02.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd02.vdi']
    end
  end

  config.vm.define "swiftstoragebsd03" do |swiftstoragebsd03|
    swiftstoragebsd03.vm.box = "hfm4/freebsd-10.0"
    swiftstoragebsd03.vm.hostname = "swiftstoragebsd03.sy3.aptira.com"
    swiftstoragebsd03.vm.network "private_network", ip: ""
    swiftstoragebsd03.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
      v.customize ['createhd', '--filename', '/tmp/swiftstoragebsd03.vdi', '--size', 500]
      v.customize ['storageattach', :id, '--storagectl', 'IDE Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', '/tmp/swiftstoragebsd03.vdi']
    end
  end
end

As you can see, the main differences to the keystone definition are that we have changed the vm name and added a private_network definition. Once you have created the Vagrantfile, start up the keystone vm and follow the steps from part one of the guide, except when running the endpoint-create command you should point it at the private_network IP defined above.

$ vagrant up keystonebsd
$ vagrant ssh keystonebsd

(now from inside the vm)

$ sudo -i
# ...(follow all the steps in part one of this guide except the last endpoint-create command which follows as)
# /usr/local/bin/keystone --os-token ADMIN --os-endpoint endpoint-create --service=identity --publicurl= --internalurl= --adminurl=
# ...(run test commands from part one of this guide using instead of localhost as the --os-auth-url flag)

While we are logged into our keystone node we should also prepare the swift service, user/tenant and endpoints, and then we can exit out of our keystone node:

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url service-create --name swift --type object-store
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url tenant-create --name service
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url user-create --name swift --tenant service --pass password
# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url user-role-add --user swift --tenant service --role admin
# ...(you can confirm the above commands worked by using the test commands at the end of part one of this guide)
# exit
$ exit

Now the fun really begins: we can start spinning up our swift servers! First on the list is our swift proxy. As mentioned above, if you aren't sure what any of these services are or what their function is, it's worth reading the architecture overview (at minimum) to familiarise yourself before continuing:

$ vagrant up swiftproxybsd01
$ vagrant ssh swiftproxybsd01

(now from inside the vm)

$ sudo -i

Installing memcached, enabling it as a service and starting it:

# pkg install memcached
# echo 'memcached_enable="YES"' >> /etc/rc.conf
# service memcached start

Installing swift:

# pkg install python git wget
# pkg install py27-xattr libxslt
# pip install pbr six prettytable oslo.config python-keystoneclient netaddr keystonemiddleware
# git clone https://github.com/openstack/swift.git
# cd swift
# python setup.py install

Configuring swift user and required directories:

# mkdir /var/run/swift
# mkdir /etc/swift
# pw groupadd -n swift
# pw useradd -n swift -g swift -s /sbin/nologin -d /var/run/swift
# chown -R swift:swift /var/run/swift

Now we will copy over the provided example swift configuration files and modify them. swift.conf is first (below commands assume your current working directory is the cloned git repository):

# cp etc/swift.conf.sample /etc/swift/swift.conf

and modify the swift_hash_* lines in /etc/swift/swift.conf (again, same as in part one, we are not using secure values that would be required in production, only demonstrating the concepts):

swift_hash_path_suffix = suffsuffsuff
swift_hash_path_prefix = prefprefpref

then we will copy and modify the proxy-server.conf:

# cp etc/proxy-server.conf-sample /etc/swift/proxy-server.conf

Carefully modify the following sections, starting with [DEFAULT]:

bind_port = 8000
admin_key = adminkeyadminkey
account_autocreate = true

then [pipeline:main] (in this case we are modifying the application pipeline to remove the tempauth authentication module and instead use keystone authentication):

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo proxy-logging proxy-server

so we should also comment out the tempauth filter:

#use = egg:swift#tempauth

then uncomment and modify the authtoken filter to use the keystone configuration we setup at the start of this guide:

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host =
auth_port = 35357
auth_protocol = http
auth_uri =
admin_tenant_name = service
admin_user = swift
admin_password = password
delay_auth_decision = 1
cache = swift.cache
include_service_catalog = False

we also need to uncomment the keystoneauth filter:

use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
reseller_admin_role = ResellerAdmin

Now we can write that configuration file and exit. The next step is to build our ringfiles, which provide the logical storage configuration to all the swift servers (see the swift ring documentation for details). The three arguments to create below are the partition power (2^18 partitions), the replica count (3) and the minimum number of hours between moves of a given partition (1):

# cd /etc/swift
# swift-ring-builder container.builder create 18 3 1
# swift-ring-builder account.builder create 18 3 1
# swift-ring-builder object.builder create 18 3 1

once the ringfiles are created we can populate them with information about storage nodes and their devices (in the below configuration each storage node is configured as a "zone"):

# swift-ring-builder object.builder add z1- 100
# swift-ring-builder container.builder add z1- 100
# swift-ring-builder account.builder add z1- 100

# swift-ring-builder object.builder add z2- 100
# swift-ring-builder container.builder add z2- 100
# swift-ring-builder account.builder add z2- 100

# swift-ring-builder object.builder add z3- 100
# swift-ring-builder container.builder add z3- 100
# swift-ring-builder account.builder add z3- 100

and once the rings are populated the final step is to rebalance them:

# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance

Finally, we can set ownership of all the files in /etc/swift to the swift user and start the service! As I mentioned at the beginning of the guide, swift is an active-active, horizontally scalable service, so you can add as many swift proxies as you like to your Vagrantfile and repeat the above procedure for each of them. You don't need to build the ringfiles on each node; simply copy the gzipped ringfiles (/etc/swift/*.gz) to /etc/swift on each of the nodes (we will do this on the swift storage nodes as you will see below). Since the healthcheck middleware is in the proxy pipeline, you can also verify that each proxy is answering with a quick curl http://localhost:8000/healthcheck from the node.

# chown -R swift:swift /etc/swift
# /usr/local/bin/swift-init proxy start

Now we can start spinning up our swift storage nodes. I am only going to go through this procedure for one of the nodes; you should simply repeat it for each of the swift storage nodes defined in the Vagrantfile:

$ vagrant up swiftstoragebsd01
$ vagrant ssh swiftstoragebsd01

(now inside the VM)

$ sudo -i

Enabling and starting the ZFS service (we are using ZFS since I couldn't find documentation on using XFS, which is normally used when operating swift clusters, on FreeBSD) and configuring the second disk defined in the Vagrantfile as a ZFS pool:

# echo 'zfs_enable="YES"' >> /etc/rc.conf
# service zfs start
# zpool create swiftdisk /dev/ada1
# zfs set mountpoint=/srv/node/swiftdisk swiftdisk

Next, install rsync (which we will use as a daemon):

# pkg install rsync

and copy the following configuration to /etc/rsyncd.conf:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address =

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/spool/lock/object.lock

Enable the rsyncd service and start it:

# echo 'rsyncd_enable="YES"' >> /etc/rc.conf
# service rsyncd start

Installing swift:

# pkg install git wget python
# pkg install py27-xattr py27-sqlite3
# wget https://bootstrap.pypa.io/get-pip.py
# python get-pip.py
# git clone https://github.com/openstack/swift.git
# cd swift
# python setup.py install

Configuring swift user and required directories:

# mkdir /etc/swift
# mkdir /var/cache/swift
# mkdir /var/run/swift
# pw groupadd -n swift
# pw useradd -n swift -g swift -s /sbin/nologin -d /var/run/swift
# chown -R swift:swift /var/run/swift
# chown -R swift:swift /var/cache/swift
# chown -R swift:swift /srv

Now we will copy over the provided example swift configuration files and modify them. swift.conf is first (below commands assume your current working directory is the cloned git repository):

# cp etc/swift.conf.sample /etc/swift/swift.conf

and modify the swift_hash_* lines in /etc/swift/swift.conf (again, same as in part one, we are not using secure values that would be required in production, only demonstrating the concepts):

swift_hash_path_suffix = suffsuffsuff
swift_hash_path_prefix = prefprefpref

then we can copy the configuration files for the swift storage services (these should not require any modification):

# cp etc/account-server.conf-sample /etc/swift/account-server.conf
# cp etc/container-server.conf-sample /etc/swift/container-server.conf
# cp etc/object-server.conf-sample /etc/swift/object-server.conf

At this point you should copy the ringfiles created on the swiftproxybsd01 vm to /etc/swift. I set a password for the vagrant user and copied them using scp to each vm. This is definitely not a recommended method of doing things in production but is fine for demonstration purposes:

# scp vagrant@<swiftproxybsd01 IP>:/etc/swift/*.gz /etc/swift

Finally, we can set ownership of all the files in /etc/swift to the swift user and start all the storage node services!

# chown -R swift:swift /etc/swift
# /usr/local/bin/swift-init all start

After you have completed these steps on one node, repeat them for the remaining storage nodes defined in the Vagrantfile.

Once this is done, your swift service is up and running! You can test it now, so log back into your keystone node (or any computer that can reach the keystone and proxy endpoints) and install the swift client:

# pip install python-swiftclient

Now you can run some test commands!

# swift --os-auth-url= --os-username=admin --os-password=test123 --os-tenant-name=admin post testcontainer
# swift --os-auth-url= --os-username=admin --os-password=test123 --os-tenant-name=admin upload testcontainer get-pip.py
# swift --os-auth-url= --os-username=admin --os-password=test123 --os-tenant-name=admin upload testcontainer get-pip.py (for some reason the first object never shows up in stat so I always run it twice)
# swift --os-auth-url= --os-username=admin --os-password=test123 --os-tenant-name=admin stat

which, if everything worked correctly, should show the following output:

Account: AUTH_516f9ace29294cff91316153d793bdab
    Containers: 1
       Objects: 1
         Bytes: 1340903
X-Account-Storage-Policy-Policy-0-Bytes-Used: 1340903
   X-Timestamp: 1408880268.17095
X-Account-Storage-Policy-Policy-0-Object-Count: 1
    X-Trans-Id: txfc005fa6c999449b81e7b-0053fa972f
  Content-Type: text/plain; charset=utf-8
 Accept-Ranges: bytes

and that's it! You're now running two OpenStack services on FreeBSD. Once we have set up and installed Glance, the hard work of making OpenStack Compute (Nova) somehow work with FreeBSD begins. As I mentioned in part one, I am very interested in the idea of getting FreeBSD Jails supported in Nova! Stay tuned!

Things are busy here in Aptiraville. One thing we have been working on recently is upgrading our business intelligence platform, ADAPT. We originally ran this platform for our customer, Mercurial, on our VMware-backed OpenStack region, but as part of the upgrade we wanted to extend it to run on our own KVM-backed region as well.

So I tried to convert the existing VMDK to a QCOW2 file with:

qemu-img convert -f vmdk ws2012.vmdk -O qcow2 ws2012.qcow2

and a bunch of variations of that. Unfortunately, no matter what I did, WS2012 just wouldn't work correctly! After wasting a very frustrating day, I decided to rebuild an identical image from scratch in QCOW2 format to bypass the issue completely. To my further frustration, the available documentation on the internet for doing so is quite sparse and in some cases nonexistent. I thought it would be useful to write up exactly what I did this time as a blog post, so that others can hopefully use it as a resource and avoid the pain points that I had to go through (since Windows doesn't use text-based configuration files, making changes means booting the image, making the changes and re-uploading, rather than simply mounting and modifying the image).

First thing you will need is a computer capable of running KVM, with VT extensions enabled in the BIOS. My advice is to use a CentOS 6 machine, as I found that images I created on newer distributions like Ubuntu 14.04 or CentOS 7 would not boot on RHEL/CentOS 6 machines due to changes in the version of QCOW2 used. Creating the image on CentOS 6 means that the image you create will be bootable everywhere.

Along with this, you will need a Windows Server 2012 ISO and product key accessible on the computer (Aptira is a Microsoft SPLA partner), and the Fedora-signed VirtIO drivers. Make sure you install virt-install and virt-manager:

yum -y install virt-install virt-manager

Alternatively, if you don't want to install the GUI virt-manager onto the computer itself, you can install it on your local machine and connect to the computer over SSH with virt-manager.

Make sure you have enabled IP forwarding in sysctl so that the VM will be able to access the network when it comes online (if you want to copy scripts or other files onto it, for example):

echo 1 > /proc/sys/net/ipv4/ip_forward
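Note that the echo above takes effect immediately but does not survive a reboot. For a persistent setting on a CentOS 6 host, the equivalent line belongs in /etc/sysctl.conf (applied with `sysctl -p`):

```ini
net.ipv4.ip_forward = 1
```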

Create a preallocated QCOW2 image in /tmp (I tried making a thin provisioned one and the Windows installer thought the disk size was zero) and set permissions so libvirt can access it:

cd /tmp
qemu-img create -f qcow2 -o preallocation=full ws2012.qcow2 15g
chmod o+wx ws2012.qcow2

The 15GB disk size chosen above is arbitrary; the actual install takes up just under 7GB, so if you want to keep the image as small as possible then something close to that would be optimal.

Run virt-install to boot the VM with the WS2012 installer ISO and VirtIO driver ISO attached, the disk set as a VirtIO disk, and the NIC attached to the default virbr0 bridge. Unfortunately the most recent "os-variant" available for MS Windows in the CentOS 6 version of virt-install is "win7", so that is what I chose here, for better or worse:

virt-install --connect qemu:///system --arch=x86_64 -n ws2012 -r 2048 --vcpus=2 --disk path=/tmp/ws2012.qcow2,device=disk,bus=virtio,size=15 -c /mnt/Source/en_windows_server_2012_x64_dvd_915478.iso --vnc --noautoconsole --os-type windows --os-variant win7 --network=bridge:virbr0 --disk path=/mnt/Source/en_windows_server_2012_x64_dvd_915478.iso,device=cdrom,perms=ro -c /mnt/Source/virtio-win-0.1-81.iso

Once the virt-install command has started, you can view the GUI either by firing up virt-manager locally or remotely (connecting to the computer running virt-install via SSH), or by connecting directly to the VNC port (assuming no other KVM virtual machines are running on the same host).

Complete the Windows Server 2012 installation as you normally would (we prefer headless mode). During the installation Windows will complain that it can't find a disk to install to and will present a dialog to load drivers. The drivers should be in:


and you are looking to install the VirtIO SCSI device driver (we will install the remaining VirtIO drivers after the operating system installation is complete and the virtual machine is running).
Once the installation is complete, the machine will restart and you can set an administrator password and log in. The first step after a successful installation is to open a command prompt (in headless mode one is presented as soon as you log in) and use pnputil to install the remaining VirtIO drivers:

pnputil -i -a D:\WIN8\AMD64\*.INF

We also want to enable .NET 2.0 and .NET 3.0/3.5 frameworks (E:\SOURCES\SXS is a directory on the WS2012 installer ISO):

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:E:\SOURCES\SXS

At this point we want to copy over the installer for the latest .NET 4.5, any patches, scripts, etc., as well as Cloudbase-Init (we recommend the automated Jenkins-gated build). We will create a directory for these files, map a CIFS network share, copy the files, and then delete the share once the copy is complete:

net use x: \\APTIRAFILESERVER01\
mkdir c:\source
copy x:\OpenStack\ADAPT\setup\* c:\source\
net use x: /DELETE

and then install Cloudbase-Init. If you are not familiar with this tool, it's a Windows port of the popular cloud-init tool from Ubuntu Linux, which configures cloud virtual machines when they boot based on a variety of configuration sources (we use ConfigDrive, but the EC2-style metadata server, MaaS and others are also supported):


During the installation process you should change the user that Cloudbase-Init will manage from the default "Admin" to "Administrator", unless you have created (or plan to create) a special user for cloud-init purposes.

On the last dialog of the Cloudbase-Init installation you should select the option to run Sysprep and "generify" the virtual machine image. Don't select the option to shut down the virtual machine, as we still have one thing left to do.

After Sysprep has finished running, we need to set the PowerShell execution policy to Unrestricted to ensure that any scripts the user supplies at boot through the OpenStack nova --user-data flag (referred to as "Post-Creation" in the OpenStack dashboard) will actually run:

Set-ExecutionPolicy Unrestricted

During testing we noticed that, unfortunately, line breaks don't translate correctly when using the "Post-Creation" textbox in the "Launch Instance" dialog of the OpenStack dashboard with ConfigDrive (it works fine with metadata servers), so a Post-Creation entry of:

net user Administrator test123!

will appear in the user_data file as

#ps1_sysnativenet user Administrator test123!

so we recommend booting instances from the command line with python-novaclient and the --user-data=userdata.txt flag set when running "nova boot", where userdata.txt is a file written with an editor like Vi that supports writing files in DOS format (:set ff=dos).
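If you'd rather not rely on an editor, the same DOS (CRLF) line endings can be produced with GNU sed. A minimal sketch, using the example entry from above as the file content:

```shell
# Write the user-data with ordinary Unix (LF) line endings first...
printf '#ps1_sysnative\nnet user Administrator test123!\n' > userdata.txt

# ...then append a carriage return to every line (GNU sed understands \r
# in the replacement), giving the DOS/CRLF format the instance expects.
sed -i 's/$/\r/' userdata.txt
```

The resulting file can then be passed with "nova boot ... --user-data=userdata.txt" as described above.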

Once this is completed you can shut down the virtual machine immediately with:

shutdown -s -t 0

Now your image is ready to upload to the closest glance server for a test run (exciting!):

glance image-create --name WS2012 --disk-format=qcow2 --container-format=bare --is-public=True --progress --file /tmp/ws2012.qcow2
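A pitfall here is the --disk-format flag: an image registered with the wrong format generally won't boot. It's worth double-checking what you're uploading; as a hypothetical sanity check (the is_qcow2 helper name is mine), a qcow2 file always begins with the 4-byte magic "QFI\xfb":

```shell
# Returns success if the file starts with the qcow2 magic bytes "QFI\xfb"
# (we only compare the three printable bytes here for simplicity).
is_qcow2() {
  [ "$(head -c 3 "$1")" = "QFI" ]
}

# usage: is_qcow2 /tmp/ws2012.qcow2 && echo "safe to upload as --disk-format=qcow2"
```

If qemu-img is available, `qemu-img info` gives the same answer with more detail.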

Now you can enjoy Windows Server 2012 as a first class instance citizen on your KVM based OpenStack cloud like Aptira does!

Late last night I was struck by a flash of inspiration and wondered to myself: if Oracle could do it with OpenSolaris, how hard would it be to get OpenStack working on FreeBSD?


Over the coming months (whenever I get some free time) I’m going to try and see how far I can proceed in running the various OpenStack services on FreeBSD. I imagine most of the “control plane” components will be relatively painless to get going and I might even have a go at writing a nova-compute driver for FreeBSD Jails based on the OpenSolaris Zones work or perhaps the nova-docker or LXC drivers and see if something similar can be done for OpenStack Networking (or nova-network if necessary).


But for today let’s start at the easy end of the scale and see what it takes to get the OpenStack Identity (Keystone) service running on FreeBSD!


First up I will add a FreeBSD 10 VirtualBox box to Vagrant (I tried a few on vagrantcloud.com and this seemed the best one). If you're not familiar with Vagrant, I definitely recommend checking out the documentation as it's a great tool:


$ vagrant box add hfm4/freebsd-10.0


and produce a simple Vagrantfile for it:




VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "bsdstack" do |bsdstack|
    bsdstack.vm.box = "hfm4/freebsd-10.0"
    bsdstack.vm.hostname = "bsdstack.testbed.aptira.com"
    bsdstack.vm.provider "virtualbox" do |v|
      v.customize ['modifyvm', :id, '--memory', '2048']
    end
  end
end

After a quick


$ vagrant up


to bring up my FreeBSD10 virtual machine and then


$ vagrant ssh

$ sudo -i


to log in to it and switch to root, my testbed environment is ready!


Now before we continue any further I will stress that what I'm implementing here is a proof of concept, so security is not really a consideration, and you should keep that in mind if you ever decide to attempt this yourself on any internet-connected server.


Installing the python, git and wget packages:


# pkg install python git wget


Installing pip from PyPI:


# wget https://bootstrap.pypa.io/get-pip.py

# python get-pip.py


Installing libxslt:


# pkg install libxslt


I generally use MariaDB as my backend these days, so let's install and start that too and create a database called keystone; then we can get into the configuration steps:


# pkg install mariadb55-server mariadb55-client

# echo mysql_enable=\"YES\" >> /etc/rc.conf

# service mysql-server start

# mysql -u root -e "CREATE DATABASE keystone;"



Clone the keystone git repository and install it with setup.py:


# git clone https://github.com/openstack/keystone.git

# cd keystone/

# python setup.py install


We will also need a couple of PyPI packages not installed by the above process:


# pip install pbr

# pip install MySQL-python


and with those simple steps, keystone is installed and ready to use! That was pretty painless!


The next step is to copy the sample keystone config to /etc/keystone, rename and configure (these commands assume being run from inside the cloned git repository):


# cp -r etc/ /etc/keystone

# cd /etc/keystone

# mv keystone.conf.sample keystone.conf

# mv logging.conf.sample logging.conf


Edit the keystone.conf file with your favorite editor; the following changes in the appropriate sections are all that's really required:
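For reference, the changes boil down to something like the following. This is a sketch assuming the values used elsewhere in this walkthrough (the ADMIN service token used by the bootstrap commands and the local MariaDB keystone database); the section and option names are from the Keystone of this era and may differ in other releases:

```ini
[DEFAULT]
# must match the --os-token passed to the bootstrap commands below
admin_token = ADMIN

[database]
# the MariaDB database created earlier, accessed via MySQL-python
connection = mysql://root@localhost/keystone
```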






Now we can do a database sync and start keystone:


# /usr/local/bin/keystone-manage db_sync

# /usr/local/bin/keystone-all &


If we have done everything correctly we should be able to authenticate against the service endpoint of keystone with the admin token and make a call to verify it worked (note there will be no output, just a blank line).


# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-list


Next, let's set up an admin tenant/user, an admin role, service and endpoints:


# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ tenant-create --name=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-create --name=admin --tenant=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-password-update --pass=test123 admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ role-create --name=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ user-role-add --user=admin --tenant=admin --role=admin

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ service-create --name=identity --type=identity

# /usr/local/bin/keystone --os-token ADMIN --os-endpoint http://localhost:35357/v2.0/ endpoint-create --service=identity --publicurl=http://localhost:5000/v2.0 --internalurl=http://localhost:5000/v2.0 --adminurl=http://localhost:35357/v2.0
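As an aside, the keystone client of this era also reads the service token and endpoint from environment variables, which saves repeating the two flags on every call (variable names per the old python-keystoneclient CLI):

```shell
# With these exported, the commands above reduce to e.g.
#   /usr/local/bin/keystone user-list
export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0/
```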


Once that is done we can test the new user we created and see whether everything is working:


# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 user-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 tenant-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 user-role-list --user=admin --tenant=admin

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 endpoint-list

# /usr/local/bin/keystone --os-tenant-name admin --os-username admin --os-password test123 --os-auth-url http://localhost:5000/v2.0 service-list


and there we go! OpenStack Identity running on FreeBSD!


Join us next time, when we will try to set up the OpenStack Image (Glance) service on FreeBSD.

Nehru Group of Institutions (NGI) recently started a cloud computing initiative and established a cloud lab as part of their Cloud Excellence Centre. NGI decided to host a meetup at the Cloud Excellence Centre to provide industry exposure to their students. Several industry players were invited to the event, including companies like Aptira, Reliance Jio Infocomm, Tashee Linux Services and many more.

NGI CEO Mr Krishnan kicked off the event with a keynote in which he spoke about the importance of new technology trends for educational institutions and providing the very best for their students. He also spoke about how OpenStack had helped NGI introduce cloud computing into their curriculum and quickly roll out a cloud lab using commodity hardware. He outlined the future for NGI, including launching an organisation-wide OpenStack cloud for faculty and students to use.

Kavit Munshi from Aptira spoke next about the Education sector and OpenStack. He outlined the advantages of an OpenSource solution like OpenStack and how it could help educational institutions in more areas than just teaching IT and Computer Science. Kavit presented several use cases including the NeCTAR project. Kavit also spoke about the shifting trends in Education with the advent of MOOCs and online learning.

Kritika from the Anita Borg Institute spoke about the pressing need for women to get into technology and how the Anita Borg Institute was giving scholarships to women to achieve that. She also spoke about the various employment opportunities the Anita Borg Institute was creating by working with industry leaders. Her talk was well received by students and professionals alike.

After a tea break, Dheeraj Khare from Tashee spoke about the importance of OpenSource and gave various examples from his experiences in the industry. He also gave sound advice to the students present to help them pick a career path in OpenStack and OpenSource. The talk centred on getting the students to understand the role of OpenStack in the rapidly changing IT landscape.

Next, Debanshu from Dell spoke about Big Data and OpenStack. His talk covered the basics of Big Data for those in the audience who did not know what the technology entailed and how it differed from existing database technologies. He also described how OpenStack could help with the deployment of a Big Data solution, and covered the various projects in the OpenStack ecosystem around Big Data.

Bharath Kumar, a student from NGI, presented the various student projects happening at the NGI cloud labs based on OpenStack. It was interesting to see how far the students had come in the short time since they picked up OpenStack. They were planning to write several SaaS and PaaS solutions which could be consumed by other departments inside their institution, improving the quality of service the IT department offers. Worth noting were their Lectures on Demand and MOOC project, along with a mobile app for students to manage their curriculum, all running on an OpenStack cloud.

Divyanshu from NetApp was the last to present and talked about going beyond OpenStack. He gave a very motivating talk about the power of OpenSource and OpenStack. His talk focused on driving the students towards entrepreneurship and a career in IT beyond a standard corporate job. He also spoke about the burgeoning start-up scene in India and how the OpenSource cloud could foster it.

The meetup was a success, with over 60 attendees, and hopefully we can be in Coimbatore again soon.



The second OpenStack India day @ GNUnify Pune was held on the 14th of February 2014. The event was organised by the Pune Linux User Group and Symbiosis University. This year we saw a fairly large turnout of over a hundred people, and we were also glad to see a lot of new faces at the event. The OpenStack track was focused on driving up student participation in the OpenStack development process, and had the following topics:

1) Keynote and Introduction to OpenStack – Kavit Munshi (Aptira)
2) Interning @ OpenStack – Sayali Lunkad (OpenStack Intern)
3) Ironic Project updates – Rohan Kanade (Izel Tech)
4) Introduction to OpenStack Development 101 – Pranav Salunke (Aptira)
5) OpenStack Scalability and Interoperability – Sajid Akhtar (Reliance Jio Infocomm)
6) OpenStack and Big Data – Deepak Mane (TCS)

After the keynote by Kavit Munshi from Aptira, Sayali Lunkad gave an interesting talk about her experiences as an OpenStack intern working with the Foundation. Sayali was selected through the Outreach Program for Women organised by the GNOME Foundation, and her project involved working with Ceilometer and Horizon. There was a lot of interest from the students attending the meetup, who wanted to know how they could get involved as well. Next, Rohan Kanade gave a very engaging and in-depth talk about the Ironic project. He was inundated with questions from the professionals at the event about Ironic and the progress being made by the Ironic team.

The next talk of the day was conducted by Pranav Salunke from Aptira. Pranav did a session on Introduction to OpenStack Development 101. The talk took the students through the various steps required to set up the development environment and the various OpenSource software required, like Gerrit, Jenkins, Git etc. Pranav and Rohan also demonstrated live bug fixing by identifying a very small bug in Nova and uploading a commit using Git. Pranav also demoed how to fix bugs on the OpenStack documentation project by fixing a bug in the OpenStack training manuals. This session was very well received by students and professionals alike.

The penultimate session of the day consisted of Sajid Akhtar from Reliance Jio Infocomm talking about the unique challenges in scaling OpenStack and how interoperability between various cloud providers may work. Sajid also gave an overview of some LSPE (Large Scale Production Environment) design fundamentals and considerations. The last session of the day was given by Deepak Mane from TCS about running big data and Savanna on OpenStack. He talked about the best practices and various design issues people could run into, while keeping his talk simple enough to be understood by students.

I would like to thank the students at Symbiosis, the Pune Linux Users Group and the Symbiosis Institute of Computer Studies and Research for hosting us and providing a wonderful platform to connect with the Pune community at large. Looking forward to the next GNUnify event in 2015!

There is a special place in our hearts at Aptira for the OpenStack Object Storage project/service known as Swift. Many of us had prior experience with it when we came to Aptira and since we started we've done a whole bunch of interesting implementations for our customers in Australia and APAC.                      
After the proliferation of several new ecosystem projects there have been renewed discussions in the community asking "what is core?". This seems to have driven some recent questions from customers asking whether we felt Swift was "extensible/pluggable enough", and whether it perhaps even needed some kind of architectural re-design to "bring it in-line with other OpenStack services". We thought a quick blog post with our 2c was in order.
Our opinion on the "what is core?" question is pretty simple:
  • Compute
  • Storage
  • Networking
We also recognise the value of ancillary services like Horizon and Keystone but don't want to take it beyond that. Services (and their respective projects) which fall beyond that scope should have a designation like "ecosystem". However that is simply our aggregate internal opinion (2c) and we recognise that the community is currently in the ongoing process of trying to answer this question.
Internally, and amongst most of the other implementers we speak with in the OpenStack community, the consensus is that Swift has always represented the gold standard for what an OpenStack service should be:
  • Robust active-active architecture
  • Flexible implementation
  • Horizontal scale
  • Rock-solid codebase and API
  • Easy to upgrade from n-1 release to n
  • Strong and open development community
You'll note that these bullet points basically boil down to "ops friendly" and "production friendly", since we are largely an ops company that relies on the OpenStack community at large to "close the loop" when it comes to DevOps. We have always felt that OpenStack services should have some form of "common minimum requirement", and a framework that implements it, so that all the services are implemented consistently and with the same HA model. As noted above, we have always felt that Swift represents an exemplar of what those requirements and that framework might look like.

As well as that we can provide a couple of examples from the coalface of cloud:

  • We are in the middle of a few middleware projects right now to customise how Swift works for the requirements of particular customers. This ranges from custom authentication to minor changes to the existing (extremely) flexible implementation architecture.
  • At the same time we are collaborating with a major storage vendor (I won't say which, but it's not one of the 3-letter acronyms) on combining Swift with their hardware for a very large scale deployment at a higher education institution in Australia. On this front I can tell you for a fact that the major proprietary storage vendors are scrambling to make sure their solutions work with Swift. If you're already a customer or partner, it might pay to ask what the roadmap and current status are.

Let's make it really clear: if you want Swift on top of some other storage solution, it can do that (we will be able to talk more about this mid-year). If you want to extend or modify Swift, you can do that too, and it's not even going to require any major departures from the existing architecture. I have a niggling suspicion that this is simply a matter of education and marketing, and that by and large the external perception of OpenStack is still as a "compute" thing only; the OpenStack Foundation could probably reconsider their messaging here. Better messaging about what an object store is, the use cases and problems it solves, and why Swift does such a fine job of it could go a long way to changing those perceptions (as well as driving developer adoption).

We think Swift is great and would love to discuss how Aptira can help you integrate Swift with your existing storage infrastructure.


These past few weeks have seen a few articles and press releases posted that Aptira, myself and OpenStack have gotten a mention in.

Firstly, one that I'm sorry to say I missed the publication of back in March (sorry Rohan!) was a case study on the Sydney Gay and Lesbian Mardi Gras organisation and how they scale from a few staff in the off-peak season to thousands of staff and volunteers in festival season. Aptira is proud to sponsor and support the Mardi Gras organisation, providing their virtual desktops on our cloud infrastructure with the ability to scale up as their demand rises dramatically in season. In April we also took over running their web site, which experiences the same variations in load. Here's the article in Computerworld.

Next up, our awesome partners at Equinix did a case study on how we work with them and house our main operations in their state-of-the-art data centre in Sydney, SY3. Here's a link to download the case study.

Next were two press releases that went out when Michael, Sina, Iain, Andy and I manned the OpenStack booth at CeBit a few weeks ago. Here's Sina and Michael being booth babes.

aptira staffs openstack booth at cebit

The press releases were about OpenStack obviously, and we also did one for Piston Enterprise OpenStack, as we spent 3 days showing hundreds of people how we could build them an enterprise cloud in roughly 13.5 minutes, from complete bare metal and a raw switch, using just a USB key. Piston really have OpenStack nailed. Here are the releases for your reading pleasure:

http://www.prweb.com/releases/2013/5/prweb10772569.htm and http://www.cebit.com.au/cebit-in-the-media/2013/aptira-presents-piston-enterprise-openstack-at-cebit

At CeBit I had an interview with a very nice bloke, Stuart Corner, and what followed were a couple of articles in the Fairfax media. The first article I was quoted in was the very first mention of OpenStack (and Aptira, for that matter) in tier-one media in Australia. I felt pretty proud of that, and even my Mum and Dad now think there's something in this OpenStack stuff. Here's the article:


It is true that OpenStack presents a unique opportunity for nations around the world to develop cloud solutions from the ground up, and I do hope our government (whichever way that game goes) soon does something serious to encourage investment in local talent and technology, else I can see Aptira moving to Singapore or any of a myriad of places that have better investment regimes than Australia. The mining boom is gone; wake up, Australia.

The second article from Stuart was a bit disappointing in two ways: Stuart should have checked the facts about OpenStack adoption before publishing, and Angus should have been better informed. No, Rackspace was not the first to launch an OpenStack public cloud in Australia; Haylix beat them by well over a year. It was also disappointing that Angus didn't know of the efforts of the current deployers of OpenStack in the country. Most notably, providing a world-class, world-leading OpenStack deployment that aids serious work like genomics, bushfire and disaster research and a bunch of other good things (NeCTAR) is not playing around.


Good on Ruslan Kogan for continuing to lead online in Australia. I know Angus is very keen to do better press next time, which is great!

Anyway, it's late I need to sleep...


On the 15th of December OpenStack India meetup group held a full day event at Bangalore. The event had over 120 attendees and speakers from companies such as Aptira, Dell, HP, Canonical, Ericsson etc. Aptira was one of the main sponsors of the event. You can read about it on the OpenStack blog by Atul Jha.

Earlier this week we held what may be the first multi-city video-linked OpenStack User Group meetup. Tristan has blogged about it on openstack.org.

Our CEO Tristan Goode has had an article featured in the online publication Technology Spectator, and a thought-provoking piece it is. The rest of the Aptira family are quite proud that our leader is being recognised and published as a technology thought leader and cloud computing expert.

Please follow the link:


tristan technology spectator cloud

I believe a key role of the OpenStack Foundation is to protect the brand that is OpenStack. I also believe that training is of key importance to advancing OpenStack.

Recently Rackspace advertised offering "Rackspace Certified Training for OpenStack". In some media this was interpreted as certified OpenStack training, it was tweeted by Rackspace staff as "OpenStack Certified Technician", and it was emailed to all summit attendees that Rackspace could "accelerate your career by becoming one of the first Certified OpenStack technicians".

Let's be clear here: Rackspace DO NOT have any sort of official certification for OpenStack from OpenStack. They are tying the words "Certified" and "OpenStack" together in PR, and quite frankly, it makes things grey when they should be black and white.

The main problems I see:

1. The timing is unfortunate because there is no clear Board/Foundation policy as yet.
2. I believe it is the Foundation’s right to determine what is certified and what is not under the terms of the Foundation’s trademarks.
3. This type of activity is clearly open to abuse. Any operator regardless of size, integrity or professionalism could adopt a similar approach. Therefore it needs to be dealt with urgently.

Leaving Rackspace's actions aside, I made a suggestion to the Foundation and Board mailing lists for a certification process for training materials and deliverables: that they be offered to an OpenStack community-sourced committee for review and approval. If this proposal is accepted, it would be nice to see Rackspace offer up their course materials for this review, which might then form a baseline for establishing the certification benchmark.

While certified qualifications may not be highly valued in the USA, courses that offer an official certification are highly valued in Australia, Asia and Europe as clear markers of achievement. The integrity of that certification is key.

If we cannot gather enough support to resist such opportunistic behaviour then we (as a community) risk letting this run away on us. The Foundation needs to formulate a considered and effective response on this very important topic. Anything else risks tarnishing OpenStack.


We are very pleased to announce that Aptira is once again being recognized as a thought leader in the Managed Services and OpenStack community. We have been featured in the Rust Report, and our CEO Tristan Goode was asked about our business and his thoughts regarding the marketplace in general. 

It was a good opportunity for Aptira to get some press around the leaps and bounds that OpenStack is making in the region and the part we are playing in making that come to fruition.

Aptira is getting the attention it deserves as a disruptive technology provider and those that matter are starting to take notice, which is good because we are out to make as much noise in the marketplace as possible.

The featured article can be found here at the Rust Report interview with Aptira.

tristan talks openstack rush report

The latest news from team Aptira is our hosting of the Sydney event for the Australian OpenStack user group last night. It was a fantastic night in which Aptira provided a venue for OzStackers from Sydney to get together, drink beer and learn from our resident technical masters about OpenStack.

Most importantly of all, it is where community is built, over a few beers and many laughs. This is the essence of what Aptira is all about: sharing knowledge and helping as much as possible. There was also solid representation from enterprise organisations on the night, from industries as diverse as managed services, video rendering and financial data mining, all looking to become involved and contribute to the OpenStack project. Interestingly, they are all looking to deploy OpenStack in their production environments in the not too distant future.

This proves not only that the community is growing but that big business is starting to take notice of the maturity of OpenStack as a platform that can potentially provide huge value and flexibility in their infrastructure environments.

The news from all of our friends in other OpenStack cells across the country who held similar events in the other capital cities was just as promising, that an inclusive and increasingly business focused community is growing and growing quickly.

There were many brilliant technical discussions that everyone learned from, and the big buzz is definitely around the increased capabilities of the Quantum component of the new “Folsom” version of OpenStack, slated for release on the 27th of September. This promises to give network virtualisation a huge boost in functionality, and the community waits with bated breath to see it in full flight.

Ultimately the night was in aid of celebrating the fact that the Foundation can now give solid direction to the project through the board, and that the project can develop in a much more coherent and focused way. It is also to support the people around the world who have tirelessly given their time and expertise to make the project what it is: people such as our own CEO, Tristan Goode, who now sits on the Foundation Board and gives his time to help make the project what it is today, and who is one of many dedicated individuals from users and vendors alike.

Most importantly though, the night was FUN! All tiers of the IT industry were represented, from end users to resellers and distributors, celebrating the OpenStack Foundation’s inception and what it means for users all around the world. We got some great feedback about what the user groups would like to see, which helps us to help the community and deliver for OpenStack users. We will be doing just that over the next few months, with several events planned for November and December, so watch this space!

Indian OpenStack User group - Chennai Meetup

The Indian OpenStack User Group’s first Chennai meetup was held on the 19th of September 2012 at the Malles Manotta hotel in T. Nagar, Chennai. We saw a decent turnout of 31 people at the event. The event started off with Yogesh from CSS Corp talking about OpenStack and its various components, which led to a very interesting Q&A session about the OpenStack ecosystem and cloud computing in general. Next, a demo of an OpenStack setup was given by Johnson and Yogesh from CSS Corp. This was very interesting, and a lot of first-time users got to see the workings of an OpenStack solution first hand. Atul from CSS Corp then spoke about the OpenStack Foundation and gave us a brief overview of the Foundation and its layout. Kavit from Aptira then spoke briefly on the impact of OpenStack on SMEs and the hosting community in general. After the talks a delicious lunch was provided, and the users continued to chat and network. This was a successful first meetup in Chennai, and we hope there are many more to come.

(Photo: Kavit at the Indian OpenStack User Group Chennai meetup)

(Photo: OpenStack dashboard demo at the Indian OpenStack User Group Chennai meetup)

Indian OpenStack User group - Bangalore Meetup

The Bangalore chapter of the Indian OpenStack User Group held its 4th meetup on the 22nd of September at JP Nagar. The venue was provided by Ahimanikya Satapathy, and 20 users attended the meetup. Deepak Garg of Citrix gave a demo and live installation of DevStack and discussed the various developments happening in the world of OpenStack. Kavit Munshi of Aptira gave a talk on the OpenStack Foundation and the future direction of OpenStack. The users networked over pizza and drinks, and the community discussed what they wanted to see at future OpenStack meetups. The organisers from the various cities have also decided to collaborate more closely with each other and with users to organise more focused meetups.


Kavit Munshi