We saw at the end of the last post that the concept of cloud computing emerged in the late 1990s. However, bringing the Cloud into viable existence still required technological step-changes. That’s what we’ll look at in this post.
We start at the turn of the 21st century, in the wreckage of the dot-com bust. Although most of the speculative capital had been lost, the large investments made in high-speed network backbones and server infrastructure remained largely intact.
And there were still many large and viable internet-based businesses with growing markets. After a relatively short contraction, innovation brought new business ideas to market every day.
These organisations were building super-dense computing platforms and were intimately familiar with internet connectivity, but building out infrastructure in this way propagated the operational problems we outlined in the last post.
Two capabilities emerged to help resolve these problems: the convergence of server virtualisation with hyperconnectivity, and the creation of the Representational State Transfer (REST) architectural style for Application Programming Interfaces (APIs).
The concept of server virtualisation is one physical machine supporting multiple virtualised machines. But if you abstract this concept to its most essential form, virtualisation can be described as: decoupling a workload’s resource requests from the physical hardware that services them, so that each request is directed to a resource queue endpoint rather than to a specific device.
With the availability of ubiquitous and high-speed networking, we can easily conceive of architectures where these resource queue endpoints are almost anywhere. They certainly can exist on other computing nodes in a dense platform. But with low network latency these resource endpoints could also be placed at remote locations.
From this perspective, Cloud Computing is just server virtualisation writ large, applied across a broader set of dense and modular infrastructure components.
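To make that abstraction concrete, here is a minimal sketch in Python. The class and method names are purely illustrative (not any real virtualisation API): a workload is written against a resource endpoint, and whether that endpoint is serviced by local hardware or by a node across the network is invisible to the caller.

```python
# Illustrative only: the endpoint abstraction behind virtualisation.
# A workload asks an endpoint for a resource; whether that endpoint is backed
# by local hardware or by a remote node is invisible to the caller.
from abc import ABC, abstractmethod


class BlockStorageEndpoint(ABC):
    """A resource queue endpoint for block reads (hypothetical interface)."""

    @abstractmethod
    def read(self, block_id: int) -> bytes: ...


class LocalDisk(BlockStorageEndpoint):
    def read(self, block_id: int) -> bytes:
        # Serviced by hardware attached to this physical machine.
        return f"local block {block_id}".encode()


class RemoteVolume(BlockStorageEndpoint):
    def __init__(self, host: str) -> None:
        self.host = host

    def read(self, block_id: int) -> bytes:
        # In reality this would be a network call; with low enough latency
        # the workload cannot tell the difference.
        return f"remote block {block_id} from {self.host}".encode()


def run_workload(storage: BlockStorageEndpoint) -> None:
    # The workload depends only on the abstract endpoint, not the hardware.
    print(storage.read(7))


run_workload(LocalDisk())
run_workload(RemoteVolume("storage.some-region.internal"))
```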
To produce a viable Cloud, it turns out we need one more thing: a modular and decoupled software architecture. (We’ll cover software in future posts, but it’s important to mention here in the context of enabling Cloud computing).
In 2000, the API landscape was fragmented and challenging. Alongside many proprietary interfaces, the two “standards” were SOAP and CORBA, both notoriously difficult to implement and maintain.
In his doctoral dissertation, Roy Fielding proposed the REST architectural style, which promoted easy-to-use APIs built on plain HTTP. This approach was rapidly adopted, and the number of public REST APIs exploded.
Not only did the introduction of REST promote simplicity of interconnection, it also sharpened the focus on decoupling applications from infrastructure.
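To see why REST caught on so quickly, compare the ceremony of SOAP or CORBA with the sketch below: a resource is just an HTTP URL, and standard verbs operate on it. The endpoint, fields, and responses here are hypothetical, purely for illustration, not any real service.

```python
# Illustrative only: a REST-style interaction using plain HTTP and JSON.
# No WSDL, no IDL, no stub generation -- just a URL and standard HTTP verbs.
import requests

# Read a resource with GET (hypothetical endpoint).
response = requests.get("https://api.example.com/v1/orders/42")
order = response.json()
print(order.get("status"))

# Create a resource with POST, sending a JSON body.
new_order = {"item": "book", "quantity": 1}
response = requests.post("https://api.example.com/v1/orders", json=new_order)
print(response.status_code)  # typically 201 Created on success
```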
The ingredients for Cloud Computing are in place, and along comes Amazon, Inc.
Around 2000, Amazon realises that they have developed core skills in delivering infrastructure to service their internal needs, but that infrastructure delivery is still a major bottleneck for themselves and their customers.
Amazon realises the need to deliver infrastructure services much faster and more efficiently at global web scale. They also realise that using this infrastructure to deliver capabilities to their partners and customers means decoupling their code from the underlying infrastructure far more cleanly, with well-defined interfaces and access APIs. Amazon becomes a very early adopter of REST.
Multiple initiatives emerge at Amazon that eventually produce the initial offerings of Elastic Compute Cloud (EC2) and Simple Storage Service (S3). Developers flock to the platform, and AWS fully launches in 2006.
From Amazon, the public cloud is born, enabling companies to build infrastructure services quickly and scale globally for a fraction of what it cost during the dot-com boom.
Amazon takes Sun’s vision to a global level: to leverage the internet to integrate web-scale applications.