In this and the next post, we’ll be covering the last two components of the Open Network Software domain of Open Networking. This post is about Software Defined Networking, and the next post will be on Network Functions Virtualisation.
By the mid-2000s, as massive demand for network services drove enterprises to build larger and larger networks, a paradox emerged: product innovation at the component level was still heavily dominated by proprietary vendor architectures, yet scaling and operationalising networks at the levels demanded by corporations like Google and Amazon – and even the US Government – was problematic, and expensive.
“Once you bought a piece of networking hardware, you didn’t really have the freedom to re-program it. … You would buy a router from Cisco and it would come with whatever protocols it supported and that’s what you ran.”
– Scott Shenker, UC Berkeley computer science professor and former Xerox PARC researcher (https://www.wired.com/2012/04/nicira/)
At one level, there is good reason for this: operational stability.
“You buy switches from a company and you expect them to work. A networking company doesn’t want to give you access and have you come running to them when your network melts down because of something you did.”
– Scott Shenker
On the other hand, this characteristic gave the network vendors enormous control and leverage over their marketplaces. Companies that bought their products were highly dependent on these vendors, both for feature innovation and for addressing operational issues.
A small number of companies sought to address these dependencies. In 2005, Google started to build its own networking hardware, in part because it needed more control over how the hardware operated. By 2010, Google was again building its own networking hardware, this time for its “G-Scale Network”. But Google was less interested in building its own network hardware than it was in driving the software-isation of the network stack.
“It’s not hard to build networking hardware. What’s hard is to build the software itself as well.”
– Urs Hölzle, Google’s SVP of Infrastructure
In 2008, Google deployed an internally developed load balancer called Maglev, built entirely in software running on commodity hardware.
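Google later published the Maglev design, whose core is a consistent-hashing lookup table: each backend derives its own permutation of table slots, and the backends take turns claiming their next free slot, which spreads traffic evenly while keeping disruption small when backends change. Here is a minimal sketch of that table-population idea in Python – the hash choices and table size are illustrative assumptions, not Google’s implementation:

```python
import hashlib

def _h(key: str, seed: str) -> int:
    # Illustrative hash; any well-distributed hash function works here.
    return int(hashlib.md5((seed + key).encode()).hexdigest(), 16)

def maglev_table(backends, m=13):
    """Build a Maglev-style lookup table of size m (m should be a prime
    comfortably larger than len(backends))."""
    # Each backend gets a full permutation of the m slots, derived from
    # an offset and a skip; because m is prime, every slot is visited.
    perms = []
    for b in backends:
        offset = _h(b, "offset") % m
        skip = _h(b, "skip") % (m - 1) + 1
        perms.append([(offset + j * skip) % m for j in range(m)])
    # Backends take turns claiming the next free slot in their permutation.
    table = [None] * m
    nxt = [0] * len(backends)
    filled = 0
    while filled < m:
        for i in range(len(backends)):
            while True:
                slot = perms[i][nxt[i]]
                nxt[i] += 1
                if table[slot] is None:
                    table[slot] = backends[i]
                    filled += 1
                    break
            if filled == m:
                break
    return table
```

At forwarding time, a packet’s connection 5-tuple is hashed to a slot (`table[hash(flow) % m]`), so every software load-balancer instance independently picks the same backend for the same flow.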
Although Google was quite capable of developing software components internally, when it came to the G-Scale Network it would turn to an outside development – and change the balance between hardware and software forever.
In 2003, Martin Casado, while studying at Stanford, began to develop OpenFlow, software that enabled a new type of network that exists only as software and that can be controlled independently of the physical switches and routers running beneath it. Casado wanted a network that could be programmed like a general-purpose computer and that could work with any networking hardware.
“Anyone can buy a bunch of computers and throw a bunch of software engineers at them and come up with something awesome, and I think you should be able to do the same with the network. We’ve come up with a network architecture that lets you have the flexibility you have with computers, and it works with any networking hardware.”
– Martin Casado (https://www.wired.com/2012/04/nicira/)
This approach is called Software Defined Networking (SDN). With SDN, the network hardware became less relevant. Large users were able either to develop software solutions on commodity hardware or to go directly to the manufacturers in Asia (who also supplied the big network vendors) and buy the basic network hardware themselves.
Software Defined Networking purposefully combines the flexibility of software development with the raw power of network devices to produce an intelligent network fabric.
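The core mechanism OpenFlow introduced for this split is the match-action flow table: a central controller installs rules on the switch, and the switch’s data path simply looks packets up against them. The toy model below sketches that idea in Python – it is an illustration of the concept only, not the real OpenFlow protocol, which matches on a defined set of header fields and speaks a binary wire protocol between controller and switch:

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict      # header fields that must match, e.g. {"dst_ip": "10.0.0.2"}
    action: str      # e.g. "forward:port2" or "drop"
    priority: int = 0

class FlowTable:
    """Toy model of an OpenFlow-style match-action table.
    The controller installs rules; the switch only does lookups."""
    def __init__(self):
        self.rules = []

    def install(self, rule: FlowRule):
        # Controller -> switch: add a rule, highest priority wins.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)

    def lookup(self, packet: dict) -> str:
        # Switch data path: first matching rule decides the action.
        for r in self.rules:
            if all(packet.get(k) == v for k, v in r.match.items()):
                return r.action
        return "send_to_controller"   # table miss: punt to the controller
```

The separation is the point: all of the intelligence (which rules to install, and what to do on a table miss) lives in software on the controller, so the network can be reprogrammed without touching the hardware.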
In 2011, the Open Networking Foundation (ONF) was founded to transfer control of OpenFlow to a not-for-profit organisation, and the ONF released version 1.1 of the OpenFlow protocol on 28 February 2011. Interest boomed, prompting new product lines from startup vendors offering SDN capabilities on open-hardware equipment.
Google rolled out its “G-Scale Network”, built entirely on OpenFlow, by 2012, and many of the top-ranked internet companies implemented SDN-based networks in parallel or soon after.
Based on SDN, software now plays a controlling part in the open network and enables a new set of applications to be built that leverage this network fabric. With the rollout of SDN as a major driver of network design, deployment, operation and evolution, we see the requirements for success changing. Successful networks are now less about bolting together nodes and links with (relatively) pre-defined characteristics. Successful networks now start to take on aspects of the Software Development Lifecycle (SDLC). This change has huge implications for enterprises of all sizes because it completely changes the fundamental paradigm of how distributed computing resources are deployed.
Hot on the heels of SDN came another key technology within Open Networking: Network Functions Virtualisation (NFV). NFV further entrenches the software-isation of computing infrastructure and the need for a ‘software first’ paradigm.
We’ll see more about NFV in our next post. Stay tuned.