Continuing our analysis of the Open Network Software domain, this post is about Network Function Virtualisation (NFV).
In Open Networking, the network is no longer just a pipe: it is part of the computing infrastructure, performing computing functions of its own rather than merely moving data between computing devices.
As we saw in the last post, Software Defined Networking (SDN) revolutionised Open Networking by implementing network connectivity in software instead of hardware.
The next step in the evolution of Open Networking came rapidly: software network appliances that implemented the “interesting” functions of the network. In October 2012, almost immediately after the first major rollout of OpenFlow, Network Function Virtualisation (NFV) was born via a white paper published at the SDN and OpenFlow World Congress in Darmstadt, Germany.
The white paper was presented by an Industry Specification Group (ISG) called “Network Function Virtualisation”, established within the European Telecommunications Standards Institute (ETSI) and comprising 28 representatives from 13 telcos across Europe, China, Australia and the USA.
This white paper outlined the building blocks of NFV and envisaged its purpose to be:
Network Function Virtualisation enables Network Services to be easily and flexibly created from software components that can be stitched together “on the fly”. These components implement network functions such as firewalls, load balancers, network address translation (NAT), intrusion detection, domain name service (DNS), and so forth.
Many of these network functions were previously implemented as dedicated, and often proprietary, physical network devices that had to be instantiated in the network and configured individually to provide an element of end-to-end service capability.
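To make the “stitched together on the fly” idea concrete, here is a minimal Python sketch that composes two toy network functions into a service chain. The functions, addresses and drop policy below are purely illustrative assumptions of mine, not anything defined in the white paper:

```python
# A toy service chain: each network function is a callable that takes a
# packet (here just a dict) and returns it, possibly modified, or None
# to drop it. "Stitching on the fly" becomes function composition.

def nat(packet):
    # Rewrite the source address (illustrative NAT behaviour).
    return dict(packet, src="203.0.113.1")

def firewall(packet):
    # Drop telnet traffic as an example policy; pass everything else.
    if packet.get("dst_port") == 23:
        return None
    return packet

def chain(*functions):
    """Compose network functions into a single service, in order."""
    def service(packet):
        for fn in functions:
            if packet is None:  # an earlier function dropped the packet
                return None
            packet = fn(packet)
        return packet
    return service

# An end-to-end service assembled from software components.
edge_service = chain(firewall, nat)
```

Adding, removing or reordering functions in the chain is then a software change rather than the re-cabling and per-box configuration that dedicated appliances required.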
The 2012 white paper outlined three basic building blocks of NFV:
- Virtualised network functions (VNFs): software implementations of network functions that can be deployed on NFV infrastructure.
- Network function virtualisation infrastructure (NFVI): the hardware and software environment where VNFs are deployed, which may span several locations.
- Network function virtualisation management and orchestration (NFV M&O): the component responsible for managing and orchestrating the NFVI and VNFs.
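As a rough mental model of how the three building blocks relate (my own sketch, not the ETSI reference architecture), VNFs are deployable software units, the NFVI is the environment that hosts them, and M&O is the layer that places and manages them:

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    """A virtualised network function, e.g. a firewall or load balancer."""
    name: str
    state: str = "onboarded"

@dataclass
class NFVI:
    """One site of the hosting environment; the NFVI may span many."""
    site: str
    vnfs: list = field(default_factory=list)

class Orchestrator:
    """NFV M&O: decides where VNFs run and tracks their state."""
    def __init__(self, infrastructure):
        self.infrastructure = infrastructure  # list of NFVI sites

    def deploy(self, vnf, site):
        # Place the VNF on the named site and mark it running.
        nfvi = next(n for n in self.infrastructure if n.site == site)
        nfvi.vnfs.append(vnf)
        vnf.state = "running"
        return vnf
```

The class and attribute names here are invented for illustration; the real ETSI model is considerably richer, but the division of responsibilities is the same.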
In the above model, NFVIs are the basic cloud infrastructure we’ve covered in earlier posts, with additional standards-based network function virtualisation capabilities.
VNFs are the building blocks of services and were originally conceived as software equivalents of existing hardware products, as seen in the early VNF implementations from major Network Equipment Providers (NEPs) such as Ericsson, Cisco and F5.
As a building block, however, there is no underlying architectural reason this needs to be the case: VNFs are just domain-specific cloud applications and can be designed to implement the required functions in any number of architectural configurations. This “like for like” VNF implementation of existing products is more an artefact of NFV’s relative youth – remember that the NFV spec is younger than the TV series of the Game of Thrones novels! More recent VNF implementations are built from the ground up as software components, especially in open source projects.
Software development lifecycle (SDLC) factors are emerging as much bigger influences on VNF design than existing product architectures. For example, VNFs integrate better into customer solutions if they are designed as cloud-native applications, i.e. if they follow best-practice cloud design recommendations such as the twelve factors or the “pets vs. cattle” analogy. These drivers, and the general openness of software, will result in much more interesting VNF architectures in the future.
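As one concrete example of what “cloud native” means in practice, a twelve-factor application reads its configuration from the environment rather than baking it into the artefact, so the same VNF image can run unchanged in any NFVI. The variable names and defaults below are invented for illustration:

```python
import os

# Twelve-factor principle III: store config in the environment.
# A VNF built this way is configured by its orchestrator at deploy
# time, not by editing files inside the image.
def load_config():
    return {
        "mgmt_port": int(os.environ.get("VNF_MGMT_PORT", "8080")),
        "upstream_dns": os.environ.get("VNF_UPSTREAM_DNS", "9.9.9.9"),
        "log_level": os.environ.get("VNF_LOG_LEVEL", "INFO"),
    }
```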
The third Network Function Virtualisation building block is orchestration: the management of VNFs within the NFVI. Orchestration performs on-boarding, lifecycle management and policy management for new Network Services (NS). It enables the Open Networking solution to capture the benefits of software-enabled network functions more fully, but it depends on a high level of interoperability.
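To illustrate the lifecycle-management part of orchestration, here is a toy state machine for a network service. The states and transitions are a simplified sketch of my own, not the ETSI-normative lifecycle:

```python
class NetworkService:
    """A network service driven through its lifecycle by the orchestrator."""

    # (current state, action) -> next state; illustrative only.
    TRANSITIONS = {
        ("onboarded", "instantiate"): "instantiated",
        ("instantiated", "start"): "running",
        ("running", "scale"): "running",      # scaling keeps it running
        ("running", "stop"): "instantiated",
        ("instantiated", "terminate"): "terminated",
    }

    def __init__(self, name):
        self.name = name
        self.state = "onboarded"

    def apply(self, action):
        key = (self.state, action)
        if key not in self.TRANSITIONS:
            raise ValueError(f"cannot {action} while {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

The value of orchestration is that these transitions are driven by policy and software across the whole NFVI, rather than by per-device manual configuration.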
Interoperability is a key benefit of Open Networking components, but it is possible to implement NFV components in ways that don’t fully capture these benefits. Paradoxically, the software-isation trend is both a blessing and a potential curse in the quest for better interoperability: for example, there is nothing to prevent a VNF from being implemented in a non-cloud-native way, or from implementing only part of the reference APIs.
Based on current experience, we see a very wide range of VNF implementations in the market, including from established network vendors. The implication for integrators and users is that VNFs need to be carefully onboarded and validated on a case-by-case basis.
In the last 4 posts, we’ve covered software trends in general and three components of Open Networking, but we’ve only scratched the surface of this huge topic.
A natural next step in our Open Networking series would be to continue with the third and last domain (Open Network Integration). However, I think it’s important that we continue our examination of the implications of the software-isation of networking and computing infrastructure.
So, for the next few posts we are going to take a “software interlude” to unpack this topic in more detail. Then we’ll come back to Open Network Integration and wrap up our series.
Stay tuned.