
The Network is the Computer. Part 1 – Foundations: The Unfulfilled Promise


As we saw in parts 1 & 2 of this series, Open Networking has a large footprint covering many different technologies, skills and practices. To help you understand that footprint, we’re starting a series of posts that will look at the evolution of these components and show you how they became Open Networking as we define it today. We start with the Infrastructure domain.

In 1984, a legendary company made a bold but, at that time, unfulfillable promise:

The network is the computer

John Gage, Sun Microsystems

Sun was born from the explosion of microelectronics advances in the mid-1970s that initiated the long growth curve described by Moore’s Law. Sun pioneered a range of computing servers and workstations that brought computing power directly to workgroups and smaller enterprises.

Previously, computing services were provided by large centralised mainframes, where all the computing work was done. Networks merely provided a pipe to get access to that central resource, using largely passive devices such as terminals and printers. 

This architecture produced a centralised and rigid management structure with tight control over computing. Centralised control had some benefits, e.g. security, resource optimisation and budget control, but it also resulted in inflexibility, huge backlogs, lack of engagement with end users and many well-publicised disaster projects.

The emergence of workgroup servers and workstations delivered computing power independently of the centralised technical and management architecture. Personal Computers (PCs) opened access to computing power to even larger numbers of people when they became commodity products in the early 1980s.

At the time, networking and computing were very distinct technology ecosystems which combined in very limited and fixed ways. These two ecosystems were almost completely separated across the supply chain, organisational structure, architectural principles, implementation, and operation. 

The introduction of departmental servers and PCs helped to circumvent and undermine this centralised control model but didn’t change the basic paradigm of computing and networks as entrenched and disparate ecosystems. In part this was due to the limited and expensive options available from the telecommunications carriers, who operated in highly regulated telecommunications marketplaces that suppressed competition and innovation.

A great gap was opening in the industry: between the late 1980s and the early 1990s the cost of computing fell rapidly, but the cost of connectivity remained high. At the same time, the number of physical devices in distributed computing systems exploded, as did end-user demand. Innovation created products that solved problems at the margin but did not resolve the fundamental gap in the long term. For example: PC terminal cards, black box protocol emulators, and products that overlaid departmental networks on top of mainframe terminal networks. Solutions were possible but, in many cases, very messy.

Sun had wanted to promote a paradigm of broad computing availability enabled by network integration, but the promise seemed less deliverable than ever. 

The problems were not only in network-land. The sheer number of compute devices was creating its own problems, driven by two aspects of the IT marketplace at that time:

  • Servers were stand-alone devices with their own cabinetry, power supplies and so forth, designed in part to operate in uncontrolled environments; even so, in many cases they were still housed in controlled data centre environments for security and ease of operation.
  • Cheap commodity hardware encouraged a simplistic architecture in which server components were physically dedicated to a single solution function. For example, a cluster of 10 web servers meant spinning up 10 physical devices; likewise with database servers, email servers, application servers and so forth.

These two factors produced huge numbers of boxes in “server farms”, and data centres struggled to cope with the exploding demand for power, cooling and physical space as more and more organisations sought to implement connected computing solutions, and more and more applications were found to use this computing power within organisations.

Solutions were found that provided the necessary functionality, but compatibility and interoperability were problematic.  The cost of supporting solutions that integrated disparate components grew rapidly. 

How did these problems get solved?  We will cover that in the next post.  Stay tuned. 
