As enterprise data centers evolve, one constant remains: networks are growing at an unprecedented pace in both capacity and complexity. Yet most of today’s data centers are still invested in a fixed architecture better suited to the early 1990s, one ill-equipped to handle the rapidly intensifying network traffic generated by new business applications, video, content streaming, and the unstoppable force of BYOD.
What is the solution to this mounting problem?
Hint: it’s not more aggregation to bigger (and more expensive) core switches.
Enterprise data centers need the ability to scale appropriately, with speed and agility, no matter what demands are added to the system. That’s what hyperscale offers, but current fixed architectures don’t make hyperscale easy or accessible, especially for enterprise customers on a limited budget. To get past that limitation, we need an architecture that supports the flexibility and agility of hyperscale in data centers, from the ground up. We need a new equation: one that extends virtualization beyond compute to the network and to the physical connectivity between its elements.
Unquestionably, virtualization has been a key component in the growth of data center networks. Compute virtualization has been widely adopted by data centers, resulting in billions of dollars in savings over the past decade. It has enabled elasticity: workloads can be managed, reconfigured, or moved in software from a screen – with no human hands touching the hardware.
Similarly, network virtualization will save billions of dollars for data centers over the next decade and beyond. To do so, however, it must bring the network the same benefits that compute virtualization brought to compute. It must encompass every element of the network: switching, storage, server hardware, and the physical connectivity that lets those elements communicate with each other.
Connectivity virtualization refers to the ability of network operators to manage, reconfigure, or move and reconnect the physical connectivity between servers, switches, storage devices, and routers. As with compute virtualization, this can be accomplished dynamically via software from a screen. Connectivity virtualization will further advance network intelligence and the hyperscale movement by exposing the properties of the actual fiber optic cables – their exact connection points and capacity – along with the new routing options opened up by dynamically reconfigurable optical cross-connects.
Core switches in a network can be replaced with optical cross-connects plus a central orchestration system that truly separates the control and management planes from the data plane. Software-controlled fiber optics can provide programmable direct-attach capability from any port to any server, storage, switch, or router port across the data center. Silicon photonics, together with the switching capability of the PCIe bus, will further advance hyperscale solutions.
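To make the idea concrete, here is a minimal sketch of connectivity virtualization as software: an optical cross-connect modeled as a programmable port map that a central orchestrator reconfigures from a screen. All class and port names are hypothetical illustrations, not a real device API.

```python
class OpticalCrossConnect:
    """Toy model of an optical cross-connect: each port is patched
    to at most one peer port, forming a bidirectional light path."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.patch = {}  # port -> peer port

    def connect(self, a, b):
        """Create a bidirectional light path between two ports."""
        if a not in self.ports or b not in self.ports:
            raise ValueError("unknown port")
        self.patch[a] = b
        self.patch[b] = a

    def disconnect(self, a):
        """Tear down the path on a port (and on its peer)."""
        b = self.patch.pop(a, None)
        if b is not None:
            self.patch.pop(b, None)

    def peer(self, port):
        """Return the port currently patched to `port`, if any."""
        return self.patch.get(port)


# Orchestrator view: re-home a server from storage to a router uplink
# purely in software -- the "direct-attach from any port to any port" idea.
oxc = OpticalCrossConnect(["server1", "server2", "storage1", "router1"])
oxc.connect("server1", "storage1")   # server1 <-> storage1 light path
oxc.disconnect("server1")            # operator reconfigures...
oxc.connect("server1", "router1")    # ...now server1 <-> router1
```

The design point is that the "patch panel" becomes state in a controller rather than cabling in a rack, so reconnecting elements is a software operation.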
Earlier SDN technology created an abstraction layer on top of traditional switching hardware. This abstraction layer enabled a hypervisor-to-hypervisor packet switching architecture by creating tunnel-like pathways through the existing switches. Because this technology still depended on the pre-existing data center design, it was also subject to the limitations of both the traditional hardware and the incumbent fixed-architecture philosophy. Fiber optic connectivity programmed via centralized software, together with packet flow decisions made in a centralized orchestration system, opens up previously unreachable possibilities and will lead to a new generation of far more agile SDN solutions.
Will this new architecture get rid of all switches? No – or not immediately – but it will reduce the need for switches within the data center, especially the most expensive ones, the core switches that centrally aggregate traffic from all the rows and racks. In time, just as a single physical server now contains many virtual servers, we will see the evolution of host hardware that consolidates switches along with those virtual servers into one piece of equipment. The first examples of this were switch blades alongside server blades in chassis, but with silicon photonics and high-density fiber connectivity we are moving toward a more integrated switch-and-server solution. In essence, what we know as a server today will consume the switch, performing its own switching at the edge of the network, guided by an overall network orchestration system.
How Do We Get There?
Currently, most data center networks process packets at the Top of Rack switch, then forward them to a larger End of Row or Spine switch, only to forward them once more to the Core switches. If 75 percent of all network packets traverse three or more switch hops, hemmed in by the guardrails of often conflicting protocols, imagine what would happen if every one of those packets could shorten its journey by just one hop, one packet-processing switch. The impact would be significant: fewer switches required, less heat to dissipate, lower maintenance contract costs, and less rack space consumed.
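The hop-reduction argument above can be sketched as a back-of-envelope calculation. The traffic share comes from the text; the packet count and per-hop latency are purely illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch of removing one packet-processing hop.
# THREE_HOP_SHARE is from the article; the other numbers are assumptions.

PACKETS = 1_000_000          # packets in some observation interval (assumed)
THREE_HOP_SHARE = 0.75       # share traversing ToR -> EoR/Spine -> Core
PER_HOP_LATENCY_US = 2.0     # assumed per-switch processing latency, microseconds

# Each affected packet skips exactly one switch traversal:
saved_traversals = PACKETS * THREE_HOP_SHARE
saved_latency_s = saved_traversals * PER_HOP_LATENCY_US / 1e6

print(f"switch traversals avoided: {saved_traversals:,.0f}")
print(f"aggregate switching latency avoided: {saved_latency_s:.2f} s")
# -> 750,000 traversals and 1.50 s of aggregate switching time,
#    under these assumed numbers
```

The absolute figures matter less than the scaling: every avoided traversal is work a switch no longer does, which is where the savings in switch count, heat, maintenance, and space come from.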
Tomorrow’s networks must adopt a distributed switching architecture at the edge of the network, and let the rapid, low-latency highways that software-controlled, software-configured fiber optic connections provide carry traffic from edge to edge. To reach hyperscale, we need to remove the “speed bumps,” reducing latency and cost while providing fast, simple, efficient networks.