Network downtime can be incredibly expensive: the price tag can run up to $60 million per year for North American organizations, according to IHS Research. The bottom-line impact includes both hard costs, such as lower productivity or lost sales, and softer costs such as customer or employee frustration.
Obviously, the faster IT teams can resolve issues, the less downtime there is. Yet this fast issue resolution requires organizations to be able to see what’s happening on the network – you can’t fix what you can’t see.
Implementing an efficient network packet brokering architecture in the data center is a proven way to achieve this network visibility.
Moreover, organizations that take this step not only decrease the bottom-line business impact of downtime, but also see direct IT benefits, including lower total cost of ownership (TCO) for hardware and software, greater return on investment (ROI) from overall IT purchases, and a stronger digital advantage over the competition.
Some of the concrete benefits of an efficient packet brokering architecture include:
- Fewer Troubleshooting Trips – A complete visibility and monitoring architecture allows network operations (NetOps) teams to quickly access and assess any part of the data center architecture, from servers dangling off leaf nodes to core systems or high-performance compute (HPC) clusters. Not only can this remote management and visibility speed troubleshooting and remediation, it can also allow NetOps teams to proactively take action to head off impending issues by modifying policies, security rules and routes, and/or changing appropriate filters. Best of all, it eliminates the need to send technicians on expensive trips to investigate problems on-site; accurate remote management of devices can reduce troubleshooting costs by up to 70%.
- Improved Security – Packet brokers can also efficiently feed network traffic to specialized monitoring and security tools, allowing these systems to better do their job by eliminating blind spots and missed traffic. Better security means fewer data breaches and less downtime caused by denial-of-service attacks. In the long run, this translates to lower TCO.
- Opportunity for Better Analytics and Forensics – If the packet brokering architecture includes a full ecosystem of packet capture, storage and analytics solutions, it offers even more benefits. IT can now centrally manage and monitor individual packet brokers as well as collect key performance indicators like latency, perform session-level analysis and more, correlating and visualizing this data in real time on easy-to-use dashboards. Capture devices can copy traffic from each packet broker and store it for future forensic analysis by specialized security and network performance monitoring (NPM) tools, allowing IT and security operations (SecOps) teams to investigate network issues or security threats in greater detail.
- Greater Efficiency and Faster Problem Identification – De-duplication, the ability to detect and eliminate duplicate packets, is a feature of advanced packet brokering tools that can significantly improve efficiency and help reduce network downtime and costs. De-duplication has two main benefits. First, it reduces the traffic sent to downstream tools, allowing those tools to operate at peak performance. Network infrastructure devices such as switches and routers that are operating normally should not generate much duplicate traffic; however, duplicate packets commonly appear in segments of the network due to poor network design, flaws in the topology, or misconfigured or failing equipment. That leads to the second benefit: packet brokers that detect and remove these duplicate packets help IT quickly identify and fix the underlying issues.
- Shorter Mean-Time-to-Resolution (MTTR) – Ultimately, the packet brokering architecture makes the entire network more efficient and reduces the MTTR across the board. Solving problems faster means less downtime and lower costs.
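The de-duplication step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes packets arrive as raw bytes, identifies each packet by a hash of its first 64 bytes, and treats a repeat seen within a short time window as a duplicate. The window length and key length are illustrative choices.

```python
import hashlib
import time

DEDUP_WINDOW_S = 0.05   # 50 ms window -- an illustrative order of magnitude
KEY_BYTES = 64          # leading bytes hashed to identify a packet

class Deduplicator:
    """Sketch of in-window packet de-duplication."""

    def __init__(self, window=DEDUP_WINDOW_S):
        self.window = window
        self.seen = {}  # digest -> timestamp when last seen

    def is_duplicate(self, packet: bytes, now=None) -> bool:
        now = time.monotonic() if now is None else now
        digest = hashlib.sha1(packet[:KEY_BYTES]).digest()
        last = self.seen.get(digest)
        self.seen[digest] = now
        # A real broker would also evict stale entries; omitted for brevity.
        return last is not None and (now - last) <= self.window

dedup = Deduplicator()
stream = [b"pkt-A" * 20, b"pkt-B" * 20, b"pkt-A" * 20]  # A appears twice
forwarded = [p for p in stream if not dedup.is_duplicate(p)]
# Only one copy of A (plus B) is forwarded to downstream tools.
```

Hardware brokers do this at line rate with dedicated hash tables, but the logic is the same: duplicates seen inside the window are dropped, so the tools behind the broker see each packet once.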
The Right Network Packet Broker for the Job
Each layer of the network will require a different packet broker and/or network test access point (TAP) device based on the speed and performance of that layer – one size does not fit all.
For example, switches at the leaf layer tend to be 10Gbps, although some are now being upgraded to 25Gbps. Spine/core layer switches have traditionally been 40Gbps, but many customers are quickly migrating to 100Gbps speeds to keep pace with today’s network requirements.
Organizations considering a 100Gbps upgrade (or wishing to future-proof a monitoring infrastructure so it stays in place for many years) will need packet brokers that can process packets at 100Gbps speeds while remaining compatible with tools restricted to lower speeds.
Processing 100Gbps traffic without dropping packets is technically challenging, so choose this equipment carefully.
TAPs should be strategically placed along the most important traffic routes. This is easier (and cheaper) on a north-south spine as there are fewer links.
Tapping east-west traffic at leaf switches can be challenging – there are so many connections that exclusive use of TAPs is usually too expensive and difficult to manage. A better east-west choice is a mix of TAPs, virtual packet brokers and switch port analyzer (SPAN) ports.
An effective way to tap the network while controlling costs is to use a two-tier design where the outer layer of packet brokers aggregates and feeds the traffic from all TAPs to a more powerful central packet broker, which processes and distributes the necessary packets to security and performance monitoring tools.
The central or core packet broker also performs more intense operations such as smart filtering, packet truncation, de-duplication, and matching of data rates to what each specific tool can ingest. This allows IT to use less powerful, less expensive packet brokers to aggregate the traffic and only one higher-cost broker for processing.
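The central broker's operations can be pictured as a simple pipeline. The sketch below is purely illustrative, with hypothetical names and limits: it filters a batch of packets with a predicate ("smart filtering"), truncates each surviving packet to a snap length, and drops the excess once a per-tool ingest budget is exhausted (a crude stand-in for rate matching).

```python
SNAP_LEN = 128  # truncate packets to their first 128 bytes ("packet slicing")

def process(packets, want, tool_budget):
    """Filter, truncate, and rate-limit one batch of packets for one tool.

    packets     -- iterable of raw packet bytes
    want        -- predicate deciding whether the tool should see the packet
    tool_budget -- max packets the tool can ingest from this batch
    """
    out = []
    for pkt in packets:
        if not want(pkt):            # smart filtering
            continue
        if len(out) >= tool_budget:  # rate matching: drop what the tool can't take
            break
        out.append(pkt[:SNAP_LEN])   # truncation cuts downstream load
    return out

# Hypothetical batch: two IPv4 packets and one IPv6 packet (first nibble is
# the IP version), sent to a tool that only wants IPv4 traffic.
batch = [b"\x45" + bytes(200), b"\x60" + bytes(300), b"\x45" + bytes(500)]
ipv4_only = lambda p: p[0] >> 4 == 4
sent = process(batch, ipv4_only, tool_budget=2)
```

In a real deployment these steps run in hardware on the core broker, which is exactly why the two-tier design works: the inexpensive outer brokers only aggregate, and the single powerful core device does the heavy processing.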
An efficient and appropriate TAP/broker architecture will improve visibility, aid troubleshooting and ensure peak efficiency and operation for network tools and equipment.
With this actionable insight, NetOps and SecOps teams are better able to reduce MTTR, eliminate costly in-person troubleshooting trips, reduce downtime and lower network TCO.
For businesses that depend on the network – that is to say, most businesses – eliminating downtime through network visibility is critical to the bottom line.