Testing 10G Data Center Switches: Scaling 10 Times Higher

By David Newman, Network Test | January 18, 2010

If there’s one overarching conclusion I’ve drawn from three months of testing 10-gigabit top-of-rack data center switches, it’s that “switch” and “data center switch” are very different beasts.

Understanding the latter means testing new features like virtualization support and storage/data network convergence, while also driving unicast and multicast scalability benchmarking to new heights.

In a project recently published in Network World, we compared switches from six vendors, each with at least 24 10-gigabit Ethernet ports. We compared products in 10 areas: features; usability; power consumption; MAC address capacity; forward pressure; multicast group capacity; multicast group join/leave delay; link aggregation hashing fairness; and, of course, basic unicast and multicast performance.
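One of those areas, link aggregation hashing fairness, asks how evenly a switch spreads flows across the member links of an aggregate. The real hash functions are vendor-specific, but the measurement idea can be sketched in a few lines of Python; here the tuple hash of a synthetic 5-tuple stands in for the switch's hash, and all names are illustrative, not any vendor's API:

```python
# Sketch of a hashing-fairness check: map flow 5-tuples onto member links
# and compare the resulting per-link counts. Python's tuple hash is only a
# stand-in for a switch's (vendor-specific) hash function.
import random

def distribute(flows, n_links):
    counts = [0] * n_links
    for flow in flows:
        counts[hash(flow) % n_links] += 1
    return counts

random.seed(1)
# Synthetic flows: (src IP, dst IP, protocol, src port, dst port)
flows = [(random.randint(1, 2**32), random.randint(1, 2**32), 6,
          random.randint(1024, 65535), 80) for _ in range(10_000)]

counts = distribute(flows, 4)
fairness = min(counts) / max(counts)   # 1.0 = perfectly even split
print(counts, round(fairness, 2))
```

A fairness ratio near 1.0 means the aggregate delivers its full combined bandwidth; a poor hash can leave one link saturated while others sit idle.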

Performance testing remains as important as ever, if not more so, when it comes to data center switching. That’s an important point: Buyers are interested in these new features, to be sure, but only in addition to switches’ long-time role as fast packet pushers.

In other words, the same industry-standard methods of benchmarking switching and routing performance (as defined in RFCs 2544, 2889 and 3918) remain vitally important in the context of data center switching. In fact, line-rate performance and low latency and jitter are even more important for many data center applications than for general-purpose enterprise networking.
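The core of the RFC 2544 throughput test is a binary search for the highest rate a device forwards with zero loss. Here's a minimal sketch of that search, with a stubbed `send_at_rate` function standing in for a real traffic generator and device under test (the stub and its 72% loss-free limit are invented for illustration):

```python
# RFC 2544-style throughput search: offer traffic at a rate, check for loss,
# and binary-search for the highest zero-loss rate.

def send_at_rate(rate_pct, loss_free_limit=72.0):
    """Stub for a tester trial: pretend the device drops frames above
    an (arbitrary, illustrative) loss-free limit of 72% of line rate."""
    return rate_pct <= loss_free_limit  # True means no frames were lost

def rfc2544_throughput(trial, resolution=0.1):
    low, high = 0.0, 100.0              # percent of line rate
    best = 0.0
    while high - low > resolution:
        mid = (low + high) / 2
        if trial(mid):                  # no loss: search higher
            best, low = mid, mid
        else:                           # loss: search lower
            high = mid
    return best

print(round(rfc2544_throughput(send_at_rate), 1))  # → 72.0
```

A real tester repeats each trial for a fixed duration per frame size; the search logic, though, is exactly this simple.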

Data center switches tend to have much longer feature lists than their wiring-closet counterparts. We paid special attention to three areas. First is redundancy protocols. Some data center switches offer new methods to connect multiple servers and/or switches. Some methods even eliminate slower redundancy protocols, such as spanning tree. Others offer “active/active” connectivity across multiple links until a failure occurs, boosting bandwidth in a way that “active/standby” protocols such as spanning tree cannot. While new protocols are intriguing, it’s a good idea to test their resiliency and benchmark failover times before deploying them in your network.
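A common way to benchmark those failover times is the loss-derived method: run continuous traffic at a known rate through the redundant path, trigger a failure, and convert the frames lost during reconvergence into an outage duration. A sketch of the arithmetic, with illustrative numbers:

```python
# Loss-derived failover measurement: frames lost during the failure event,
# divided by the offered frame rate, gives the outage duration.

def failover_time_ms(frames_lost, offered_rate_fps):
    """Outage duration in milliseconds from frame loss and offered rate."""
    return frames_lost / offered_rate_fps * 1000.0

# Example: 14,880,952 fps is line rate for 64-byte frames on 10G Ethernet.
# Losing 148,810 frames at that rate implies roughly a 10 ms outage.
print(round(failover_time_ms(148_810, 14_880_952), 1))  # → 10.0
```

The same calculation works for any redundancy mechanism, which makes it handy for comparing, say, spanning tree reconvergence against a vendor's proprietary active/active protocol.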

Second, some switches allow the convergence of previously separate storage and data networks using technologies such as Fibre Channel or Fibre Channel over Ethernet (FCoE) on the same switch. The IEEE has defined several new protocols to accommodate FCoE’s stringent delay and loss requirements. We didn’t test these protocols this time, since only one switch tested supported all these new mechanisms, but we look forward to comparing data/storage performance in upcoming tests.

Finally, data center switches support virtualization in a variety of ways. Since virtual machines (VMs) often move between physical hosts in the data center, some switches offer the ability to have the VMs’ access control policies move with them. Other switches can carve up physical interfaces to appear as multiple logical links to different sets of VMs. And some support end-to-end management of physical and virtual switches, offering the same set of capabilities for both.

I’m looking forward to future tests comparing Fibre Channel and FCoE performance, as well as even larger-scale tests of modular data center systems. We’re still in the early days of data center switch testing, and things will only get bigger from here.

Newman is president of Network Test, an independent test lab and engineering services consultancy. He can be reached at dnewman@networktest.com.

