Reduce the Number of Server-to-Network Uplinks in Your Data Center

An easy way to minimize the number of transceivers, cables, and networking devices in a data center is to minimize the number of uplinks per server. As easy as that sounds, the path to that goal is rarely trivial.

The first obstacle you might encounter while trying to do that is virtualization architects using obsolete vSphere design practices. The following diagram compares the VMware recommendations for ESXi version 4 with the newer recommendations for ESXi versions 5 and above.

vSphere version 4 had no QoS, so VMware recommended using separate interfaces for VM traffic, kernel traffic like vMotion, and storage traffic, resulting in at least six network interfaces per server (assuming minimum redundancy).

Even with vSphere 4 you could design better networks using the QoS features available in Cisco Nexus 1000V or the virtual network interfaces available in UCS blade chassis. You’ll find more details in the vSphere 6 Networking Deep Dive webinar.

In vSphere 5, VMware implemented rudimentary QoS called Network I/O Control (NIOC) and started recommending two 10GE interfaces per server, with QoS configured on the server uplinks to allocate bandwidth to VM, storage, and vMotion traffic.
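
To make that concrete, here's a minimal Python sketch of how shares-based allocation works during congestion: each traffic class gets at least its proportional slice of the uplink. The share values below are illustrative assumptions, not VMware defaults.

```python
# Minimal sketch of NIOC-style share-based allocation on one congested
# 10GE uplink. The share values are illustrative assumptions.

UPLINK_GBPS = 10

shares = {
    "vm": 100,       # virtual machine traffic
    "storage": 100,  # iSCSI/NFS traffic
    "vmotion": 50,   # vMotion traffic
}

total_shares = sum(shares.values())

for traffic_class, share in shares.items():
    guaranteed_gbps = UPLINK_GBPS * share / total_shares
    print(f"{traffic_class:8s} gets at least {guaranteed_gbps:.1f} Gbps under congestion")
```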

Do keep in mind that the current version of vSphere is vSphere 6, and that vSphere 5 came out in 2011. Unfortunately, I was still hearing about six NICs per server years after vSphere 5 shipped.

What’s the impact of the changes in the recommended design as you go from an old server running vSphere 4 to a new, more powerful server running vSphere 5 or 6? You can increase the virtualization ratio (for example, going from 10 VMs per server to 50 VMs per server) while at the same time reducing the number of interfaces.

If you had a 48-port Gigabit Ethernet switch less than a decade ago, you could connect eight servers with 80 VMs to it. Today you can connect 24 servers running over a thousand VMs to a single 48-port 10GE switch, an enormous reduction in network size and the corresponding complexity.
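
The arithmetic behind those numbers is simple; the following sketch just restates it, using the per-server NIC counts and VM densities mentioned above (your figures will obviously differ).

```python
# Back-of-the-envelope comparison of the two designs described in the text.
# NIC counts and VM densities follow the figures used above; adjust to taste.

SWITCH_PORTS = 48

old = {"nics_per_server": 6, "vms_per_server": 10}   # vSphere 4 era, 1GE
new = {"nics_per_server": 2, "vms_per_server": 50}   # vSphere 5/6, 10GE

for label, d in (("old design", old), ("new design", new)):
    servers = SWITCH_PORTS // d["nics_per_server"]
    vms = servers * d["vms_per_server"]
    print(f"{label}: {servers} servers, {vms} VMs per 48-port switch")
```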

After consolidating the server network interfaces, it’s time to redesign the storage network. Traditional servers had Ethernet interfaces dedicated to VM or management traffic, and separate Fibre Channel storage interfaces. Modern designs consolidate all traffic on a single set of uplinks and use IP-based storage or FCoE.

Using IP-based storage (NFS or iSCSI) instead of Fibre Channel results in the greatest reduction in port count and complexity:

  • The number of server uplinks (and corresponding optics and cables) is reduced by half;
  • The number of access (leaf) switches is reduced by half;
  • You’re removing a whole technology stack.
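
To put rough numbers on those bullets, here's a back-of-the-envelope sketch comparing a dual-attached LAN-plus-FC design with an IP-storage-only design. The per-server port counts, the 24-servers-per-rack figure, and the redundant switch pairs are assumptions, not measurements.

```python
# Rough per-rack comparison: classic FC + Ethernet design vs. IP storage only.
# Port and switch counts are assumptions matching a typical redundant design.

servers_per_rack = 24

designs = {
    "FC + Ethernet":   {"eth_ports": 2, "fc_ports": 2, "access_switches": 4},
    "IP storage only": {"eth_ports": 2, "fc_ports": 0, "access_switches": 2},
}

for name, d in designs.items():
    uplinks = servers_per_rack * (d["eth_ports"] + d["fc_ports"])
    print(f"{name}: {uplinks} server uplinks and "
          f"{d['access_switches']} access switches per rack")
```
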
Traditional storage engineers don’t trust IP-based storage because they’ve had little exposure to IP-based storage technologies. On the other hand, numerous large enterprises have run successful large-scale iSCSI or NFS deployments for years. To explore how you could configure QoS on an ESXi host to make iSCSI work well on vSphere 6, watch the vSphere 6 Networking Deep Dive webinar.

If you still need Fibre Channel support in your data center (for example, to connect tape backup units), try to consolidate the server uplinks. Replace the separate LAN and SAN interfaces in each server (a minimum of four ports) with two 10GE ports running FCoE.

Consolidating Fibre Channel and Ethernet between servers and access switches reduces the number of management points by a factor of two, as you no longer need separate access switches for Fibre Channel and Ethernet.
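
A quick sketch of that math, assuming a redundant pair of access switches per technology per rack (the rack and server counts are arbitrary):

```python
# Management-point and port-count comparison: separate LAN + SAN access layer
# vs. a converged FCoE access layer. All counts are illustrative assumptions.

racks = 10
servers_per_rack = 24

# Separate design: 2 Ethernet + 2 FC ports per server, 2 + 2 switches per rack
separate_ports_per_rack = servers_per_rack * 4
separate_switches = racks * 4

# Converged design: 2 x 10GE FCoE ports per server, 2 switches per rack
converged_ports_per_rack = servers_per_rack * 2
converged_switches = racks * 2

print(f"Separate LAN+SAN: {separate_ports_per_rack} server ports/rack, "
      f"{separate_switches} access switches to manage")
print(f"Converged FCoE:   {converged_ports_per_rack} server ports/rack, "
      f"{converged_switches} access switches to manage")
```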

If you decide to run FCoE between servers and access switches, separate the Fibre Channel and Ethernet traffic on the access switches and run separate Fibre Channel and Ethernet cores; multihop FCoE introduces unnecessary complexity. You’ll find more details in the Data Center Infrastructure for Networking Engineers webinar.

Some storage architects prefer to keep iSCSI networks separate from the data networks, or at least use different access-to-core links and dedicated core switches for the iSCSI part of the network.

However, if you’ve already performed the other consolidation steps, it’s easy to set up the servers to forward VM data primarily through one access switch and use the other access switch primarily for iSCSI or NFS, with each switch serving as a fully functional backup for the other.
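
Here's a minimal sketch of that idea, assuming vSphere-style active/standby uplink teaming per port group; the port-group and uplink names are made up for illustration.

```python
# Sketch of active/standby teaming: VM traffic prefers uplink A, IP storage
# prefers uplink B, and either uplink carries everything when the other one
# (or its access switch) fails. Names below are made up for illustration.

teaming = {
    "VM-portgroup":     {"active": "uplink-A", "standby": "uplink-B"},
    "Storage-vmkernel": {"active": "uplink-B", "standby": "uplink-A"},
}

def uplink_in_use(portgroup, failed_uplinks):
    """Return the uplink a port group would use, given a set of failed uplinks."""
    policy = teaming[portgroup]
    for uplink in (policy["active"], policy["standby"]):
        if uplink not in failed_uplinks:
            return uplink
    return None  # both uplinks (or both access switches) are down

# Normal operation: VM and storage traffic stay on separate switches
print(uplink_in_use("VM-portgroup", set()))          # uplink-A
print(uplink_in_use("Storage-vmkernel", set()))      # uplink-B

# Uplink A (or its access switch) fails: everything moves to uplink B
print(uplink_in_use("VM-portgroup", {"uplink-A"}))   # uplink-B
```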

End result: As long as all ports are operational, your VM data and your storage data never mix; they’re in two different VLANs and use separate access switches. But if one of the access switches crashes or a server-to-switch link fails, the traffic fails over to the other link/switch automatically.