What is NIC Teaming, and How Does It Increase Uptime?
Network Interface Card (NIC) teaming is a common technique for grouping physical network adapters to improve performance and redundancy. Its major benefits are load balancing (spreading traffic across the grouped adapters) and failover (keeping the network available when a hardware component fails), all presented to the operating system as a single logical connection. In short, NIC teaming is a simple, effective way to increase uptime.
What is NIC Teaming?
In a traditional networking setup, fault tolerance for a physical server is achieved by plugging multiple network cables from the server into multiple physical switches. However, this approach provides no load balancing, even when the server keeps multiple Internet Protocol (IP) addresses active at all times.
NIC teaming, on the other hand, is a feature of Windows Server that allows network adapters to be grouped into teams. The team members are the physical network adapters used to communicate with the switch, while the team interfaces are the virtual network adapters created when the team is made. A NIC team therefore maintains connections to multiple physical switches while using a single IP address, which provides readily available load balancing and near-instant fault tolerance (instead of waiting for DNS records to time out or update).
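As a minimal sketch of what this looks like in practice, the following PowerShell commands create a team on Windows Server with the built-in LBFO cmdlets; the adapter names and the team name are placeholders for whatever your hardware actually reports.

```powershell
# List the physical adapters that could join a team (names vary per system).
Get-NetAdapter -Physical

# Create a team; a virtual team interface with the same name is created,
# and that single interface is the one that receives the IP address.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1", "Ethernet 2"

# Inspect the team and its member adapters.
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```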
What Are the Benefits of NIC Teaming?
The major benefits that NIC teaming offers are better load balancing and increased fault tolerance.
Load balancing
With NIC teaming, network traffic is balanced across all active NICs. Outgoing traffic is load balanced automatically between the available physical NICs, typically based on a hash of address information in each packet, while incoming traffic is distributed by the switch that routes traffic to the server; the server does not control which physical NIC receives it.
Fault tolerance
Another benefit offered by NIC teaming is higher fault tolerance. If one of the underlying physical NICs fails or its cable is unplugged, the host detects the fault condition and automatically moves traffic to another NIC. This reduces the chance of a complete loss of network connectivity and improves the fault tolerance of the system.
What Are the NIC Teaming Modes?
The two NIC teaming modes are Switch Independent and Switch Dependent. They are explained below.
Switch Independent
As the name suggests, in the Switch Independent mode the switches to which the NIC team members are connected are unaware that the NIC team exists. Consequently, those switches do not determine how to distribute network traffic to the team members; instead, the NIC team itself distributes inbound network traffic across the team members.
Using the Switch Independent mode with Dynamic distribution spreads the network traffic load based on a hash of the Transmission Control Protocol (TCP) ports, as modified by the dynamic load balancing algorithm. The algorithm redistributes flows to optimize team member bandwidth utilization, so individual flow transmissions can move from one active team member to another. Because redistributing traffic can cause out-of-order packet delivery, the algorithm also takes steps to minimize that possibility.
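For example, this combination can be requested explicitly when creating the team; the team and adapter names below are placeholders, and on recent Windows Server versions these settings are typically the defaults anyway.

```powershell
# Create a Switch Independent team that uses Dynamic distribution.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1", "Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```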
Switch Dependent
In the Switch Dependent mode, the switch to which the NIC team members are connected determines how inbound network traffic is distributed among the team members. All team members must be connected to the same physical switch, or to a multi-chassis switch that presents a single switch ID. Switch Dependent mode offers the following two options:
- Static Teaming: Requires manual configuration on both the switch and the host to identify which links form the team. Because this configuration is static, no additional protocol helps the switch and the host detect errors such as incorrectly plugged cables, which can cause the team to fail. This mode is typically supported by server-class switches.
- Link Aggregation Control Protocol (LACP): LACP dynamically identifies the links connected between the switch and the host, which enables the team to be created automatically. This mode is supported by virtually all server-class switches, but the network operator must enable LACP on the switch ports. NIC teaming operates in LACP's active mode with a short timer, and there is currently no mechanism for changing the timer or the LACP mode.
Using Switch Dependent mode with Dynamic distribution spreads the network traffic load based on the TCP ports address hash, as modified by the dynamic load balancing algorithm. The algorithm redistributes flows to optimize team member bandwidth utilization, allowing individual flow transmissions to move from one active team member to another, while taking into account, and minimizing, the possibility that redistribution causes out-of-order packet delivery.
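As a sketch, both Switch Dependent options map to values of the -TeamingMode parameter; the team and adapter names are placeholders, and the matching static LAG or LACP configuration must also exist on the switch ports.

```powershell
# Static Teaming: the switch ports must be configured manually as a static team.
New-NetLbfoTeam -Name "TeamStatic" -TeamMembers "Ethernet 1", "Ethernet 2" `
    -TeamingMode Static

# LACP: the switch ports must have LACP enabled by the network operator.
New-NetLbfoTeam -Name "TeamLacp" -TeamMembers "Ethernet 3", "Ethernet 4" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```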
Load Balancing Modes
The load balancing distribution modes of NIC teaming are:
Address Hash
In this mode, a hash is created from the address components of each packet, and packets with a given hash value are assigned to one of the available adapters, producing a reasonable balance across the available adapters.
Windows PowerShell can be used to specify which address components are hashed (see the example after this list):
- Source and destination TCP ports and source and destination IP addresses.
- Source and destination IP addresses only.
- Source and destination Media Access Control (MAC) addresses only.
The TCP ports hash creates the most granular distribution of traffic streams, resulting in smaller streams that can be moved between team members independently. However, it cannot be used for traffic that is not based on TCP or User Datagram Protocol (UDP); in those cases, the team falls back to the IP address hash or, for non-IP traffic, the MAC address hash.
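As a hedged illustration, these hashing choices correspond to values of the -LoadBalancingAlgorithm parameter of the built-in teaming cmdlets; the team name is a placeholder for an existing team.

```powershell
# Hash on source/destination TCP or UDP ports plus IP addresses (most granular).
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts

# Hash on source/destination IP addresses only (non-TCP/UDP traffic).
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses

# Hash on source/destination MAC addresses only (non-IP traffic).
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm MacAddresses
```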
Hyper-V Port
In this mode, NIC teams configured on Hyper-V hosts give virtual machines (VMs) independent MAC addresses. The MAC address of each VM, or of the VM port connected to the Hyper-V switch, is used to divide network traffic between NIC team members. NIC teams created inside VMs cannot use the Hyper-V Port load balancing mode; they must use the Address Hash mode instead.
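On a Hyper-V host, this mode is selected by setting the load balancing algorithm to HyperVPort on an existing team; the team name below is a placeholder.

```powershell
# Distribute traffic per Hyper-V switch port, so each VM's traffic
# is pinned to one team member at a time.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
```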
Dynamic
In this mode, outbound loads are distributed based on a hash of the TCP ports and IP addresses, and they are rebalanced in real time so that a given outbound flow may move back and forth between team members. Inbound loads are distributed as in the Hyper-V Port mode. Dynamic mode combines the strengths of Address Hash and Hyper-V Port and is generally the highest-performing load balancing mode.
Linux NIC Teaming
In Linux, NIC teaming is known as NIC bonding. The principle is the same: two or more network cards are ‘bonded’ together into a single logical NIC. Depending on the bonding mode, the network switch may need to support link aggregation (for example, EtherChannel or LACP), which most modern switches do.
Linux NIC bonding has the following modes:
| Mode | Policy | Fault Tolerance | Load Balancing | Features |
|------|--------|-----------------|----------------|----------|
| Mode=0 | Round-robin | Yes | Yes | Default mode; packets are transmitted across the bonded interfaces in sequential, round-robin order. |
| Mode=1 | Active-backup | Yes | No | Only one interface is active at a time; if the active NIC fails, a backup interface takes over as the active NIC. |
| Mode=2 | Exclusive OR (XOR) | Yes | Yes | Transmits based on an XOR hash, so traffic to a given destination MAC address always uses the same interface. |
| Mode=3 | Broadcast | Yes | No | All packets are sent on all interfaces for maximum redundancy. |
| Mode=4 | IEEE 802.3ad Dynamic Link Aggregation | Yes | Yes | Creates aggregation groups whose members share the same speed and duplex settings; requires LACP support on the switch. |
| Mode=5 | Adaptive Transmit Load Balancing (TLB) | Yes | Yes | Outgoing traffic is distributed according to the current load on each interface. |
| Mode=6 | Adaptive Load Balancing (ALB) | Yes | Yes | Includes TLB, plus receive load balancing achieved through Address Resolution Protocol (ARP) negotiation. |
Microsoft NIC Teaming
The Microsoft Network Adapter Multiplexor protocol is a kernel-mode device driver that links two or more NICs together; it is what makes NIC teaming in Windows possible.
Because the Multiplexor protocol is a kernel-level driver, it resides in the same address space as the Windows operating system and every other kernel-mode driver, which makes it a potential attack vector for malware. To limit this exposure, the driver's binding is left disabled by default; it is enabled automatically on the physical adapters when you set up NIC teaming on hardware with two or more network interfaces.
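You can observe this behavior with a quick check; the component ID used below (ms_implat) is assumed to be the binding name of the Multiplexor protocol on typical Windows Server builds.

```powershell
# Show whether the Microsoft Network Adapter Multiplexor Protocol is bound to
# each adapter; Enabled is True only for adapters that are part of a NIC team.
# (ms_implat is assumed to be the protocol's component ID.)
Get-NetAdapterBinding -ComponentID ms_implat |
    Select-Object Name, DisplayName, Enabled
```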
Increase Uptime in Parallels RAS with High Availability Load Balancing (HALB), Redundancy, and Performance
Parallels® Remote Application Server (RAS) is a remote work solution that provides 24/7 virtual access to applications and desktops from any device.
Parallels RAS High Availability Load Balancing (HALB) is software that sits between the Parallels Gateways and the user. Multiple HALB appliances can run simultaneously, one acting as the primary and the others as secondaries, to reduce downtime. The primary and secondary appliances share a virtual IP address, and if the primary fails, a secondary takes over seamlessly without affecting the end user's connection.
Parallels RAS removes restrictions on the traffic routed to multiple gateways, allowing any active gateway to handle incoming traffic. Because multiple HALB appliances can run simultaneously, this helps maximize throughput and reduce the potential for downtime.
When setting up HALB, install the HALB appliance first, then add it from the Parallels RAS console.
Parallels RAS supports resource-based and round-robin load balancing to handle incoming traffic. With resource-based load balancing, incoming requests are routed to the gateway handling the least traffic. Round-robin load balancing, on the other hand, routes requests to the gateways sequentially.