In this blog, we will focus on creating and configuring a vSphere Distributed Switch, with particular emphasis on using the Link Aggregation Control Protocol (LACP).

What will we discuss in this part of How to use LACP in vSphere Distributed Switch?

  • What is Link Aggregation Control Protocol (LACP)
  • Requirements and Limitations of LACP in vSphere
  • How to configure LACP in vSphere Distributed Switch
  • How to migrate the VMkernel vDS network to vDS LACP
  • How to move active vmnics from the vDS network to vDS LACP

In the LACP section, we will learn about different terms and ways to achieve link aggregation using LACP and other methods. We will also learn how to implement LACP in vDS and how to migrate your networks to the newly implemented vDS LACP.

We will learn how to configure LACP in vSphere Distributed Switch, and we will also discuss the essential options when configuring the Link Aggregation Group (LAG).

In this first part, we will discuss:

  • What is Link Aggregation Control Protocol (LACP)
  • Requirements and Limitations of LACP in vSphere

What is Link Aggregation Control Protocol (LACP)

Before diving into the usage of a feature, it’s essential to understand its purpose and functionality.

In this section, we explore LACP and link aggregation in general, which is also known by vendor-specific names such as EtherChannel, Ethernet trunk, port channel, vPC, and Multi-Link Trunking.

It’s important to note that the availability and configuration of LACP may vary depending on the specific switch vendor and model you are using.

Link aggregation, which enables multiple physical links between network devices to function as a single logical link, can be achieved through both trunking and LACP. However, they differ in their approach:

  • Trunking: In trunk mode, individual ports within a link aggregation group are statically configured to form a trunk. The switch treats these ports as a single logical interface, facilitating traffic exchange between switches or network devices. Trunking does not require negotiation between devices and operates independently of any specific protocol
  • LACP: LACP is a standard protocol (IEEE 802.3ad, later 802.1AX) that allows two devices to dynamically negotiate and form a link aggregation group. LACP utilizes control frames (LACPDUs) for negotiation and management, providing advanced load-balancing techniques beyond static trunking. With LACP, switches can automatically detect and configure link aggregation with other LACP-enabled switches

In summary, while both trunking and LACP achieve link aggregation, trunking relies on static configuration without protocol negotiation. On the other hand, LACP is a dynamic, protocol-based approach that enables the automatic formation and management of link aggregation groups between devices.
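To make that contrast concrete, here is a minimal Python sketch. It is not a real LACP implementation — the `Lacpdu` class, the system IDs, and the port names are invented for illustration — but it captures the key behavioral difference: a static trunk bundles every configured port unconditionally, while LACP bundles a port only after a consistent partner LACPDU has been received on it.

```python
# Illustrative sketch only: static trunking vs. LACP-style negotiation.
# Lacpdu, system IDs, and port names are hypothetical, not real vSphere objects.

from dataclasses import dataclass

@dataclass(frozen=True)
class Lacpdu:
    """A drastically simplified LACPDU: just the sender's system ID and key."""
    system_id: str
    key: int

def static_trunk_bundled(ports):
    # Static (mode ON) trunking: every configured port joins the bundle,
    # whether or not the far end agrees -- no negotiation takes place.
    return {p: True for p in ports}

def lacp_bundled(ports, received):
    # LACP-style behavior: a port joins the bundle only if a partner LACPDU
    # arrived on it and all partners advertise the same system ID and key.
    partners = {(pdu.system_id, pdu.key) for pdu in received.values()}
    consistent = len(partners) == 1
    return {p: consistent and p in received for p in ports}

ports = ["vmnic0", "vmnic1"]
# Only vmnic0 receives a partner LACPDU (e.g. vmnic1's switch port is misconfigured).
received = {"vmnic0": Lacpdu(system_id="switch-A", key=10)}

print(static_trunk_bundled(ports))  # static: both ports "bundled" regardless of the peer
print(lacp_bundled(ports, received))  # LACP: only the negotiated port bundles
```

The point of the sketch is the failure mode: with static trunking, a miscabled or misconfigured port silently stays in the bundle and can blackhole traffic, whereas LACP simply leaves that port out of the LAG.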

Understanding these concepts will help us proceed with configuring and using LACP in the vSphere Distributed Switch.

The following example illustrates the setup of a vSphere Distributed Switch (vDS) with LACP using Link Aggregation Group (LAG) configuration and the connection to physical network interfaces and switch ports.

In this scenario, we have three ESXi hosts connected to the vDS. Each vmnic (a physical network adapter as seen by the ESXi host) is connected to a corresponding physical switch port and configured with the Link Aggregation Control Protocol (LACP) at the switch level.

This configuration exemplifies implementing LACP in your environment, showcasing how the vDS, ESXi hosts, and physical switches are interconnected and configured to utilize LACP for improved network performance and redundancy.

[Diagram: vSphere Distributed Switch with LAG uplinks connected to physical switch ports]

A Link Aggregation Group (LAG) refers to a logical grouping of multiple physical network links or ports combined to form a single high-bandwidth connection. LAGs are commonly used to increase network capacity, enhance redundancy, and improve overall network performance.

The diagram above illustrates the concept of a LAG.

You have a vSphere Distributed Switch (vDS) with three ESXi hosts. Each ESXi host has multiple physical network interface cards (NICs) connected to the physical switch. Instead of treating these individual NICs as separate connections, you can create a LAG by combining them into a single logical link.

For instance, you can configure LACP (Link Aggregation Control Protocol) on the physical switch and the vDS to enable dynamic negotiation and management of the LAG. LACP will establish a LAG between the physical switch and the vDS by bundling the multiple NICs together.

Once the LAG is formed, virtual machines and other network resources in your vSphere environment can utilize the combined bandwidth and redundancy provided by the LAG. This means that traffic can be distributed across the individual NICs within the LAG, allowing for increased throughput and improved resilience.

By using LAGs, you can effectively utilize multiple physical links as a single logical link, providing higher bandwidth and fault tolerance for VM networks.
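As a simplified illustration of how traffic is spread across LAG members, the sketch below hashes each flow's source and destination IPv4 addresses. This is a stand-in, not VMware's exact algorithm: we XOR the two addresses and take the result modulo the number of LAG members, so a given flow always lands on the same uplink while different flows can use different links. The vmnic names and IP addresses are invented for the example.

```python
# Illustrative IP-hash-style uplink selection across a LAG (not VMware's
# exact hashing algorithm): XOR the IPv4 addresses, modulo the LAG size.

import ipaddress

def select_uplink(src_ip: str, dst_ip: str, uplinks: list) -> str:
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    # Deterministic per flow: the same address pair always picks the same link.
    return uplinks[(src ^ dst) % len(uplinks)]

lag = ["vmnic0", "vmnic1", "vmnic2"]  # three physical NICs bundled into one LAG
flows = [("10.0.0.10", "10.0.0.50"),
         ("10.0.0.11", "10.0.0.50"),
         ("10.0.0.12", "10.0.0.50")]

for src, dst in flows:
    print(f"{src} -> {dst} via {select_uplink(src, dst, lag)}")
```

Note the consequence for sizing: a single flow never exceeds the bandwidth of one member link; the aggregate bandwidth of the LAG is only realized across many concurrent flows.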

Requirements and limitations to use LACP in vSphere Distributed Switches

  • LACP requires a vSphere Enterprise Plus license for the vDS feature
  • An ESXi host supports NIC teaming only on a single physical switch or stacked switches
  • Link aggregation is not supported across different trunked switches. To enable link aggregation, the switch must be configured to perform 802.3ad link aggregation in static mode ON, while the virtual switch should have its load-balancing method set to Route based on IP hash
  • Enabling Route based on IP hash without 802.3ad aggregation (or vice versa) disrupts networking. It is therefore recommended to change the virtual switch first. This makes the service console temporarily unavailable, but the physical switch management interface remains accessible, so you can enable aggregation on the involved switch ports and restore networking
  • Do not use link aggregation for iSCSI software multipathing. iSCSI software multipathing requires exactly one uplink per VMkernel port, and link aggregation provides more than one
  • Do not use beacon probing with IP HASH load balancing
  • Do not configure standby or unused uplinks with IP HASH load balancing
  • VMware supports only one EtherChannel bond per Virtual Standard Switch (vSS)
  • ESXi supports LACP on vDS only
  • For more information, see Enabling or disabling LACP on an Uplink Port Group using the vSphere Web Client (2034277)
  • In vSphere Distributed Switch 5.5 and later, all load balancing algorithms of LACP are supported

You need to ensure that the load-balancing algorithm used in ESXi matches the load-balancing algorithm implemented on the physical switch. For questions about the specific load-balancing algorithm employed by the physical switch, consult the switch vendor.
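To see why the two sides should agree, consider the simplified sketch below (the hash functions, MAC address, and IP addresses are invented for illustration). Because XOR is symmetric, an IP-hash policy on both ends places both directions of a flow on the same link; a switch hashing on something else — here, the last byte of the destination MAC — may return the reply traffic on a different link than the one the host transmits on.

```python
# Illustrative sketch: matched vs. mismatched load-balancing hashes.
# Both hash functions are simplified stand-ins, not real switch or ESXi policies.

import ipaddress

def ip_hash(src_ip: str, dst_ip: str, n_links: int) -> int:
    # XOR of the two IPv4 addresses, modulo the number of links.
    a = int(ipaddress.IPv4Address(src_ip))
    b = int(ipaddress.IPv4Address(dst_ip))
    return (a ^ b) % n_links

def mac_hash(dst_mac: str, n_links: int) -> int:
    # Hypothetical switch policy: last byte of the destination MAC, modulo links.
    return int(dst_mac.split(":")[-1], 16) % n_links

n = 2  # two links in the bundle
host_ip, peer_ip = "10.0.0.10", "10.0.0.50"

outbound = ip_hash(host_ip, peer_ip, n)   # link the ESXi host transmits on
reply_ip = ip_hash(peer_ip, host_ip, n)   # IP-hash switch: same link (XOR is symmetric)
reply_mac = mac_hash("00:50:56:aa:bb:07", n)  # MAC-hash switch: can pick a different link

print(f"outbound link: {outbound}, IP-hash reply: {reply_ip}, MAC-hash reply: {reply_mac}")
```

With matched IP-hash policies, `outbound` and `reply_ip` are always equal; the mismatched `reply_mac` shows how return traffic can arrive on a link the host's teaming policy does not expect.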

Important Note: Due to network disruption, changes to link aggregation should be done during a maintenance window.

As with any networking change, there is a chance for network disruption, so a maintenance period is recommended for changes. This is especially true on a vSphere Distributed Switch (vDS) because vCenter owns the Distributed Switch, and the hosts alone cannot change the vDS if the connection to vCenter is lost.

Enabling LACP can complicate vCenter or host management recovery in production outage scenarios, because the LACP connection may need to be broken to move back to a Standard Switch if necessary (since LACP is not supported on a Standard Switch).

Limitations:

  • vSphere Distributed Switches are the only supported switch type for LACP configuration
  • LACP cannot be used for software iSCSI multipathing
  • Host Profiles do not include LACP configuration settings
  • LACP is not supported within guest operating systems, including nested ESXi hosts
  • LACP cannot be used together with the ESXi Dump Collector

Note: The management port must be connected to a vSphere Standard Switch to use this feature.

  • Port Mirroring cannot be used with LACP to mirror LACPDU packets used for negotiation and control
  • The teaming health check does not function for LAG ports, as the LACP protocol ensures the health of individual LAG ports. However, VLAN and MTU health checks can still be performed on LAG ports
  • Enhanced LACP support is limited to a single LAG per distributed port group (dvPortGroup) to handle the traffic
  • Up to 64 LAGs can be created on a distributed switch, with each host supporting up to 64 LAGs
    • The actual number of usable LAGs depends on the capabilities of the physical environment and the virtual network topology
    • For example, if the physical switch allows a maximum of four ports in an LACP port channel, you can connect up to four physical NICs per host to a LAG
  • LACP is currently not supported with SR-IOV

Compatibility of Link Aggregation Control Protocol (LACP) with vSphere Distributed Switch (vDS):

[Table: LACP feature support by vSphere Distributed Switch version]

Now that you know what LACP is, how it works, and its requirements and limitations in vSphere, we will finish part one about LACP in vDS. We are now ready to create a vDS with a Link Aggregation Group (LAG).

Part two will discuss configuring LACP in vSphere Distributed Switch and migrating your existing port groups (Standard or Distributed Switch) to a vDS with a LAG.

Read More:
VMware for Beginners – What is vSphere Trust Authority: Part 17(b)

