Understanding Cisco UCS VIF Paths
In the Cisco UCS world where a virtual NIC on a virtual server is connected to a virtual port on a virtual switch by a virtual cable, it is not surprising that there can be confusion about what path packets are actually taking through the UCS infrastructure.
Similarly, knowing the full data path through the UCS infrastructure is essential to understanding, troubleshooting, and testing failover.
I’m sure you have all seen the table below in UCS Manager, which details the path that each virtual NIC or HBA takes through the infrastructure. But what do all these values mean? And where are they in the infrastructure? That is the objective of this post.
I will also detail the relevant CLI commands to confirm and troubleshoot the complete VIF (Virtual Interface) path.
If you have ever seen and understood the film “Inception”, you should have no problem understanding Cisco UCS, where virtual machines run on virtual hosts, which in turn run on virtual infrastructure and abstracted hardware. But in all seriousness, it’s really not that complicated.
The diagram below shows a half-width blade with a vNIC called eth0 created on a Cisco VIC (M81KR), with its primary path mapped to Fabric A. For simplicity, only one IO Module to Fabric Interconnect link is shown in the diagram, as well as only one of the Host Interfaces (HIFs / server-facing ports) on the IO Module. In this post I will focus on eth0, which is assigned virtual circuit 749.
Virtual Circuit
The first column in figure 1 is the Virtual Circuit number, a unique value assigned to the virtual circuit, which comprises the virtual NIC, the virtual cable (red dotted line in figure 2), and the virtual switch port. The virtual switch port and the virtual circuit share the same identifier, in this case 749.
If you do not know which virtual circuit will be used for the particular MAC address you are interested in, or which chassis and server that virtual circuit resides on, you can use the commands below to find out.
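From the UCSM CLI you can connect to the NX-OS element of the Fabric Interconnect and look the MAC address up in the L2 forwarding table; something along these lines (the MAC address shown is just a placeholder, and exact syntax can vary slightly between UCSM versions):

```
! Connect to the NX-OS element of Fabric Interconnect A
connect nxos a
! Look up the MAC address of interest in the L2 forwarding table;
! the "Port" column will show the Veth it is learned behind
show mac address-table address 0025.b500.0a01
```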
The above output shows that the MAC address is behind Veth749. Now, in order to find out which chassis and server are using Veth749, issue the command below.
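A hedged sketch of the command (run from the NX-OS element of the FI; the interface details of the virtual Ethernet port include the bound server-facing interface and a description):

```
! Show details of the virtual Ethernet interface, including the
! bound (server-facing) interface and its port description
show interface vethernet 749
```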
The interface to which Veth749 is bound is Ethernet 1/1/2, which equates to Chassis 1, Server 2 (you can ignore the middle value). The description field also confirms the location and the virtual interface name on the server (eth0).
As you know (having read my blog post on “Adapter FEX” :-) ), the M81KR “Palo” adapter is actually a mezzanine fabric extender, just like the IO Module (FEX) in the chassis. What this means is that when I create a virtual interface on the adapter, that interface is actually created and appears as a local interface on the Fabric Interconnect (FI): a vNIC appears as a Veth port on the FI, and a vHBA appears as a Vfc interface.
This means we will have many virtual circuits, or “virtual cables”, going down the same physical cable. Cisco UCS obviously needs to be able to differentiate between all these “virtual cables”, and it does so by attaching a Virtual Network Tag (VN-Tag) to each frame, identifying its virtual circuit. This way Cisco UCS can track and switch packets between virtual circuits, even if both of those virtual circuits are using the same physical cable, which the normal rules of Ethernet would not allow.
Adapter Port
The Cisco VIC (M81KR) has two physical 10Gb traces (paths / ports): one trace to Fabric A and one trace to Fabric B. This is how the VIC can provide hardware fabric failover and fabric load balancing to its virtual interfaces. These adapter ports are listed as 1/1 to Fabric A and 2/2 to Fabric B.
In the case of a full-width blade, which can take two mezzanine adapters, this obviously doubles the number of paths to four.
In the case of the VIC 1240 and VIC 1280, which have 20Gb and 40Gb to each fabric respectively, there is still only a single logical path to each fabric, as the links are hardware port channels: 2x10Gb per fabric for the VIC 1240 and 4x10Gb per fabric in the case of the VIC 1280.
The new M3 servers, which have a modular LAN on Motherboard (mLOM), provide additional onboard paths.
Fex Host port
In the lab setup I am using, the FEX modules are 2104XPs, which have 8 internal server-facing ports (sometimes referred to as Host Interfaces (HIFs)), which connect to the blade slots: port 1 to blade slot 1, port 2 to blade slot 2, and so on.
Fex Network Port
The 2104XP IO Modules also have 4 Network Interfaces (NIFs / Fex Uplinks) which connect to its upstream Fabric Interconnect.
Fabric Interconnect
The two FI interfaces listed in figure 1 are the FI Server Port and the FI Uplink.
The server-facing ports on the Fabric Interconnect are called FI Server Ports and can be confirmed in the output of the “show interface fex-fabric” command: the FI server ports are listed in the second column, “Fabric Port”, in figure 5.
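A minimal sketch of checking this from the NX-OS element of the FI:

```
! Connect to the NX-OS element of Fabric Interconnect A
connect nxos a
! List each FEX (IOM) uplink and the FI server port it connects to;
! the FI server port appears in the "Fabric Port" column
show interface fex-fabric
```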
The FI Uplink interface can be found by checking the pinning of the Veth interface.
As can be seen from the above figure, Veth749 is pinned to FI Uplink (Border Interface) Port-Channel 1.
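The pinning can be checked from the NX-OS element of the FI with something like the following (the exact output layout varies by version, but each server-facing Veth is listed against the border interface it is pinned to):

```
! Show which border (uplink) interface each server-facing
! virtual interface is currently pinned to
show pinning server-interfaces
```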
If using VIC 1240s or VIC 1280s, there are mandatory hardware port channels for all the ports of those cards that are in the same port group (fabric); that’s why you see a port-channel ID as the adapter port in the VIF Paths tab.
You can see this by connecting to the NX-OS element of the FI and issuing “show port-channel summary”: you will see your port-channels, e.g. Po1292, and the member ports will be the host interfaces (HIFs) on the IOM that map to that blade slot.
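For completeness, the commands look like this (Po1292 in the text above is just an example ID; yours will differ):

```
! Connect to the NX-OS element of Fabric Interconnect A
connect nxos a
! List all port-channels and their member ports; the hardware
! port-channels for the VIC 1240/1280 appear here alongside
! any uplink port-channels
show port-channel summary
```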
Armed with all of the above, you should now have the information necessary to understand packet flow within UCS, and be able to troubleshoot as well as monitor and understand failover.
Hope this helps.