August 18, 2016

Configuring the HPE 6125XLG Ethernet Blade Switch for use in a VMware environment - part 2

Background
Many people use Virtual Connect FlexFabric for Ethernet (+FC) connectivity in their HP blade environments. For better functionality and control we've chosen to use HPE 6125XLG blade switches instead, and this series documents how we achieved that. It's interesting to note that the 6125XLG uses the exact same hardware as the FlexFabric-20/40 F8.

Problem
I've found that the documentation for the H3C line of switches is a bit confusing and sometimes wrong. Our switches use a command set known as Comware7, while many of the examples out there are written for Comware5.

Solution
We have configured our system with the following features:

  1. The switches are stacked and work as one large switch. See part 1 for details.
  2. There are two 10GbE uplinks from each of the switches to two Cisco 6500 series switches.
  3. The trunk between the 6125XLGs and the Cisco 6500s is set up with LACP.
  4. Spanning tree between the switches is configured to use RSTP.
  5. CDP has been set up between switches and servers.
  6. VMware ESXi is set up with a distributed switch using Load-Based Teaming (LBT) and Network I/O Control (NetIOC).
  7. Logs are forwarded to Logstash.
  8. SNMP has been configured (for future use).
  9. NTP has been configured.

There are two 6125XLG switches in the C7000 and each blade has one NIC connected to each of them. The two switches have four 10GbE ports connected to each other; these are normally used for stacking (IRF) and FCoE (you dedicate a pair to each). Each switch also has 8x 10GbE SFP+ ports and 4x 40GbE QSFP+ ports. It's recommended to use original HPE transceivers, but third-party ones have also proven to work nicely.
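A quick way to get an overview of these ports and their link state from the CLI is Comware's interface summary (once the IRF fabric described below is up, it lists the ports of both members):

 display interface brief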
Logical view


1. Stacking

When you configure IRF you have four ports to choose from. You can use either two or all four of them (you can dedicate two to FCoE if you need to). In this example we're using all four ports to aggregate the switches into one large one. In H3C terminology this is called Intelligent Resilient Framework (IRF).
 irf mac-address persistent timer
 irf auto-update enable
 undo irf link-delay
 irf member 1 priority 10
 irf member 2 priority 1

irf-port 1/1
 port group interface Ten-GigabitEthernet1/0/17
 port group interface Ten-GigabitEthernet1/0/18
 port group interface Ten-GigabitEthernet1/0/19
 port group interface Ten-GigabitEthernet1/0/20
#
irf-port 2/2
 port group interface Ten-GigabitEthernet2/0/17
 port group interface Ten-GigabitEthernet2/0/18
 port group interface Ten-GigabitEthernet2/0/19
 port group interface Ten-GigabitEthernet2/0/20
#
interface Ten-GigabitEthernet1/0/17
 description IRF
#
interface Ten-GigabitEthernet1/0/18
 description IRF
#
interface Ten-GigabitEthernet1/0/19
 description IRF
#
interface Ten-GigabitEthernet1/0/20
 description IRF
#
interface Ten-GigabitEthernet2/0/17
 description IRF
#
interface Ten-GigabitEthernet2/0/18
 description IRF
#
interface Ten-GigabitEthernet2/0/19
 description IRF
#
interface Ten-GigabitEthernet2/0/20
 description IRF
#
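Once both members have been renumbered and rebooted, verify that the fabric actually formed and that member 1, which has the higher priority, won the master election. These are standard Comware7 display commands; the exact output varies a bit between releases:

 display irf
 display irf configuration
 display irf topology

The first shows member IDs, roles and priorities, the second the configured member IDs and IRF port bindings, and the third the fabric topology and link state.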

2. Trunk (STP, LACP, 4x 10GbE, CDP)

Facing the core Cisco switches we establish a single LACP trunk, with two member ports on each of the two 6125s. In our example we decided to use RSTP for spanning tree. We use CDP instead of LLDP on our external-facing interfaces.
 stp mode rstp
 stp global enable
#
interface Bridge-Aggregation1
 port link-type trunk
 port trunk permit vlan all
 link-aggregation mode dynamic


Interfaces on switch 1:
interface Ten-GigabitEthernet1/1/5
 port link-mode bridge
 description Trunk 6500
 port link-type trunk
 port trunk permit vlan all
 lldp compliance admin-status cdp txrx
 port link-aggregation group 1
#
interface Ten-GigabitEthernet1/1/6
 port link-mode bridge
 description Trunk 6500
 port link-type trunk
 port trunk permit vlan all
 lldp compliance admin-status cdp txrx
 port link-aggregation group 1
Interfaces on switch 2:
interface Ten-GigabitEthernet2/1/5
 port link-mode bridge
 description Trunk 6500
 port link-type trunk
 port trunk permit vlan all
 lldp compliance admin-status cdp txrx
 port link-aggregation group 1
#
interface Ten-GigabitEthernet2/1/6
 port link-mode bridge
 description Trunk 6500
 port link-type trunk
 port trunk permit vlan all
 lldp compliance admin-status cdp txrx
 port link-aggregation group 1
#
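One thing to be aware of: the per-interface cdp txrx setting only takes effect when LLDP is running globally and CDP compatibility is enabled globally, so depending on your release you may also need these two commands in system view (check the configuration guide for your version):

 lldp global enable
 lldp compliance cdp

Once the links are up, the state of the LACP aggregation and the CDP/LLDP neighborships can be verified with:

 display link-aggregation verbose Bridge-Aggregation 1
 display lldp neighbor-information list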

3. Interfaces facing ESXi hosts

Each ESXi host has a config for each of its NICs, one on each switch. Flow control is enabled by default on all ESXi NICs, so we also enable it on the switch. Since we are using LBT+NetIOC we are not using EtherChannel/LACP on the ESXi-facing ports (unlike most of the examples provided by HPE).
interface Ten-GigabitEthernet1/0/1
 port link-mode bridge
 description xyz-esx-01
 port link-type trunk
 port trunk permit vlan all
 flow-control
 stp edged-port
 lldp compliance admin-status cdp txrx
#
interface Ten-GigabitEthernet2/0/1
 port link-mode bridge
 description xyz-esx-01
 port link-type trunk
 port trunk permit vlan all
 flow-control 
 stp edged-port
 lldp compliance admin-status cdp txrx
#
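
When the host is up it's easy to confirm that the ports actually became STP edge ports and that flow control was negotiated, for example (using the first host-facing port from the example above):

 display stp brief
 display interface Ten-GigabitEthernet1/0/1

The first should show the host-facing ports going straight into forwarding; the second includes the flow control state and pause frame counters for the port.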

4. Management (clock, syslog, SNMP, SSH, NTP)


#
 clock timezone CET add 01:00:00
 clock summer-time CETDT 02:00:00 March last Sunday 03:00:00 October last Sunday 03:00:00
#
 info-center synchronous
 info-center logbuffer size 1024
 info-center loghost 10.20.30.40 port 20514
#
 snmp-agent
 snmp-agent local-engineid 800063A280BCEAFA031F8600000001
 snmp-agent community write privatecleartextpassword
 snmp-agent community read publiccleartextpassword
 snmp-agent sys-info version all
#
 ssh server enable
#
 ntp-service enable
 ntp-service unicast-server 1.2.3.4
#
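A word of caution: ssh server enable by itself is not enough to actually log in over SSH; the switch also needs a host key and a local account that is allowed to use SSH. A minimal sketch of what that can look like (the user name and password here are placeholders, replace them with your own):

 public-key local create rsa
#
 user-interface vty 0 15
  authentication-mode scheme
#
 local-user admin class manage
  password simple ChangeMePlease123
  service-type ssh
  authorization-attribute user-role network-admin
#

Finally, a few display commands to verify that the clock, logging and NTP settings behave as expected:

 display clock
 display ntp-service status
 display info-center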

Conclusion

Finding the right syntax to configure this switch was a bit challenging, as many of the examples we found didn't work right out of the box; the command set differs slightly between Comware versions. After overcoming the initial obstacles we were able to configure the switch exactly as we needed.

