Many people use FlexFabric for Ethernet (+FC) connectivity in their HP blade environments. For better functionality and control we've chosen to use HPE 6125XLG blade switches instead, and this post documents how we achieved it. It's interesting to note that the 6125XLG uses the exact same hardware as the FlexFabric-20/40 F8.
Problem
I've found the documentation for the H3C line of switches to be a bit confusing and sometimes wrong. Our switches use a command set known as Comware 7, while many of the available examples are written for Comware 5.
Solution
We have configured our system with the following features:
- The switches are stacked and work as one big switch. See part 1 for a closer description.
- There are two 10GbE uplinks from each of these switches to two Cisco 6500 series switches.
- The trunk between the 6125XLGs and the Cisco 6500s is set up with LACP.
- Spanning tree between the switches is configured to use RSTP
- CDP has been set up between switches and servers
- VMware ESXi is set up with a distributed switch using LBT+NetIOC
- Logs are forwarded to logstash
- SNMP has been configured (for future use)
- NTP
There are two 6125XLG switches in the C7000, and each of the blades has one NIC connected to each of these switches. The two switches have four 10GbE ports connected to each other; these are normally used for stacking (IRF) and FCoE (you dedicate a pair to each). Each switch also has 8x 10GbE SFP+ ports and 4x 40GbE QSFP+ ports. It's recommended to use original HPE GBICs, but third-party GBICs have also proven to work nicely.
Logical view
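Before configuring anything, it can be handy to get an inventory of the stack members and ports as Comware 7 sees them. This is only a minimal sketch of the read-only commands we tend to use; the exact output varies between software releases.
display device
display interface brief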
1. Stacking
When you configure IRF you have 4 ports to choose from. You can either use two or four of these (you can dedicate two for FCoE if you need to). In this example we're using all four ports to aggregate the switches into one large one. In H3C language this is called Intelligent Resilient Framework.
irf mac-address persistent timer
irf auto-update enable
undo irf link-delay
irf member 1 priority 10
irf member 2 priority 1
irf-port 1/1
port group interface Ten-GigabitEthernet1/0/17
port group interface Ten-GigabitEthernet1/0/18
port group interface Ten-GigabitEthernet1/0/19
port group interface Ten-GigabitEthernet1/0/20
#
irf-port 2/2
port group interface Ten-GigabitEthernet2/0/17
port group interface Ten-GigabitEthernet2/0/18
port group interface Ten-GigabitEthernet2/0/19
port group interface Ten-GigabitEthernet2/0/20
#
interface Ten-GigabitEthernet1/0/17
description IRF
#
interface Ten-GigabitEthernet1/0/18
description IRF
#
interface Ten-GigabitEthernet1/0/19
description IRF
#
interface Ten-GigabitEthernet1/0/20
description IRF
#
interface Ten-GigabitEthernet2/0/17
description IRF
#
interface Ten-GigabitEthernet2/0/18
description IRF
#
interface Ten-GigabitEthernet2/0/19
description IRF
#
interface Ten-GigabitEthernet2/0/20
description IRF
#
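Once both members have rebooted and formed the stack, a few read-only Comware 7 commands confirm that IRF is up. This is a minimal sketch; output differs slightly between releases.
display irf
display irf configuration
display irf link
Member 1 should be elected master since it has the higher priority (10 versus 1).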
2. Trunk (STP, LACP, 4x 10GbE, CDP)
On each of the two 6125XLG switches we establish a trunk facing the core Cisco switches; all four uplinks are bundled into a single LACP aggregation (Bridge-Aggregation1) across the IRF stack. In our example we decided to use RSTP for spanning tree. We use CDP instead of LLDP on our external-facing interfaces.
Interfaces on switch 1:
stp mode rstp
stp global enable
#
interface Bridge-Aggregation1
port link-type trunk
port trunk permit vlan all
link-aggregation mode dynamic
#
interface Ten-GigabitEthernet1/1/5
port link-mode bridge
description Trunk 6500
port link-type trunk
port trunk permit vlan all
lldp compliance admin-status cdp txrx
port link-aggregation group 1
#
interface Ten-GigabitEthernet1/1/6
port link-mode bridge
description Trunk 6500
port link-type trunk
port trunk permit vlan all
lldp compliance admin-status cdp txrx
port link-aggregation group 1
#
Interfaces on switch 2:
interface Ten-GigabitEthernet2/1/5
port link-mode bridge
description Trunk 6500
port link-type trunk
port trunk permit vlan all
lldp compliance admin-status cdp txrx
port link-aggregation group 1
#
interface Ten-GigabitEthernet2/1/6
port link-mode bridge
description Trunk 6500
port link-type trunk
port trunk permit vlan all
lldp compliance admin-status cdp txrx
port link-aggregation group 1
#
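To check that the aggregation forms and that LACP is negotiated with the Cisco side, the following display commands can be used (a minimal sketch on Comware 7; whether CDP-compatible neighbors show up under the LLDP neighbor command depends on the release):
display link-aggregation verbose Bridge-Aggregation 1
display stp brief
display lldp neighbor-information
All four uplinks should show up as Selected members of Bridge-Aggregation1.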
3. Interfaces facing ESXi hosts
Each of the ESXi hosts has a configuration for each of its NICs, one on each switch. Flow control is enabled by default on all ESXi NICs, so we enable it on the switch ports as well. Since we are using LBT+NetIOC we do not use EtherChannel/LACP on the ESXi-facing ports (unlike most of the examples provided by HPE).
interface Ten-GigabitEthernet1/0/1
port link-mode bridge
description xyz-esx-01
port link-type trunk
port trunk permit vlan all
flow-control
stp edged-port
lldp compliance admin-status cdp txrx
#
interface Ten-GigabitEthernet2/0/1
port link-mode bridge
description xyz-esx-01
port link-type trunk
port trunk permit vlan all
flow-control
stp edged-port
lldp compliance admin-status cdp txrx
#
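For each server-facing port it is worth confirming that flow control was actually negotiated and that the port operates as an STP edge port. A minimal Comware 7 check, using the interface from the example above:
display interface Ten-GigabitEthernet1/0/1
display stp interface Ten-GigabitEthernet1/0/1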
4. Management (clock, syslog, snmp, ssh, ntp)
#
clock timezone CET add 01:00:00
clock summer-time CETDT 02:00:00 March last Sunday 03:00:00 October last Sunday 03:00:00
#
info-center synchronous
info-center logbuffer size 1024
info-center loghost 10.20.30.40 port 20514
#
snmp-agent
snmp-agent local-engineid 800063A280BCEAFA031F8600000001
snmp-agent community write privatecleartextpassword
snmp-agent community read publiccleartextpassword
snmp-agent sys-info version all
#
ssh server enable
#
ntp-service enable
ntp-service unicast-server 1.2.3.4
#
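Finally, a few read-only commands to verify that the clock, NTP, logging and SNMP settings took effect (again a minimal Comware 7 sketch):
display clock
display ntp-service status
display logbuffer
display snmp-agent community read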