After running a test comparing network performance between VMs over 10GbE and VMs on the same host, I was asked whether I had enabled jumbo frames. I had not, since I didn't expect it to affect the result much: I had already measured more than 9 Gbit/s over the Ethernet connection, and the physical limit was 10 Gbit/s anyway. The test between VMs on the same host showed a measured bandwidth of almost 25 Gbit/s, and the point I was trying to make was that VMs on the same host see a much higher measured bandwidth than VMs on different hosts.
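For reference, a test of this kind is typically run with iperf; a minimal invocation might look like the following (the hostname and the exact flags used in my original test are assumptions):

```
# On the receiving VM, start an iperf server:
iperf -s

# On the sending VM, run a 10-second TCP test and report in Gbit/s
# (vm2.example.local is a placeholder for the peer VM):
iperf -c vm2.example.local -t 10 -f g
```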
I still adjusted the MTU sizes of the infrastructure and reran the tests. The test over physical 10GbE was, as expected, slightly better than before, showing a bandwidth of 9.75 Gbit/s. The test between VMs on the same host, however, showed a lower measured bandwidth than before: 4.5 Gbit/s lower than with the default MTU of 1500 bytes.
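Enabling jumbo frames means raising the MTU at every hop along the path. On a VMware setup with Linux guests, that could look roughly like this (the vSwitch and interface names are placeholders, not taken from my actual configuration):

```
# On the ESXi host: raise the MTU on the standard vSwitch
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Inside each Linux guest: raise the MTU on the virtual NIC
ip link set dev eth0 mtu 9000
```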
I found this strange, but got suggestions on Twitter speculating that the VMware code may be better optimized for the default MTU size. I also received a note that IBM's LPAR virtualization platform recommends a specific MTU of roughly 64 KB for same-host VM communication (inter-LPAR communication).
I wrote a little script to automatically set different MTU sizes and run the iperf bandwidth test for each one:
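A minimal sketch of such a script, assuming a Linux guest with an eth0 interface and an iperf server (iperf -s) already running on the peer VM; the MTU list, interface name, and server address are placeholders:

```bash
#!/bin/bash
# Sketch: step through a set of MTU sizes, apply each one to the
# guest NIC, and run a 10-second iperf test against the peer VM.
# IFACE and SERVER are placeholders for the real values.
IFACE=eth0
SERVER=192.168.1.10

for MTU in 1500 3000 4500 6000 7500 9000; do
    ip link set dev "$IFACE" mtu "$MTU"
    sleep 2                        # let the link settle before measuring
    echo "=== MTU $MTU ==="
    iperf -c "$SERVER" -t 10 -f g  # 10-second TCP test, report in Gbit/s
done
```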
As we can see, the best performance for network communication between VMs residing on the same host was indeed achieved with the default MTU of 1500.