On my UCS blades I'm running ESXi 5.1 exclusively, with a Nexus 1000V set up for each cluster. Every blade is a B200 M3 with the VIC 1240 card. Currently every server has just two vNICs, one on fabric A and one on fabric B. After recently hitting some link saturation and VM performance issues, I decided to look into QoS. I've read mixed advice; some sites say that if you have the 1000V, you should do all your QoS there. What is best practice or recommended? Is it better to run two vNICs and tag everything on the 1000V? Or, since I have the VIC card, break it out into 2x management, 2x VM, and 2x vMotion, and then set up QoS on UCS? Thoughts? I read about a million articles last night and I still don't have a solid decision.
Are you a networking guy or a server guy? QoS concepts and Nexus gear can be harder to implement and wrap your head around if you come from the server side. That said, if you are already running the N1KV and have support for the product, I would set up QoS at that layer.
What happens if your design calls for VM traffic to be prioritised? Implementing that at the UCS layer would mean breaking out separate vNICs dedicated to each VM traffic class. Granted, that requirement isn't common, but virtualised UC is one example. Stick with your current config and implement QoS on the N1KV, and make sure you read the best-practices guide for the Nexus 1000V on Cisco UCS. You must create a UCS QoS policy and set Host Control to Full; otherwise any marking the N1KV performs will simply be stripped by UCS. Don't forget that the hungry beast to control here is vMotion traffic, although in general it shouldn't be occurring frequently unless your hosts are heavily utilised and DRS is moving VMs around a lot.
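As a rough illustration (class, policy, and port-profile names plus the CoS value are my own assumptions, not from your environment), marking vMotion on the N1KV might look something like this, paired on the UCS side with a QoS policy on the vNICs whose Host Control is set to Full so the CoS marking survives:

```
! Hedged sketch for the N1KV -- names and the CoS value are
! illustrative, adapt them to your own design.
class-map type qos match-any VMOTION-CLASS
  match protocol vmw_vmotion
policy-map type qos MARK-VMOTION
  class VMOTION-CLASS
    set cos 4
! Attach the marking policy inbound on the vMotion port-profile
port-profile type vethernet VMOTION-PROFILE
  service-policy type qos input MARK-VMOTION
```

From there UCS can queue on that CoS value; without Host Control set to Full in the vNICs' QoS policy, UCS overwrites whatever the N1KV marked.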
Most importantly, though, do some investigation into the performance issues on the VMs. Have you confirmed they are definitely network-related, i.e. caused by congestion?
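One quick way to check (a sketch, assuming shell access to the ESXi host) is to grab a couple of esxtop batch-mode samples and look at the network drop counters:

```
# Capture two esxtop samples in batch mode, ten seconds apart, then
# inspect the network columns -- sustained non-zero %DRPTX / %DRPRX
# on the vmnics or VM ports points at congestion rather than a
# problem inside the guest.
esxtop -b -n 2 -d 10 > /tmp/esxtop-sample.csv
```

If the drop counters stay at zero while the VMs are slow, the bottleneck is probably somewhere other than the network.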