1. Let’s log into the Nexus 1000v console and type show module. Here you can see that the Virtual Supervisor Module is active and that its IP address is 192.168.1.80.
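For reference, the output looks roughly like this on a freshly installed VSM. The hostname, software version columns and exact layout will differ in your environment; only the active VSM and the server IP from this lab are taken from the text above:

```
vsm1# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *

Mod  Server-IP        Server-UUID                          Server-Name
---  ---------------  -----------------------------------  -----------
1    192.168.1.80     NA                                   NA
```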
2. The first thing we need to do is configure our VLANs.
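A minimal sketch of the VLAN configuration. The VLAN IDs and names here are this lab’s choices (substitute your own); VLAN 20 is the vMotion VLAN referred to later in this demo:

```
vsm1# configure terminal
vsm1(config)# vlan 10
vsm1(config-vlan)# name Management
vsm1(config-vlan)# vlan 20
vsm1(config-vlan)# name vMotion
vsm1(config-vlan)# vlan 30
vsm1(config-vlan)# name Nexus-Control
vsm1(config-vlan)# vlan 40
vsm1(config-vlan)# name Nexus-Packet
```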
3. Now we need to create a port-profile. This port-profile is of type ethernet, which means it is assigned to the physical interfaces of your ESX host. If you are going to connect multiple ESX or ESXi hosts to the Nexus switch, you must create a separate ethernet port-profile for each, especially if you are using EtherChannel. The other type we can create is vethernet, which is for virtual machine networks. Type in the commands as you see them below, including any VLANs that you want included in the trunk. System VLANs are priority VLANs that always remain in a forwarding state; they should include management/service console, Nexus control, Nexus packet and iSCSI. Because system VLANs are always forwarding, they actually bypass any QoS or access-lists. As well as being defined here, system VLANs must also be defined on the vethernet port-profile, as in step 4. In the example below I’ve added vMotion as a system VLAN for demo purposes, but you would not normally make vMotion a system VLAN in a production network.
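An ethernet port-profile along these lines matches the description above. The profile name system-uplink and the VLAN IDs are this lab’s choices; note VLAN 20 (vMotion) appears in the system vlan list purely for demo purposes, as explained above:

```
vsm1(config)# port-profile type ethernet system-uplink
vsm1(config-port-prof)# vmware port-group
vsm1(config-port-prof)# switchport mode trunk
vsm1(config-port-prof)# switchport trunk allowed vlan 10,20,30,40
vsm1(config-port-prof)# system vlan 10,20,30,40
vsm1(config-port-prof)# no shutdown
vsm1(config-port-prof)# state enabled
```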
4. To create a virtual ethernet (vethernet) port-profile, enter the commands below. Because this demo treats my vMotion network as a system VLAN, I have to include system vlan 20 in the config; on a normal virtual machine network I would not include the system vlan command. As soon as you type state enabled and press Enter, you will see the tasks running in vCenter to create the network.
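A sketch of such a vethernet port-profile for the vMotion network. The profile name and access VLAN 20 are this lab’s assumptions; remember the system vlan line is only here because vMotion is a system VLAN in this demo:

```
vsm1(config)# port-profile type vethernet vMotion
vsm1(config-port-prof)# vmware port-group
vsm1(config-port-prof)# switchport mode access
vsm1(config-port-prof)# switchport access vlan 20
vsm1(config-port-prof)# system vlan 20
vsm1(config-port-prof)# no shutdown
vsm1(config-port-prof)# state enabled
```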
5. Now that we’ve completed the preliminary steps, we can move over to vCenter and start to add our ESX/ESXi host into the Nexus environment. Open vCenter, click Inventory in the navigation bar and select Networking. Expand your datacenter – vsm1 (the name of your Nexus switch) – vsm1. Right-click the vsm1 device and select Add Host.
6. In this demo I will select all network adapters to migrate over to the system-uplink port-profile on my Nexus switch. In a production network, however, you would always leave one or two network adapters on your previous settings so that you can take your time migrating between the old and the new; this also protects you if you have made any config errors on your Nexus that only surface after you migrate. In this demo I’ll select all my vmnics, and under the DVUplink port group select the ethernet port-profile that we created earlier – “system-uplink”. When you use the same port-profile on multiple vmnics on the same host, you should also create an EtherChannel on the Nexus switch and on your upstream switch. Click Next.
7. Configure the port-channel on your Nexus switch using the commands below. We’ll tie the system-uplink port-profile to this port-channel.
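One common way to do this on the 1000v is to let the switch build the port-channel automatically by adding channel-group auto to the ethernet port-profile, so every vmnic that inherits system-uplink on a given host is bundled into a port-channel. A sketch, assuming the system-uplink profile from step 3:

```
vsm1(config)# port-profile type ethernet system-uplink
vsm1(config-port-prof)# channel-group auto mode on
```

With mode on, the matching EtherChannel on the upstream physical switch must be configured as a static (mode on) channel as well.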
8. Migrate your vmkernel networks, such as the Service Console, vMotion and iSCSI, to the vethernet port-profiles you created earlier. Click Next.
9. If you have virtual machines configured on the ESX host, you will have the option to migrate their networks to one of the vethernet port-profiles you created earlier. If you have not migrated all your vmnics to the system-uplink port-profile, you can leave your virtual machines set to “Do not migrate” and they will still work. However, if you have migrated all your vmnics to the system-uplink port-profile, you will need to migrate your virtual machine port groups. Click Assign Port Group.
10. Select the vethernet port-profile for your virtual machine and click OK.
11. The test-network vethernet port-profile is now selected and the virtual machine network will be migrated to it. Click Next.
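For context, a virtual machine port-profile like test-network would have been created the same way as the vethernet profile in step 4, but without the system vlan line, since a normal VM network is not a system VLAN. The VLAN ID 50 below is a hypothetical value for illustration:

```
vsm1(config)# port-profile type vethernet test-network
vsm1(config-port-prof)# vmware port-group
vsm1(config-port-prof)# switchport mode access
vsm1(config-port-prof)# switchport access vlan 50
vsm1(config-port-prof)# no shutdown
vsm1(config-port-prof)# state enabled
```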
12. A summary graphic will be presented highlighting the new settings. Click Finish to complete the installation.
13. Now if we jump back into the console of our Nexus 1000v switch and type show module, we can see that our host vmcluster2.vmlab.local has been added as a Virtual Ethernet Module.
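The output now shows the VEM alongside the VSM, roughly as below. The VEM’s module number, port count, server IP and UUID are placeholders and will differ in your environment; only the host name comes from this lab:

```
vsm1# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------- ------------------ ------------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok

Mod  Server-IP        Server-UUID                          Server-Name
---  ---------------  -----------------------------------  ----------------------
1    192.168.1.80     NA                                   NA
3    <host-ip>        <host-uuid>                          vmcluster2.vmlab.local
```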
14. Log into your ESX server and type vem status (VEM = Cisco Nexus 1000v Virtual Ethernet Module). As you can see, the VEM is loaded on vmnic1 and vmnic2.
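The output looks roughly like this; the port counts and MTU are placeholders from a typical lab run, while the switch name and the vmnic1/vmnic2 uplinks match this demo:

```
[root@vmcluster2 ~]# vem status
VEM modules are loaded

Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vsm1           256        9           256               1500  vmnic1,vmnic2

VEM Agent (vemdpa) is running
```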
All the tutorials on this site are performed in a lab environment to simulate a real-world production scenario. While every effort is made to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.