NSX-V Lab: Controller Deployment

Welcome to Part 5 of the NSX-V Lab Series. In the previous post, we covered how to license the NSX-V Manager appliance. 
In this post we’ll cover the deployment of the NSX controllers and how to adjust them for a lab environment.
A quick disclaimer: do not adjust the controller resources in a production installation; doing so is unsupported.

Log in to your vCenter, select Networking and Security from the home screen and then go to ‘Installation and Upgrade’. Take a moment to browse the tabs in this section, since we will be spending a fair bit of our time here during the initial build.
From the Management pane select ‘NSX Controller Nodes’.
Click ‘+ ADD’

Enter a complex password, again with a minimum of 12 characters, and click ‘Next’.

If you don’t set a complex password, the wizard will let you complete all the other settings and only give you the following error when you click Finish; you won’t get the error when clicking Next from the Password screen.

Give the controller a name
Select the Datacenter where it will run.
Then select the Cluster/Resource Pool where the VM will be located. This does not have to be on the hosts that are being prepared for NSX; for a customer with a management cluster, that’s where we would normally deploy the controllers. In my lab that’s the physical hosts in the Lab cluster.

Select the datastore. For a production deployment, if you have multiple redundant datastores, you should place each controller on a different datastore so that a storage outage on a single datastore won’t take down your control plane.
You can also select the Host and the Folder.
I don’t normally select a host since for production deployments we will create DRS rules later to ensure separation of the NSX controllers.
Now click ‘Select Network’

Select the management network; this needs to have connectivity to the hosts.

Next we need to select the IP Pool, so click ‘Select IP Pool’.

We don’t currently have an IP pool so we can create one directly from here.
When we set up VXLAN later I will cover creating a pool via a different method, but for simplicity, since we have the option here, we will use it. Click ‘CREATE NEW IP POOL’.

Give the IP Pool a Name, a Gateway and the Prefix length.
Optionally we can also set the primary and secondary DNS and the Suffix.
Next click ‘+ ADD’ under IP Pool Range

We need to allocate a range of IPs for the controllers.
Since three controllers is the only supported number, we only need to allocate three IPs here, so enter them in the box. They can be a sequential range, as mine are, or individual IPs.
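As a purely illustrative example, assuming a hypothetical 192.168.110.0/24 management network (substitute your own addressing), the pool might look like this:

Name: Controller-Pool
Gateway: 192.168.110.1
Prefix Length: 24
Primary DNS: 192.168.110.10
IP Pool Range: 192.168.110.201-192.168.110.203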

Once you have created the pool, select it and click ‘OK’.

Review the settings then click ‘Finish’

The controller node will begin deploying. This can take a fair bit of time for the first controller, so just sit back and let it run.

Once it’s finished you will get a green Connected status.
Notice that the Controller Node shows as controller-2; this is because I cancelled a previous deployment, and the system increments the controller number each time. OCD sufferers beware!
With only one controller we are in read-only mode, so we need a second even for a lab. Again, click ‘+ ADD’.

Repeat the previous process but this time we just need to select the IP Pool we created earlier.

Once the second controller has powered up it will change to Connected, and each will show one peer to indicate that they have connected to each other.
Also notice my nodes now show as 1 and 2; yes, my OCD got the better of me 🙂 With two controllers we are now out of read-only mode. For my lab that’s all the controllers I need; however, for a production deployment you must have exactly three controllers, no more and no less, so if it’s production go ahead and repeat the process to deploy the third controller.

You can also add common attributes on the controller page for DNS, NTP and Syslog. These are not required but are recommended, especially syslog.

So our controllers are up, but this is a lab, so we want to reduce the resources assigned to the VMs. However, the NSX-V controller VMs cannot be edited, as the option is greyed out, so we need to manipulate the vCenter database to remove the restriction.
Again, do not do this in a production deployment!

SSH to your vCenter server, then enable and launch the shell:

shell.set --enabled true

shell

Change to the Postgres database with psql:

/opt/vmware/vpostgres/current/bin/psql -U postgres

Then connect to the database:

\connect VCDB

Run the following query to identify the object IDs:

select * from VPX_DISABLED_METHODS;

For each VM, run the command to delete the entries, where MO_ID is the MO ID of the VM in question, for example vm-445:

delete from VPX_DISABLED_METHODS where entity_mo_id_val = 'MO_ID';
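As a hypothetical worked example, if the select above listed your first controller VM with an entity_mo_id_val of vm-445, the delete would be (your MO IDs will differ):

delete from VPX_DISABLED_METHODS where entity_mo_id_val = 'vm-445';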

Once deleted, we need to restart the vCenter vpxd service. Simply connect via SSH again and run each of the commands below.

service-control --stop vmware-vpxd

service-control --start vmware-vpxd
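If you want to confirm vpxd has come back up before returning to the UI, you can check it with the status command:

service-control --status vmware-vpxd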

Log back into vCenter and the Edit Settings option is now available

Power off the controller VMs, then in Edit Settings I’m going to remove the reservations by entering 0, reduce the CPU to 1 vCPU and reduce the RAM to 1 GB.

Once done, power the VMs back on, go back to NSX and wait for the green status indicator.

For production deployments you should ensure that the controller nodes do not reside on the same physical host, as an outage of that host would take down your entire control plane! Unlike NSX HA-deployed objects, the system does not automatically create DRS rules, so we will go ahead and do that now.
Browse to the cluster where the controllers are deployed, go to the Configure tab, scroll down to VM/Host Rules and click Add.

Give the rule a name, change the Type to Separate Virtual Machines, then Add the Controllers as Members.
For a production solution the cluster should have a minimum of three hosts.

To force DRS to update now, go to Monitor, then vSphere DRS, and click Run DRS Now.


We are now nearly ready to install NSX on the hosts. But before we do we have one more stop to make.
NSX-V Lab Part:6 NSX-V Exclude VM’s From Distributed Firewall
