In my last post we covered the all-in-one installation of the Openstack controller with the Nutanix-shipped Acropolis Openstack drivers. The install created a single virtual machine, the Openstack Services VM (OVM). In this post I intend to cover setting up a network topology using the Openstack dashboard and the Neutron service integration with Nutanix, and I will show how this gets reflected in the Acropolis Prism GUI. First, let’s create a public network for our VMs to reside on. Navigate via the Horizon dashboard to Admin > System > Networks…
1. Navigate to Admin > System > Networks on the Horizon dashboard and select +Create Network
Currently, only the local and VLAN “Provider Network Types” are supported by the Nutanix Openstack drivers. In the screenshot below, I am creating a segmented network (segmentation ID 64), named public-network, in the default admin tenant. I specify the network as shared and external.
Do not use a VLAN/network assignment that has already been defined within the Nutanix cluster. Any network/subnet assignment should be done within Openstack using network parameters reserved specifically for your Openstack deployments.
Each network needs a subnet with an associated DHCP pool. The DHCP pool information is sent via the appropriate API call to Acropolis, and the Acropolis management layer associates an IP address from that pool with the vNIC of the Acropolis VM. The Acropolis Openstack driver reads this configuration and, when the cloud instance is powered on in Openstack, registers the IP address with the Openstack VM. See the setup screenshots below…
When creating a subnet, you must specify a DNS server.
3. Subnet creation – requires a subnet name, network address (CIDR notation), gateway address, DHCP pool range, and DNS servers
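The same network and subnet can also be created from the command line with the neutron client. A sketch using the example values from my lab (the name, VLAN ID, and address ranges are illustrative, not prescriptions):

```shell
# Create a shared, external VLAN provider network (segmentation ID 64).
neutron net-create public-network \
    --provider:network_type vlan \
    --provider:segmentation_id 64 \
    --shared --router:external

# Create the subnet with a DHCP allocation pool.
# Remember: a DNS server must be specified.
neutron subnet-create public-network 10.68.64.0/22 \
    --name public-subnet \
    --gateway 10.68.64.1 \
    --allocation-pool start=10.68.66.1,end=10.68.66.50 \
    --dns-nameserver 8.8.8.8
```

Run these with admin credentials sourced; the commands require a live Openstack controller, so they are shown here only to mirror the dashboard steps above.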
We can see the newly created network reflected in Acropolis via the Prism GUI in the screenshot below:
If you add an additional cluster after a network has been configured, that network will not be extended across the new cluster. You then have a choice: either create a new network, or remove the existing network and re-add it so that it gets created across all the currently configured clusters.
Now that we have a network configured we can look at setting up cloud instances to run on it. To do that we need to set up the Glance Image Service and that’s the subject of my next post.
As of Acropolis Base Software (NOS) version 4.6, Nutanix released a set of Acropolis drivers that provide Openstack + Nutanix integration. These drivers allow an Openstack deployment to consume the Acropolis management infrastructure much as it would a cloud service within a datacenter. I intend to use this series of blog posts to walk through setting up the Nutanix Openstack drivers and configuring cloud instances.
The integration stack works by having the Openstack controller installed in a separate Nutanix Openstack Services VM (Nutanix OVM). The Acropolis drivers can be installed into that same OVM. These drivers interpose on the Openstack services for compute, image, network, and volume, translating Openstack requests into the appropriate REST API calls to the Acropolis management layer. In this way, a series of Nutanix clusters can be managed by the Openstack controller.
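As a rough illustration of that translation, a Neutron operation ultimately surfaces as a REST call against the cluster's Prism endpoint on port 9440. The sketch below is illustrative only; the exact API path and version depend on your NOS release, and the credentials are placeholders:

```shell
# Illustrative only: roughly the kind of REST call the drivers issue
# on your behalf. API version/path varies by NOS release.
curl -k -u admin:password \
    https://10.68.64.55:9440/api/nutanix/v2.0/networks
```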
Openstack – Acropolis Driver integrated stack
The Acropolis drivers can be installed in one of two modes:
All-In-One Mode: You use the OpenStack controller included in the Nutanix OVM to manage the Nutanix clusters. The Nutanix OVM runs all the OpenStack services and the Acropolis OpenStack drivers.
Driver-Only Mode: You use a remote (or upstream) OpenStack controller to manage the Nutanix clusters, and the Nutanix OVM includes only the Acropolis OpenStack drivers.
In either case, Nutanix currently supports only the Kilo release of Openstack.
I will go into further detail on the Openstack and Acropolis architecture integration in future posts. For now, let’s start by getting things set up. The first requirement is to download the OVM image from the Nutanix Portal and then add it to the Acropolis Image Service….
$ wget http://download.nutanix.com/nutanix-open-stack/nutanix_openstack-2015.1.0-1.ovm.qcow2
and upload locally....
<acropolis> image.create ovm source_url=nfs://freenas/naspool/openstack/nutanix_openstack-2015.1.0-1.ovm.qcow2 container=Image-Store
Alternatively, Prism allows you to upload the image from your desktop, if preferred.
or, go direct via the internet...
<acropolis> image.create ovm source_url=http://download.nutanix.com/nutanix-open-stack/nutanix_openstack-2015.1.0-1.ovm.qcow2 container=Image-Store
Note: as I had already created several containers on my cluster, I needed to specify the name of the preferred container in the syntax above. If only one container exists and no container name is supplied, the container name default is assumed. Otherwise, the following error is shown…
kInvalidArgument: Multiple containers have been created, cannot auto select
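Once the create completes, you can confirm the image is present from the same Acropolis shell (image.list is a standard acli command):

```shell
<acropolis> image.list
```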
Next, create the Openstack Services VM (OVM) using the Acropolis command line on a CVM in the Nutanix cluster. This can all be done very easily via the Prism GUI, but for reasons of space I am going the CLI route. For full details, refer to the Install Guide on the Nutanix Portal (select Downloads > Tools & Firmware from the menus/drop-downs).
Note: if you are unfamiliar with creating a network for your VMs to reside on then take a look here , where I discuss setting up VMs and associated disks and networking on the Nutanix platform.
For now, let’s consider the all-in-one install mode; there are just three steps….
o Login to the VM using the supplied credentials (via ssh)
o Add the OVM
[root@nx-ovm]# ovmctl --add ovm --name nx-ovm --ip 10.68.64.172 --netmask 255.255.252.0 --gateway 10.68.64.1 --nameserver 22.214.171.124 --domain nutanix.com
o Add the Openstack Controller
[root@nx-ovm]# ovmctl --add controller --name kilo --ip 10.68.64.172
o Add the Nutanix clusters that you want to manage with Openstack, one cluster at a time
[root@nx-ovm]# ovmctl --add cluster --name SAFC --ip 10.68.64.55 --username admin --password nutanix/4u --container_name DEFAULT-CTR --num_vcpus_per_core 4
A couple of points about the CLI above. The IP address for both the OVM and the Openstack controller is the same, which should make sense as they are both part of the same VM in this case. Also, register the cluster using the Cluster Virtual IP rather than the IP of an individual CVM: the virtual IP is the failover address for the cluster itself, so this matters for HA. Finally, you need to specify a container name if it is not default, or if more than one container exists.
If you remove or rename the container with which you added a Nutanix cluster, you must restart the services on the OpenStack Controller VM by running the following command:
ovmctl --restart services
You can now verify the base install using….
[root@nx-ovm ~]# ovmctl --show
Allinone - Openstack controller, Acropolis drivers
1 OVM name : nx-ovm
IP : 10.68.64.172
Netmask : 255.255.252.0
Gateway : 10.68.64.1
Nameserver : 126.96.36.199
Domain : nutanix.com
Openstack Controllers configuration:
1 Controller name : kilo
IP : 10.68.64.172
Auth strategy : keystone
Auth region : RegionOne
Auth tenant : services
Auth Nova password : ********
Auth Glance password : ********
Auth Cinder password : ********
Auth Neutron password : ********
DB Nova : mysql
DB Cinder : mysql
DB Glance : mysql
DB Neutron : mysql
DB Nova password : ********
DB Glance password : ********
DB Cinder password : ********
DB Neutron password : ********
RPC backend : rabbit
RPC username : guest
RPC password : ********
Image cache : disable
Nutanix Clusters configuration:
1 Cluster name : SAFC
IP : 10.68.64.55
Username : admin
Password : ********
Vnc : 49795
Vcpus per core : 4
Container name : DEFAULT-CTR
Services enabled : compute, volume, network
Version : 2015.1.0
Release : 1
Summary : Acropolis drivers for Openstack Kilo.
Additionally, point your browser at the IP address of the OVM (http://10.68.64.172), navigate to Admin > System Information, and select the Services tab; you should see that all Openstack services are provided by the OVM IP address. Similarly, the Compute, Block Storage, and Network tabs should report that these services are being provided via the OVM.
One other check would be to look at Admin > Hypervisors: a Nutanix cluster reports as a single hypervisor in the Openstack config – see below:
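The same checks can be run from the OVM shell with the standard Openstack clients. A sketch, assuming an admin credentials file exists on the OVM (its name and location may differ in your install):

```shell
# Source admin credentials first (filename is an assumption -
# adjust to whatever your OVM install provides).
source ~/keystonerc_admin

# Each registered Nutanix cluster should appear as a single
# nova-compute "host" in the service listing.
nova service-list
neutron agent-list
cinder service-list
```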
I hope this is enough to get people started looking at and trying out Openstack deployments using Nutanix. In the next series of posts I will look at configuring images, setting up networks/subnets and then move on to creating instances.