As of Acropolis Base Software (NOS) version 4.6, Nutanix has released a set of Acropolis drivers that provide OpenStack + Nutanix integration. These drivers allow an OpenStack deployment to consume the Acropolis management infrastructure in much the same way as it would a cloud service or an in-house datacenter. I intend to use this series of blog posts to walk through setting up a Nutanix OpenStack drivers deployment and configuring cloud instances.
The integration stack works by having the OpenStack controller installed in a separate Nutanix OpenStack Services VM (Nutanix OVM). The Acropolis drivers can be installed into that same OVM. These drivers interpose on the OpenStack services for compute, image, network and volume, translating OpenStack requests into the appropriate REST API calls to the Acropolis management layer. In this way a series of Nutanix clusters can be managed by the OpenStack controller.
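To give a rough sense of what that translation looks like: the Acropolis management layer exposes a REST API through the Prism gateway on the cluster (port 9440). The exact calls the drivers make are internal to the integration, but a request of the following general shape – shown here as a hypothetical VM listing against the v1 API, with placeholder credentials – is the kind of thing being issued under the covers:

curl -k -u <prism_user>:<prism_password> https://<cluster_virtual_ip>:9440/PrismGateway/services/rest/v1/vms/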
The Acropolis drivers can be installed in one of two modes:
All-In-One Mode: You use the OpenStack controller included in the Nutanix OVM to manage the Nutanix clusters. The Nutanix OVM runs all the OpenStack services and the Acropolis OpenStack drivers.
Driver-Only Mode: You use a remote (or upstream) OpenStack controller to manage the Nutanix clusters, and the Nutanix OVM includes only the Acropolis OpenStack drivers.
In either case, Nutanix currently supports only the Kilo release of OpenStack.
I will go into further detail on the OpenStack and Acropolis architecture integration in future posts. For now, let’s start by getting things set up. The first requirement is to download the OVM image – from the Nutanix Portal – and then add it to the Acropolis Image Service….
$ wget http://download.nutanix.com/nutanix-open-stack/nutanix_openstack-2015.1.0-1.ovm.qcow2

and upload locally....

<acropolis> image.create ovm source_url=nfs://freenas/naspool/openstack/nutanix_openstack-2015.1.0-1.ovm.qcow2 container=Image-Store

Also, Prism allows upload from your desktop if preferred/possible. Or, go direct via the internet...

<acropolis> image.create ovm source_url=http://download.nutanix.com/nutanix-open-stack/nutanix_openstack-2015.1.0-1.ovm.qcow2 container=Image-Store
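Once the image create task completes, it's worth a quick check that the image has landed before moving on (output will obviously vary per cluster):

<acropolis> image.list

The newly created ovm image should appear in the list; that is the name we clone the OVM's disk from in the next step.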
Note: as I had already created several containers on my cluster, I needed to specify the name of the preferred container in the above syntax. If no container name is supplied, a single (default) container is expected and selected automatically; with multiple containers the following error is shown otherwise…
kInvalidArgument: Multiple containers have been created, cannot auto select
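If you are unsure which containers already exist on the cluster, they can be listed from any CVM; a minimal check using ncli (container names here are just from my lab):

$ ncli ctr ls

Whichever container you want the image to live in is the name to pass as the container= argument above.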
Create the OpenStack Services VM (OVM) using the Acropolis command line on a CVM in the Nutanix cluster. This can all be done very easily via the Prism GUI, but for reasons of space I am going the CLI route. Refer to the Install Guide on the Nutanix Portal (select Downloads > Tools & Firmware from the menus/drop-downs).
<acropolis> vm.create nx-ovm num_vcpus=2 memory=16G
nx-ovm: complete
<acropolis> vm.disk_create nx-ovm clone_from_image=ovm
DiskCreate: complete
<acropolis> vm.nic_create nx-ovm network=vlan.64
NicCreate: complete
<acropolis> vm.on nx-ovm
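Before trying to ssh to the new VM, a quick sanity check from the same acropolis shell confirms the OVM exists, is powered on, and has its disk and NIC attached:

<acropolis> vm.list
<acropolis> vm.get nx-ovm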
Note: if you are unfamiliar with creating a network for your VMs to reside on, then take a look here, where I discuss setting up VMs and their associated disks and networking on the Nutanix platform.
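For reference, a basic Acropolis managed network like the vlan.64 used above can be created from the acropolis shell along these lines (the VLAN ID is specific to my lab; the linked post covers IP address management and the other options):

<acropolis> net.create vlan.64 vlan=64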
For now, let’s consider the all-in-one install mode; there are just three steps….
o Log in to the VM using the supplied credentials (via ssh)

o Add the OVM:

[root@nx-ovm]# ovmctl --add ovm --name nx-ovm --ip 10.68.64.172 --netmask 255.255.252.0 --gateway 10.68.64.1 --nameserver 8.8.8.8 --domain nutanix.com

o Add the OpenStack controller:

[root@nx-ovm]# ovmctl --add controller --name kilo --ip 10.68.64.172

o Add the Nutanix clusters that you want to manage with OpenStack, one cluster at a time:

[root@nx-ovm]# ovmctl --add cluster --name SAFC --ip 10.68.64.55 --username admin --password nutanix/4u --container_name DEFAULT-CTR --num_vcpus_per_core 4
Just to point out one or two things in the above CLI: the IP address is the same for both the OVM and the OpenStack controller – this should make sense, as they are both part of the same VM in this case. Also, use the Cluster Virtual IP when registering the cluster. This is for HA reasons: rather than using the IP of an individual CVM, you use the failover IP for the cluster itself. Finally, you need to specify a container name if it is not default (or if there is more than one container).
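If you are not sure whether a Cluster Virtual IP has been configured, it is reported in the cluster details, for example via ncli from any CVM:

$ ncli cluster info

Look for the external/cluster IP address field – if it is empty, set a Cluster Virtual IP (easily done in Prism) before registering the cluster with ovmctl.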
Pro Tip
If you remove or rename the container with which you added a Nutanix cluster, you must restart the services on the OpenStack Controller VM by running the following command:
ovmctl --restart services
You can now verify the base install using….
[root@nx-ovm ~]# ovmctl --show

Role:
-----
  Allinone - Openstack controller, Acropolis drivers

OVM configuration:
------------------
1 OVM name : nx-ovm
  IP : 10.68.64.172
  Netmask : 255.255.252.0
  Gateway : 10.68.64.1
  Nameserver : 8.8.8.8
  Domain : nutanix.com

Openstack Controllers configuration:
------------------------------------
1 Controller name : kilo
  IP : 10.68.64.172
  Auth
      Auth strategy : keystone
      Auth region : RegionOne
      Auth tenant : services
      Auth Nova password : ********
      Auth Glance password : ********
      Auth Cinder password : ********
      Auth Neutron password : ********
  DB
      DB Nova : mysql
      DB Cinder : mysql
      DB Glance : mysql
      DB Neutron : mysql
      DB Nova password : ********
      DB Glance password : ********
      DB Cinder password : ********
      DB Neutron password : ********
  RPC
      RPC backend : rabbit
      RPC username : guest
      RPC password : ********
  Image cache : disable

Nutanix Clusters configuration:
-------------------------------
1 Cluster name : SAFC
  IP : 10.68.64.55
  Username : admin
  Password : ********
  Vnc : 49795
  Vcpus per core : 4
  Container name : DEFAULT-CTR
  Services enabled : compute, volume, network

Version:
--------
  Version : 2015.1.0
  Release : 1
  Summary : Acropolis drivers for Openstack Kilo.
Additionally, if you point your browser at the IP address of the OVM – http://10.68.64.172 – and navigate to Admin > System Information and select the Services tab, you should see that all OpenStack services are provided by the OVM IP address. Similarly, under the Compute, Block Storage and Network tabs it should also report that these services are being provided via the OVM.
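The same sanity checks can be made from the OVM command line using the standard Kilo clients. The admin credentials file below is an assumption based on typical controller installs, so substitute whatever the OVM actually provides:

[root@nx-ovm ~]# source ~/keystonerc_admin    # assumed admin rc file - adjust to your install
[root@nx-ovm ~]# nova service-list            # compute services
[root@nx-ovm ~]# cinder service-list          # volume services
[root@nx-ovm ~]# neutron agent-list           # network agents
[root@nx-ovm ~]# glance image-list            # images registered with Glance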
One other check is to look at Admin > Hypervisors, where a Nutanix cluster reports as a single hypervisor in the OpenStack configuration.
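The CLI equivalent (again assuming the admin rc file from above has been sourced):

[root@nx-ovm ~]# nova hypervisor-list

Each Nutanix cluster added with ovmctl should show up as a single entry here.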
I hope this is enough to get people started looking at and trying out OpenStack deployments using Nutanix. In the next series of posts I will look at configuring images, setting up networks/subnets, and then move on to creating instances.
Have you been able to get networking to work? I can't seem to create either provider or tenant networks, either via the CLI or the GUI. I just get errors.
Hi Brian
If you are using the Openstack Drivers from Nutanix as part of your Openstack deployment, then please take a look at my subsequent post on configuring networks via the Neutron service integration we provide. If that’s the case and you are still having issues then let me know and we will make sure we assist further.
Thanks
ray