
The Creator Supreme! … or How to be the Kenny Dalglish of Nutanix API automation

For the non-European football fans out there, Kenny Dalglish, or “King Kenny” as he was known to both the Liverpool and Celtic faithful, was once described in a match commentary as “the creator supreme”. In this short series of posts covering the REST capabilities for managing the Nutanix Enterprise Cloud, I hope to show that it’s possible for us all to be a “creator supreme”, just like King Kenny!

Swagger

The first place to look when working with the API is the REST API Explorer itself. You invoke the Explorer page by right-clicking the admin pull-down menu (top right of Prism home page) and selecting the REST API Explorer option.

It’s good practice to create a separate user login for the REST Explorer. The REST API Explorer is essentially the documentation for the API. It’s produced via Swagger, an open standard that takes your API spec and generates interactive documentation, which you can view and use to test API calls from a browser.

Images

Let’s start by taking a look at a simple POST method that will list all currently available images:

POST /images/list

Select the above method in the images section of the REST API Explorer to expand the method details:

In order to try out the REST method, double-click the Model Schema box (right-hand side above) – its contents will then be populated into the get_entities_request box on the left-hand side. You can edit the entities according to the information you want to retrieve. For example, here’s the bare minimum JSON payload needed to request information from the Images catalogue:

 {
   "kind": "image",
   "offset": 0,
   "length": 10
 }

Note that with our pagination we start at offset zero – the first image – and return up to ten images, as defined by the length parameter. With the JSON payload entered as above we can press the Try it out! button and see the method in action.
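
Incidentally, to fetch the next page of ten images you would simply advance the offset by the page length – the same payload with the offset bumped:

 {
   "kind": "image",
   "offset": 10,
   "length": 10
 }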

The results of the method call are displayed below it: the curl syntax for invoking the method with its JSON payload, along with the individual Request URL and the Response Body. We can use the curl syntax to call the method programmatically outside of the Explorer, from Bash or Python for example.

Once we begin to use the methods independently of the Explorer, then in addition to curl you should consider installing a JSON command-line processor such as jq, and using a JSON linter to validate the JSON syntax of your data payloads. How these tools might be used is shown throughout this post.
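
As a quick example of the kind of check this enables, jq can validate and pretty-print a payload file before it is ever sent to the API – it exits non-zero on a syntax error, so it doubles as a lightweight linter:

jq . list_images_v3.json && echo "payload OK"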

curl

Let’s recap the previous POST method, but this time run it from the command line. In this instance we load the JSON payload (see above) from the file list_images_v3.json using the -d option. The -k option, or --insecure, allows the command to proceed even though I am using self-signed SSL/TLS certs. The -s option simply disables all progress indicators.

curl -s --user api:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.XX.XX.60:9440/api/nutanix/v3/images/list" | jq 

Piping the output from the curl command into jq provides formatted, syntax-highlighted output that’s easier to read. To make this more obvious, let’s add some options to the jq command line and pull out just one image reference:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name=="CentOS7-x86_64-Generic Cloud")'

 {
   "status": {
     "state": "COMPLETE",
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "retrieval_uri_list": [
         "https://127.0.0.1:9440/api/nutanix/v3/images//file"
       ],
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64",
       "size_bytes": 8589934592
     },
     "description": "Generic Cloud"
   },
   "spec": {
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64"
     },
     "description": "Generic Cloud"
   },
   "metadata": {
     "last_update_time": "2019-03-27T10:47:15Z",
     "kind": "image",
     "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
     "spec_version": 0,
     "creation_time": "2019-03-27T10:47:15Z",
     "categories": {}
   }
 }

All well and good if you know the exact name of your image. What if you don’t? See below:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name | . and contains("CentOS"))'

{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 713031680
    },
    "description": "Minimal"
  },
  "spec": {
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Minimal"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "dd482003-99f4-45df-9406-1dc9859418c4",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}
{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 8589934592
    },
    "description": "Generic Cloud"
  },
  "spec": {
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Generic Cloud"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}

One of the prime uses for this kind of command is to retrieve only the information required to populate the schema for another REST method (as we’ll see shortly). For example, you may only want a subset of entries, conveniently labelled:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | {name: .spec.name, type: .spec.resources.image_type, uuid: .metadata.uuid} | select (.name | . and contains("CentOS"))'
{
  "name": "CentOS7-x86_64-Minimal",
  "type": "ISO_IMAGE",
  "uuid": "dd482003-99f4-45df-9406-1dc9859418c4"
}
{
  "name": "CentOS7-x86_64-Generic Cloud",
  "type": "DISK_IMAGE",
  "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef"
}
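
Taking that one step further, the UUID can be captured straight into a shell variable, ready to feed into another REST method. A minimal sketch, using the same payload file and credentials as above (jq -r strips the surrounding quotes):

IMAGE_UUID=$(curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq -r '.entities[] | select(.spec.name == "CentOS7-x86_64-Minimal") | .metadata.uuid')
echo $IMAGE_UUID    # dd482003-99f4-45df-9406-1dc9859418c4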

Upload

Let’s have a look at uploading an image to the image repository on your Prism Central instance. The following is the required schema:

cat upload_image_v3.json
{
     "spec": {
         "name": "test",
         "resources": {
             "version": {
                 "product_version": "test",
                 "product_name": "test"
             },
             "architecture": "X86_64",
             "image_type": "DISK_IMAGE",
             "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
         }
     },
     "api_version": "3.1.0",
     "metadata": {
         "kind": "image"
     }
 }

which we can use as follows:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @upload_image_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images" | jq .
 {
   "status": {
     "state": "PENDING",
     "execution_context": {
       "task_uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"
     }
   },
   "spec": {
     "name": "test",
     "resources": {
       "image_type": "DISK_IMAGE",
       "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img",
       "version": {
         "product_version": "test",
         "product_name": "test"
       },
       "architecture": "X86_64"
     }
   },
   "api_version": "3.1",
   "metadata": {
     "owner_reference": {
       "kind": "user",
       "uuid": "00000000-0000-0000-0000-000000000000",
       "name": "admin"
     },
     "kind": "image",
     "spec_version": 0,
     "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
   }
 }

Tasks

Notice the PENDING status in the output above. We can follow the progress of the image upload by passing the task_uuid entry to the tasks method. The call can be run repeatedly until the task, in this case the upload, is complete:

curl -s --user apiuser:<password> -k -X GET --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/tasks/f1456be3-21a8-45ab-9dc3-c323973e6f3f" | jq .
 {
   "status": "RUNNING",
   "last_update_time": "2019-05-30T13:19:00Z",
   "logical_timestamp": 1,
   "entity_reference_list": [
     {
       "kind": "image",
       "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
     }
   ],
   "start_time": "2019-05-30T13:19:00Z",
   "creation_time": "2019-05-30T13:18:59Z",
   "start_time_usecs": 1559222340033005,
   "cluster_reference": {
     "kind": "cluster",
     "uuid": "e0cca748-66c4-45fb-95e2-10836439ea15"
   },
   "subtask_reference_list": [],
   "progress_message": "create_image_intentful",
   "creation_time_usecs": 1559222339906023,
   "operation_type": "create_image_intentful",
   "percentage_complete": 0,
   "api_version": "3.1",
   "uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"

Delete

Finally, let’s delete the image. This is done by specifying the image UUID in the delete method call. We covered above how to get the UUID for an image (or any entity, really), so let’s just show the call:

curl -s --user apiuser:<password> -k -X DELETE --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/images/081e562f-6c26-4897-bc36-a74e4843bb57" | jq .
 {
   "status": {
     "state": "DELETE_PENDING",
     "execution_context": {
       "task_uuid": "b9ae5dce-c79d-4bca-b77d-b322949f71e5"
     }
   },
   "spec": "",
   "api_version": "3.1",
   "metadata": {
     "kind": "image"
   }
 }

You can of course track the deletion progress via the tasks method using the task_uuid produced above.

Conclusion

Useful Resources recap:
HTTP Response status codes 
jq
curl
JSON Lint - The JSON Validator

Hopefully this will help people get started on their API path; we haven’t really scratched the surface of what can be done, but this post has at least demystified where and how to make a start. In subsequent posts I hope to show more ways to glean info from the API Explorer itself and how to use it to build more complex REST methods. Until then, check out the Nutanix Developer Community site. Good luck, creators!

Using CALM Blueprints – Automation is the new punk!

Repeatability

I can imagine there are a lot of people like me who are continually setting up and tearing down environments in order to run application benchmarks, test out APIs or try various new features. The consequence of this is that I have umpteen sources of best-practice notes for each and every technology stack I get involved with. What’s worse, some of the configurations and their changes are identical across multiple applications. So I am often digging around in directories with somewhat unhelpful names like “Notes” and “Best_Practices” or … wait for it … “Tunings”.

As part of an ongoing move towards Infrastructure as Code, I am now on a mission to get all the crufty bits of info I keep here, there and everywhere into a source-code repository format. To that end I have been looking at the Multi-VM blueprint functionality of Nutanix Calm (Automated Lifecycle Management). Calm allows me to create a blueprint that reuses all my original code snippets and config edits, whether they are in Bash, Python, PowerShell and so on. Once created, the blueprint can be stored in a repository on GitHub, for example. Then every time I use that blueprint I get a repeatable deployment that is the same, each and every time I run it.

Here’s one I made earlier

Let’s take a look at building out a stack to benchmark Elasticsearch using esrally. I covered some of this in my last post. I want to start off by discussing a few prerequisites. First and foremost: the image used to create the virtual machines (VMs). I used CentOS 7 cloud images, which require ssh key-based access for the default user (centos). This means I need to store both public and private keys in the various parts of the configuration. See below for the Configuration > DOWNLOADABLE IMAGE CONFIGURATION and Credentials sections in the blueprint.

The blueprint automatically creates three virtual machines (VMs): one to host a single Elasticsearch instance, one for the Kibana instance, and another to run the esrally workload generator. See below for the basic layout of the blueprint. As the Kibana instance needs to know the address of the Elasticsearch instance, I need to create a dependency between the Elasticsearch and Kibana services. I do this by creating “an edge” between the services, delineated by the white line. That way, the Kibana configuration/install only proceeds once the Elasticsearch configuration/install has completed; all the underlying VMs, however, are created simultaneously.

Services and dependencies

Each service requires a virtual machine in order to provide that service, so configure each VM with storage (vDisks), network (NIC) and ssh access (Credentials), along with any guest customisation and so on. For the Search_Index (Elasticsearch) service, I built the Elasticsearch VMs to host six 200GB vdisks, and used the cloud-config already installed in the image to set access keys and permissions. See below…

Application Profiles and variables

Application profiles not only let you specify the platform (or substrate, in Calm-speak) – here I am deploying to a Nutanix platform, though this works just as well with AWS, GCP and Azure – they also encapsulate variables which are then passed to the application. You can see the variables I have created in the application profile below. Using these, I could very quickly deploy several application stacks from the one blueprint, each with a different Java heap size, and then make performance comparisons between them: each stack would be exactly the same apart from the one changed variable. By extension I could add other variables I am interested in, like LVM stripe width or filesystem block size and so on.

Application Installation and Configuration

How the variables in the application profiles get used is shown below in the package install task, where the bulk of any configuration is done. Tasks can be assigned to any action related to a service or the application profile, so a start, restart, stop or delete can each have an associated task. Each of the services I configured has a package install task, and that is where the application profile variables come in. Below is the task for the Elasticsearch/Search_Index service.

The canvas (above) shows a number of ways to update or edit files based on various patterns. Note that all config file edits/updates are done in place. You should avoid using a CLI that relies on creating temporary files: your package install script could end up trying to write or access files outside of the deployment environment, which is a potential security hole that Calm will not allow. Notice how the variable macros in the above package tasks are invoked below:

...
sudo sed -i 's/-Xms1g/-Xms@@{java_heap_size}@@g/' /etc/elasticsearch/jvm.options
...
sudo sed -i 's%path.data: /var/lib/elasticsearch%path.data: @@{elastic_data_path}@@%' /etc/elasticsearch/elasticsearch.yml
...

Calm internal macros are also available, for example to pass the address of one service into another – this is from the package task for the Data Visualisation service (the Kibana instance):

...
sudo sed -i 's%^#elasticsearch.hosts: \["http://localhost:9200"\]%elasticsearch.hosts: \["http://@@{Search_Index.address}@@:@@{elastic_http_port}@@"\]%' /etc/kibana/kibana.yml
...

or to generate cardinal numbers for unique VM names (see the VM configuration section of any service):

elastic-@@{calm_array_index}@@
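
Pulling these pieces together, a package install task is ultimately just a shell script in which Calm expands the macros before execution. The following is a minimal sketch of what part of the Search_Index install task might look like – the variable names (java_heap_size, elastic_data_path) are those from the application profile above, and an Elasticsearch yum repository is assumed to already be configured on the VM:

#!/bin/bash
# Hypothetical fragment of the Search_Index package install task.
# @@{...}@@ macros are substituted by Calm before this runs on the VM.
sudo yum install -y elasticsearch
sudo sed -i 's/-Xms1g/-Xms@@{java_heap_size}@@g/' /etc/elasticsearch/jvm.options
sudo sed -i 's/-Xmx1g/-Xmx@@{java_heap_size}@@g/' /etc/elasticsearch/jvm.options
sudo sed -i 's%path.data: /var/lib/elasticsearch%path.data: @@{elastic_data_path}@@%' /etc/elasticsearch/elasticsearch.yml
sudo systemctl enable --now elasticsearch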

Provisioning and Auditing

That’s the blueprint complete; it should save without errors or warnings. Now it’s time to launch the blueprint to build the application stack. At this point you can name what will become your running application instance and change or set any runtime variables. Once launched, the blueprint is queued, verified and then cloned, ready to run. While it’s running you can audit the steps of the workflow in the blueprint:

 

Once the application is marked RUNNING, you can then either connect to individual VMs, or access an application via a browser. It’s common for all means of VM or application access to be placed in the blueprint description (note: the description also expands macro variables – see below):

The following is an example of the /etc/motd shown when logging into the VM on which esrally is installed:

# ssh -i ./keys.pem -l centos 10.68.58.87
Last login: Wed Jul 17 15:46:57 2019 from 10.68.64.60

Configuration successfully written to /home/centos/.rally/rally.ini. Happy benchmarking!

More info about Rally:

* Type esrally --help
* Read the documentation at https://esrally.readthedocs.io/en/1.2.1/
* Ask a question on the forum at https://discuss.elastic.co/c/elasticsearch/rally

To get started:
esrally list tracks

Or....

esrally --pipeline=benchmark-only --target-hosts=10.68.58.177:9200 \
--track=eventdata --track-repository=eventdata --challenge=bulk-size-evaluation

Conclusion 

The final version (for now) of the blueprint is available to clone or download at:

https://github.com/rayhassan/calm-bp-elastic

Upload the blueprint to the Calm service on Prism Central, then work through it as you read this post, making your own changes if required. At the end (~10 minutes) you will have a running environment with which to test various Elasticsearch workloads. I intend to work through more blueprints related to other cloud-native applications, with a view to developing larger-scale deployments. Stay tuned.

 

Elasticsearch Sizing on Nutanix

One node, one index, one shard

The answer to the question “how big should I size my Elasticsearch VMs, and what kind of performance will I get?” always comes down to the somewhat disappointing “It depends!” It depends on the workload – be it index- or search-heavy – on the type of data being transformed, and so on.

The way to size your Elasticsearch environment is to find your “unit of scale”: the performance characteristics you get for your workload from a single-shard index running in a single virtual machine (VM). Once you have a set of numbers for a particular VM config, you can scale throughput and so on by increasing the number of VMs and/or indexes to handle additional workload.

Virtual Machine Settings

The accepted sweet spot when sizing a VM for an indexing workload is something like 64GB RAM and 8+ vCPUs. You can of course right-size this further where necessary, thanks to virtualisation. I assign just below half the RAM (31GB) to the heap of the Elasticsearch instance, which ensures the JVM uses compressed Ordinary Object Pointers (OOPs) on a 64-bit system. This heap memory also needs to be locked into RAM:

# grep -v ^# /etc/elasticsearch/elasticsearch.yml

cluster.name: esrally
node.name: esbench

path.data: /elastic/data01    # <<< single striped data volume 
bootstrap.memory_lock: true   # <<< lock heap in RAM
network.host: 10.68.68.202
http.port: 9200
discovery.zen.minimum_master_nodes: 1  # <<< single node test cluster
xpack.security.enabled: false

# grep -v ^# /etc/elasticsearch/jvm.options
…
-Xms31g
-Xmx31g
…

From the section above, notice the single mount point for the path.data entry: I am using a 6-vdisk LVM stripe. While you can specify per-vdisk mount points in a comma-separated list, unless you have enough indices to keep all the spindles turning (all the time), you are better off with logical volume management. You can confirm you are using compressed OOPs by checking for the following log entry at startup:

[2017-08-07T11:06:16,849][INFO ][o.e.e.NodeEnvironment ] [esrally02] heap size [30.9gb], compressed ordinary object pointers [true]
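
For reference, building that striped data volume is standard LVM work. A minimal sketch, assuming the six vdisks appear as /dev/sdb through /dev/sdg (device names and filesystem choice are environment-specific):

# Stripe a logical volume across all six vdisks and mount it as the Elasticsearch data path
sudo pvcreate /dev/sd{b..g}
sudo vgcreate elastic_vg /dev/sd{b..g}
sudo lvcreate -n data01 -i 6 -l 100%FREE elastic_vg   # -i 6 stripes across all six disks
sudo mkfs.xfs /dev/elastic_vg/data01
sudo mkdir -p /elastic/data01
sudo mount /dev/elastic_vg/data01 /elastic/data01
sudo chown elasticsearch:elasticsearch /elastic/data01  # after the elasticsearch package is installed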

Operating System Settings

Set the required kernel settings:

# sysctl -p 
…
vm.swappiness = 0
vm.overcommit_memory = 0
vm.max_map_count = 262144
…

Ensure file descriptor limits are increased:

# ulimit -n 65536

verify...

curl -XGET http://10.68.68.202:9200/_nodes/stats/process?filter_path=**.max_file_descriptors
…
{"process":{"max_file_descriptors":65536}}}}
…

Disable swapping, either via the CLI or by removing swap entries from /etc/fstab:

# sudo swapoff -a

Elasticsearch Bulk Index Tuning

To improve the indexing rate and increase shard segment size, you can disable the refresh interval during an initial load. Afterwards, setting it to 30s (default: 1s) in production means larger segment sizes and potentially less merge pressure at a later date.

curl -X PUT "10.68.68.202:9200/elasticlogs/_settings" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "refresh_interval" : "-1"
    }
}'
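
Once the initial load is complete, the interval can be set back in line with the 30s suggestion above – the same call with a different value:

curl -X PUT "10.68.68.202:9200/elasticlogs/_settings" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "refresh_interval" : "30s"
    }
}'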

Recall that we only want a single-shard index and no replication for our testing. We can achieve this either by disabling replication on the fly or by creating a template that applies the desired settings at index creation.

Disable replication globally ...

curl -X PUT "10.68.68.202:9200/_settings" -H 'Content-Type: application/json' -d '{"index" : {"number_of_replicas" : 0}}’

or create a template – in this case, for a series of index name wildcard patterns...

# cat template.json
{
        "index_patterns": [ "ray*", "elasticlogs" ],
        "settings": {
                "number_of_shards": 1,
                "number_of_replicas": 0
        }
}
curl -s -X PUT "10.68.68.202:9200/_template/test_template" -H 'Content-Type: application/json' -d @template.json
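
A quick GET against the same endpoint confirms the template has been stored as intended:

curl -s -X GET "10.68.68.202:9200/_template/test_template" | jq .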

Elasticsearch Benchmarking tools

esrally is a macrobenchmarking tool for Elasticsearch. To install and configure it, use the quickstart guide. Full information is available here:

 https://github.com/elastic/rally

rally-eventdata-track – a repository containing a Rally track for simulating event-based data use cases. The track supports bulk indexing of auto-generated events as well as simulated Kibana queries.

 https://github.com/elastic/rally-eventdata-track

esrally --pipeline=benchmark-only --target-hosts=10.68.68.202:9200 \
--track=eventdata --track-repository=eventdata --challenge=bulk-size-evaluation

eventdata bulk index – 5000 events/request, highlighted @ indexing rate of ~50k docs/sec

httpd logs index test – highlighted @ indexing rate of ~80k docs/s

Elasticsearch is just one of a great many cloud-native applications that run successfully on Nutanix Enterprise Cloud. I am seeing more and more opportunities to assist our account teams in the sizing and deployment of Elasticsearch. However, unlike other search and analytics platforms, Elasticsearch has no ready-made formula for sizing. This post will hopefully allow people to make a start on their Elasticsearch sizing on Nutanix and, in addition, help identify future steps to improve their performance numbers.

Further Reading

Elasticsearch Reference

Openstack + Nutanix: Nova and Cinder integration

Now that we have set up an all-in-one deployment of the Acropolis OVM, configured networking, and populated an image registry, it’s time to look at the steps required to launch virtual machine (VM) instances and set up appropriate storage. The first step is to provide the necessary network access rules for the VMs, if they don’t already exist. The easiest way to do this is to create rules that allow SSH (port 22) access from any address range and make the VMs pingable.

Compute > Access & Security > Security Groups

Next, create an SSH key pair that can be assigned to your instances, so that subsequent remote login to the VMs is controlled by possession of the appropriate private key. I will show how this is used later in the post, when we launch an instance. First, select the Key Pairs tab in the Access & Security frame and save the resulting PEM file to be used when accessing your VMs.

access-kp-create

Create a named key-pair (for example fedora-kp) for the set of instances you will create.

As an example, I am going to create a single volume using the Cinder service, in order to show that we can attach it to a running VM. In this instance, Cinder gets redirected to the Acropolis Volume API and the resulting volume is attached to the instance as an iSCSI block device.

volume-create
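
For the CLI-inclined, the same volume can be created with the cinder client from the OVM – a sketch, assuming the admin credentials file used elsewhere in this series (the volume name and 10GB size are illustrative):

source keystonerc_admin
cinder create --display-name data-vol 10    # 10GB volume, redirected to the Acropolis Volume API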

The next step is to spin up a number of VM instances. I have given a generic instance prefix for the names, and I am choosing to boot a Fedora 23 Cloud image. You can see the Flavour Details in the side panel in the screenshot below – note the root disk size is big enough to accommodate the base image.

instances-launch

I also need to specify the SSH key pair I am using and the network on which the instances get launched. See below:

instances-network

instances-kps

At this point I can go ahead and launch my instances. We can see all 10 chosen instances get created below, along with the IP addresses assigned from the already defined network, the instance flavour, and the named key pair…

instance-list

So now, if we were to take a look at the Nutanix cluster backend via Prism, we can see those VM instances created on the cluster and how they are spread across the hypervisor hosts. That’s all down to Acropolis management and placement.

prims-vm-list

We can dig a little deeper into the Acropolis functionality and show how each of the steps taken by the Acropolis REST API calls has built and deployed the VMs on the backend. Here’s the list of VMs that were created, as shown on the http://<CVM-IP>:2030 page.

2030-vm-list

And we can see the breakdown of the individual task steps: how long each one took, how long it queued, whether it ultimately succeeded, and so on. The key takeaway from all this is that the speed of creation of the VM instances is largely down to the Acropolis management interfaces consumed by the REST API calls.

ergon-task-list

Let’s take one of those VMs and add some volumes to it – a data volume and a log volume for fedvm-10. First of all we need to create the iSCSI volumes.

volume-attach

 

Then we can attach the volumes to the VM instance ….

attach-volume

We now have the two volumes attached to the VM ….

volume-attachment-list

The two volumes should show up as virtual disks under /dev in the VM itself. We can verify this by logging into the VM directly using the private key I created earlier as part of the key-pair assigned to this series of instances.

# ssh -i ./fedora-kp.pem fedora@10.68.56.29
Last login: Thu Apr 7 21:28:21 2016 from 10.68.64.172
[fedora@fedvm-10 ~]$ 

[fedora@fedvm-10 ~]$ sudo fdisk -l
Disk /dev/sda: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x6e3892a8

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 6291455 6289408 3G 83 Linux


Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

So from here, we can format the newly assigned disks and mount them as needed.
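
A minimal sketch of that final step, assuming /dev/sdb is the data volume and /dev/sdc the log volume (as in the fdisk output above; the filesystem and mount points are illustrative):

sudo mkfs.ext4 /dev/sdb           # format the 10 GiB data volume
sudo mkfs.ext4 /dev/sdc           # format the 50 GiB log volume
sudo mkdir -p /data /logs
sudo mount /dev/sdb /data
sudo mount /dev/sdc /logs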

That’s it for this post. Hopefully this series has gone a little way towards clarifying how a Nutanix cluster can be used to scale out an Openstack deployment to form a highly available on-premises cloud, the deployment of which is radically simplified by using Nutanix as the Compute, Volume, Image and Network backend.

In future posts I intend to look at deploying an upstream Openstack controller, have a play around with snapshots within Openstack and their use as images. Also, some additional troubleshooting perhaps. Let me know what you find useful.

Openstack + Nutanix: Glance Image Service

This post will cover the retrieval of base or cloud OS images via the Openstack Glance image service and how the Acropolis driver interacts with Glance and maintains the image data on the Nutanix Distributed Storage Fabric (DSF).

From the Openstack documentation:

  • The Glance image service includes discovering, registering and retrieving virtual machine images
  • It has a RESTful API that allows querying of image metadata as well as retrieval of the actual image
  • It has the ability to copy (or snapshot) a server image and then store it promptly. Stored images can then be used as templates to get new servers up and running quickly, and can also be used to store and catalog unlimited backups.

The Acropolis driver interacts with the Glance service by redirecting an image from the Openstack controller to the Acropolis DSF. Aside from the image metadata (i.e. image configuration details) stored in Glance, the image itself is actually stored on the Nutanix cluster. We do not store any images in the OVM, either in the Glance store or anywhere else within the Openstack controller.

Images are managed in Openstack via System > Images – see screenshot below for example list of available images in an Openstack environment

glance-images

Images in Openstack are mostly retrieved via an HTTP URL, though file upload also works. The image creation workflow in the screenshot below shows a Fedora 23 Cloud image in QCOW2 format being retrieved. I have left the respective “Minimum Disk” (size) and “Minimum RAM” fields blank, so that no minimum is set for either.

image-create

You can confirm the images are loaded into the Nutanix cluster backend by viewing the Image Configuration menu in Prism. In my case, the images in Prism are stored on a specific container.

prism-images

Similarly, Prism will report the progress of the Image upload to the cluster through the event and progress monitoring facility on the main menu bar.

prism-images-tasks

If all you really need is a quick demo, Openstack suggests the following OS image for test purposes. Use it simply to test and demonstrate basic Glance functionality via the command line; it works exactly the same via the Horizon GUI.

[root@nx-ovm ~]# source keystonerc_admin
[root@nx-ovm ~(keystone_admin)]# glance image-create --name cirros-0.3.2-x86_64 \
--is-public true --container-format bare --disk-format qcow2 \
--copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2016-04-05T10:31:53.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | f51ab65b-b7a5-4da1-92d9-8f0042af8762 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.2-x86_64                  | 
| owner            | 529638a186034e5daa11dd831cd1c863     |
| protected        | False                                |
| size             | 0                                    |
| status           | queued                               |
| updated_at       | 2016-04-05T10:31:53.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

This is then reflected in the glance image list:

[root@nx-ovm ~(keystone_admin)]# glance image-list
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                       | Disk Format | Container Format | Size       | Status |
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+
| 44b4c9ab-b436-4b0c-ac8d-97acbabbbe60 | CentOS 7 x86_84            | qcow2       | bare             | 8589934592 | active |
| 033f24a3-b709-460a-ab01-f54e87e0e25b | cirros-0.3.2-x86_64        | qcow2       | bare             | 41126400   | active |
| f9b455b2-6fba-46d2-84d4-bb5cfceacdc7 | Fedora 23 Cloud            | qcow2       | bare             | 234363392  | active |
| 13992521-f555-4e6b-852b-20c385648947 | Ubuntu 14.04 - Cloud Image | qcow2       | bare             | 2361393152 | active |
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+

One other thing to be aware of: all network, image, instance and volume manipulation should only be done via the Openstack dashboard. Openstack elements created this way cannot subsequently be changed or edited with the Acropolis Prism GUI – the two management interfaces are independent of one another. In fact, the Openstack Services VM (OVM) was intentionally designed to be completely stateless, though this could obviously change in future product iterations if a different approach were deemed a better solution going forward.

I have included the Openstack docs URL below, with additional image locations for anyone wanting to pull images of their own to work with. It is an excellent reference for potential cloud instance images, covering both Linux distros and Windows:

http://docs.openstack.org/image-guide/

Next up, we will have a look at using the Acropolis Cinder plugin for Block Storage and the Nova Compute service integration.