The Creator Supreme! …or How to be the Kenny Dalglish of Nutanix API automation

For the non-European football fans out there, Kenny Dalglish, or “King Kenny” as he was known to both the Liverpool and Celtic faithful, was once described in a match commentary as “the creator supreme”. In this short series of posts covering the REST capabilities for managing the Nutanix Enterprise Cloud, I hope to show that it’s possible for us all to be a “creator supreme”, just like King Kenny!

Swagger

The first place to look when working with the API is the REST API Explorer itself. You invoke the Explorer page by right-clicking the admin pull-down menu (top right of Prism home page) and selecting the REST API Explorer option.

It’s good practice to create a separate user login for the REST API Explorer. The Explorer is essentially the documentation for the API: it’s generated via Swagger, an open-source framework that takes the API specification and produces interactive documentation, so you can browse the docs and test API calls directly from a browser.

Images

Let’s start by taking a look at a simple POST method that will list all currently available images:

POST /images/list

Select the above method in the images section of the REST API Explorer to expand the method details:

In order to try out the REST method, double-click the Model Schema box (right-hand side above); its contents are then populated into the get_entities_request box on the left-hand side. You can then edit the entities to control what information is retrieved. For example, here’s the bare minimum JSON payload needed to request information from the Images catalogue:

 {
   "kind": "image",
   "offset": 0,
   "length": 10
 }

Note the pagination here: an offset of zero means we start at the first image, and the length parameter of 10 means up to ten images are returned. With the JSON payload entered as above we can press the Try it out! button and see the method in action.
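
As an aside, paging through a larger image catalogue is just a matter of moving the offset on by the page length each time; a hypothetical request for the next ten images would use a payload like this:

 {
   "kind": "image",
   "offset": 10,
   "length": 10
 }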

The results of the method call are displayed below it. The curl syntax for invoking the method with its JSON payload is shown, along with the individual Request URL and the Response Body. We can use the curl syntax to call the method programmatically outside of the Explorer, in Bash or Python for example.

Once we begin to use the methods independently of the Explorer, then in addition to curl it’s worth installing a JSON command-line processor like jq, and using a JSON linter to validate the JSON syntax of your data payloads. How these tools can be used is shown throughout this post.
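
For example, a quick way to catch a malformed payload before it ever reaches the API is simply to parse it with jq. This is just a sketch, assuming the list_images_v3.json payload file used later in this post:

jq . list_images_v3.json > /dev/null && echo "JSON OK" || echo "JSON invalid"

jq exits with a non-zero status on a syntax error, so the same check can be used to gate a curl call inside a script.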

CURL

Let’s recap the previous POST method, but this time run it from the command line. In this instance we load the JSON payload (see above) from the file list_images_v3.json using the -d option. The -k option, or --insecure, allows the command to proceed even though I am using self-signed SSL/TLS certs. The -s option simply disables all progress indicators.

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.XX.XX.60:9440/api/nutanix/v3/images/list" | jq

Piping the output from the curl command into jq provides formatted, syntax-highlighted output that’s easier to read. To make this more obvious, let’s use some additional options on the jq command line and pull out just one image reference:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name=="CentOS7-x86_64-Generic Cloud")'

 {
   "status": {
     "state": "COMPLETE",
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "retrieval_uri_list": [
         "https://127.0.0.1:9440/api/nutanix/v3/images//file"
       ],
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64",
       "size_bytes": 8589934592
     },
     "description": "Generic Cloud"
   },
   "spec": {
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64"
     },
     "description": "Generic Cloud"
   },
   "metadata": {
     "last_update_time": "2019-03-27T10:47:15Z",
     "kind": "image",
     "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
     "spec_version": 0,
     "creation_time": "2019-03-27T10:47:15Z",
     "categories": {}
   }
 }

All well and good if you know the exact name of your image. What if you don’t? See below:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name | . and contains("CentOS"))'

{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 713031680
    },
    "description": "Minimal"
  },
  "spec": {
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Minimal"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "dd482003-99f4-45df-9406-1dc9859418c4",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}
{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 8589934592
    },
    "description": "Generic Cloud"
  },
  "spec": {
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Generic Cloud"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}
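
As an aside, jq also supports regular expressions via its test filter, so a case-insensitive search is only a small (hypothetical) variation on the command above:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select(.spec.name // "" | test("centos"; "i"))'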

One of the prime uses for this kind of command is to retrieve only the information required to populate the schema for another REST method (as we’ll see shortly). For example, you may only want a subset of fields, and perhaps they need to be conveniently labelled:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | {name: .spec.name, type: .spec.resources.image_type, uuid: .metadata.uuid} | select (.name | . and contains("CentOS"))'
{
  "name": "CentOS7-x86_64-Minimal",
  "type": "ISO_IMAGE",
  "uuid": "dd482003-99f4-45df-9406-1dc9859418c4"
}
{
  "name": "CentOS7-x86_64-Generic Cloud",
  "type": "DISK_IMAGE",
  "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef"
}
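
Taking this a step further, jq’s -r (raw output) flag makes it easy to capture a single value, such as an image UUID, into a shell variable ready for use in a later method call. A minimal sketch, reusing the CentOS7-x86_64-Minimal image from the output above:

IMAGE_UUID=$(curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq -r '.entities[] | select(.spec.name=="CentOS7-x86_64-Minimal") | .metadata.uuid')
echo ${IMAGE_UUID}

That variable can then be dropped straight into the URL of a DELETE call, or substituted into the JSON payload of another method, without any copy and paste.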

Upload

Let’s have a look at uploading an image to the image repository on your Prism Central instance. The following is the required schema:

cat upload_image_v3.json
{
     "spec": {
         "name": "test",
         "resources": {
             "version": {
                 "product_version": "test",
                 "product_name": "test"
             },
             "architecture": "X86_64",
             "image_type": "DISK_IMAGE",
             "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
         }
     },
     "api_version": "3.1.0",
     "metadata": {
         "kind": "image"
     }
 }

which we can use as follows:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @upload_image_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images" | jq .
 {
   "status": {
     "state": "PENDING",
     "execution_context": {
       "task_uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"
     }
   },
   "spec": {
     "name": "test",
     "resources": {
       "image_type": "DISK_IMAGE",
       "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img",
       "version": {
         "product_version": "test",
         "product_name": "test"
       },
       "architecture": "X86_64"
     }
   },
   "api_version": "3.1",
   "metadata": {
     "owner_reference": {
       "kind": "user",
       "uuid": "00000000-0000-0000-0000-000000000000",
       "name": "admin"
     },
     "kind": "image",
     "spec_version": 0,
     "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
   }
 }

Tasks

Notice the PENDING status in the output above. We can follow the progress of the image upload by using the task_uuid entry in a tasks method call. The call can be run repeatedly until the task, in this case the upload, is complete; a scripted version of that polling is sketched after the example output below.

curl -s --user apiuser:<password> -k -X GET --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/tasks/f1456be3-21a8-45ab-9dc3-c323973e6f3f" | jq .
 {
   "status": "RUNNING",
   "last_update_time": "2019-05-30T13:19:00Z",
   "logical_timestamp": 1,
   "entity_reference_list": [
     {
       "kind": "image",
       "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
     }
   ],
   "start_time": "2019-05-30T13:19:00Z",
   "creation_time": "2019-05-30T13:18:59Z",
   "start_time_usecs": 1559222340033005,
   "cluster_reference": {
     "kind": "cluster",
     "uuid": "e0cca748-66c4-45fb-95e2-10836439ea15"
   },
   "subtask_reference_list": [],
   "progress_message": "create_image_intentful",
   "creation_time_usecs": 1559222339906023,
   "operation_type": "create_image_intentful",
   "percentage_complete": 0,
   "api_version": "3.1",
   "uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"

Delete

Finally, let’s delete the image. This is done by specifying the image UUID in the DELETE method call. We covered how to get the UUID for an image (or any entity, really) above, so let’s just show the call:

curl -s --user apiuser:<password> -k -X DELETE --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/images/081e562f-6c26-4897-bc36-a74e4843bb57" | jq .
 {
   "status": {
     "state": "DELETE_PENDING",
     "execution_context": {
       "task_uuid": "b9ae5dce-c79d-4bca-b77d-b322949f71e5"
     }
   },
   "spec": "",
   "api_version": "3.1",
   "metadata": {
     "kind": "image"
   }
 }

You can of course track the deletion progress via the tasks method using the task_uuid produced above.
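
For example, pulling out just the top-level status field gives a quick way to check whether the delete has finished (the same polling sketch shown in the Tasks section applies here too):

curl -s --user apiuser:<password> -k --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/tasks/b9ae5dce-c79d-4bca-b77d-b322949f71e5" | jq -r '.status'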

Conclusion

Useful Resources recap:
HTTP Response status codes 
jq
curl
JSON Lint - The JSON Validator

Hopefully this will help get people started on their API path. We haven’t really scratched the surface of what can be done, but this post should at least have demystified where and how to make a start. In subsequent posts I hope to show more ways to glean information from the API Explorer itself and how to use it to build more complex REST methods. Until then, check out the Nutanix Developer Community site. Good luck, creators!

Nutanix: Cloud-like DevOps powering NoSQL for BigData

The popularity of NoSQL has increasingly come about as developers want to use the same in-memory data structures in their applications and have them map directly into a database persistence layer. Data stored in XML or JSON format, for example, is often hierarchical and does not necessarily lend itself to being stored in row-based tables; it becomes more complicated still if the data also contains lists and nested objects. Not having to convert these in-memory structures into relational database structures is a major advantage in terms of time to value. Such considerations have been made all the more acute by the rise of the web as a platform for services. There’s also an economic aspect, such as the prohibitive infrastructure costs required to scale up a traditional RDBMS to support high availability. Compare this to Web-Scale or cloud-aware applications like NoSQL databases, which expect to “just drop in” commodity hardware at the infrastructure layer and scale out horizontally on demand.

So consider the requirements of a modern hyper-converged infrastructure (HCI) platform that employs the same Web-Scale paradigms as modern cloud-aware applications. To deploy an app such as a NoSQL database, the first thing I would want to do is virtualise: a right-sized, sandboxed environment (i.e. a virtual machine) in which to run each NoSQL instance. If there is a need to scale up, it’s a simple case of increasing RAM and CPU. As the application landscape grows over time and starts to scale out, there is an increasing need for more nodes/VMs, so any HCI platform needs cloud-like provisioning of nodes, providing faster time to deploy and time to value. The ability to auto-discover and add new nodes at the click of a button is quite compelling. In short, horizontal scale-out needs to be easy to undertake. Say, in the middle of the production day, while running the month-end workload?

Intelligent, automated data tiering, locality and balancing via post-process techniques like MapReduce is another key requirement. Any database working set grows over time: more users mean more queries, new tables, indexes, aggregations and so on. The ability to maintain a responsive I/O profile via SSD, as more I/O is periodically pulled up from disk, will therefore be key. If all VMs are then able to get local access to their data via SSD from a global storage fabric, so much the better. While we are here, consider how you would migrate to a newer hardware fleet with and without a distributed storage fabric. It is far easier to just drop in units of converged compute/storage and then migrate VMs onto them; compare how that would work with a large white-box server estate spread across numerous racks in a DC. There is yet another economic aspect to all this, in that auto-tiering of the storage layer means the current “working set” data is held at the most performant (and by comparison more expensive) layer, while colder data sits on cheaper spinning disk.

Another advantage of a distributed storage fabric is its data services. Take point-in-time (PIT) backups of sharded databases, which can sometimes be a complicated issue; a data service that supports VM-centric snapshots of key VMs in a consistency group removes another potential pain point. Rapid cloning of preconfigured VMs likewise improves deployment times and speaks to the DevOps workflows that many IT shops have increasingly adopted. Consider how easy it might be to create dev/QA environments with production-style data using such mechanisms. What about burst workloads? The ability to migrate VMs between public and private cloud would bring further benefits, both as a means of providing offsite backups and of moving VMs between geographies.

Bear in mind there isn’t 20+ years of ecosystem software (or even tribal knowledge, perhaps?) in the NoSQL community, unlike with traditional RDBMS. For this reason continual monitoring is a major requirement: the ability to provide a floor-to-ceiling overview of VMs, hypervisor and hardware platform in terms of performance, alerts and events is paramount. We mentioned briefly above how working-set size and I/O throughput can affect the end-user experience, so the ability to predict trends in that behaviour means timely decisions can be made about when to scale or shard an application. No discussion of DevOps processes is complete without REST API and/or PowerShell automation capabilities. Automation is key to workflow agility, allowing routine tasks to be performed repeatedly with a well-understood outcome. Dev/QA environments benefit greatly from the features already described, and via the API developers can build self-service portal software that lets them spin up new environments in a matter of minutes.

In previous roles I worked with customers running UNIX-based failover clusters protecting traditional SQL RDBMS and ERP software: think Solaris and Sun Cluster underpinning Oracle and SAP installs. While running this kind of ‘Big Iron’ was considered ‘state of the art’, coming up fast on the inside was ‘Big Data’, and with it a complete rethink of how to achieve massive scale. Traditionally, systems had scaled vertically by adding more CPU and RAM to the host platform, and horizontally by adding system boards to a midframe chassis. This came at a price, and often a staggering level of administrative complexity. While Web-Scale technologies may not have completely replaced this approach yet, in my opinion large-scale big iron systems will continue to become more niche as time goes on.

So, coming back to the beginning of this post: HCI is not only about scaling to support Big Data workloads, it’s also about lowering time to value and creating radical ease-of-use synergies with the application that sits on top of the stack. Having an HCI platform designed from the ground up with the same underlying principles as modern Web-Scale applications means we can remove the operational delays and complexity that tend to act as drag anchors in today’s rapid-deployment environments. IT departments are then free to focus on innovations that help the business succeed.