
Openstack + Nutanix: Glance Image Service

This post will cover the retrieval of base or cloud OS images via the Openstack Glance image service and how the Acropolis driver interacts with Glance and maintains the image data on the Nutanix Distributed Storage Fabric (DSF).

From the Openstack documentation:

  • The Glance image service includes discovering, registering and retrieving virtual machine images
  • It has a RESTful API that allows querying of image metadata as well as retrieval of the actual image
  • It has the ability to copy (or snapshot) a server image and then store it immediately. Stored images can then be used as templates to get new servers up and running quickly, and can also be used to store and catalog unlimited backups.

The Acropolis driver interacts with the Glance service by redirecting an image from the Openstack controller to the Acropolis DSF. Aside from the image metadata (i.e. image configuration details), which is stored in Glance, the image itself is actually stored on the Nutanix cluster. We do not store any images in the OVM, either in the Glance store or anywhere else within the Openstack controller.

Images are managed in Openstack via System > Images – see the screenshot below for an example list of available images in an Openstack environment.

glance-images

Images in Openstack are typically retrieved via an HTTP URL, though direct file upload also works. The image creation workflow in the screenshot below shows a Fedora 23 Cloud image in QCOW2 format being retrieved. I have left the respective “Minimum Disk” (size) and “Minimum RAM” fields blank, so that no minimum is set for either.

image-create

You can confirm the images are loaded into the Nutanix cluster backend by viewing the Image Configuration menu in Prism. In my case the images are stored on a specific container.

prism-images
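The same check can be made from the command line of a Controller VM via acli. A minimal sketch, assuming the image.list command is available in your NOS/AOS release (output omitted):

<acropolis> image.list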

Similarly, Prism will report the progress of the Image upload to the cluster through the event and progress monitoring facility on the main menu bar.

prism-images-tasks

If all you need is a quick demo, Openstack suggests the following OS image for test purposes. Use it to test and demonstrate basic Glance functionality from the command line; it works exactly the same way if done via the Horizon GUI.

[root@nx-ovm ~]# source keystonerc_admin
[root@nx-ovm ~(keystone_admin)]# glance image-create --name cirros-0.3.2-x86_64 \
--is-public true --container-format bare --disk-format qcow2 \
--copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2016-04-05T10:31:53.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | f51ab65b-b7a5-4da1-92d9-8f0042af8762 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.2-x86_64                  | 
| owner            | 529638a186034e5daa11dd831cd1c863     |
| protected        | False                                |
| size             | 0                                    |
| status           | queued                               |
| updated_at       | 2016-04-05T10:31:53.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

This is then reflected in the glance image list:

[root@nx-ovm ~(keystone_admin)]# glance image-list
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                       | Disk Format | Container Format | Size       | Status |
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+
| 44b4c9ab-b436-4b0c-ac8d-97acbabbbe60 | CentOS 7 x86_84            | qcow2       | bare             | 8589934592 | active |
| 033f24a3-b709-460a-ab01-f54e87e0e25b | cirros-0.3.2-x86_64        | qcow2       | bare             | 41126400   | active |
| f9b455b2-6fba-46d2-84d4-bb5cfceacdc7 | Fedora 23 Cloud            | qcow2       | bare             | 234363392  | active |
| 13992521-f555-4e6b-852b-20c385648947 | Ubuntu 14.04 - Cloud Image | qcow2       | bare             | 2361393152 | active |
+--------------------------------------+----------------------------+-------------+------------------+------------+--------+
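To inspect the metadata of a single image (for example, to confirm that an asynchronous --copy-from has completed and the status has moved from queued to active), you can show it by ID, here using the cirros image ID from the list above (output omitted):

[root@nx-ovm ~(keystone_admin)]# glance image-show 033f24a3-b709-460a-ab01-f54e87e0e25b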

One other thing to be aware of is that all network, image, instance and volume manipulation should only be done via the Openstack dashboard. Openstack elements created this way cannot subsequently be changed or edited via the Acropolis Prism GUI; the two management interfaces are independent of one another. In fact, the Openstack Services VM (OVM) was intentionally designed this way, to be completely stateless. Obviously this could change in future product iterations if that were deemed a better solution going forward.

I have included the Openstack docs URL with additional image locations for anyone wanting to pull images of their own to work with. This is an excellent reference location for potential cloud instance images for both Linux distros and Windows:

http://docs.openstack.org/image-guide/

Next up, we will have a look at using the Acropolis Cinder plugin for Block Storage and the Nova Compute service integration.

Sharded MongoDB config on Nutanix (2) : High Availability

One of the prime availability considerations for any horizontal scale-out application, such as a MongoDB cluster, is how that cluster behaves under a failure event. We have seen (in the MongoDB case) how replica sets are configured with additional secondary instances to handle the failure of a primary instance in a replica set. We also create a mini quorum of configuration database servers and query routers to give redundancy to the cluster “infrastructure”. However, the Nutanix XCP environment provides further protection through certain features of the Acropolis management interface. Your key VMs should be enabled to run under high availability, so that when the underlying hypervisor host fails for any reason, these VMs fail over to another host with sufficient CPU and RAM resources. The screenshot below shows how this (Tech Preview) feature can be enabled (pre NOS 4.5) on a per-VM basis:

enable-HA

The underlying migration functionality is also used for the manual placement of key VMs. As an example, let’s consider the following layout, where two of the configdb VMs in a MongoDB cluster are co-located on the same AHV host:

mongodb-colocated-vms

Notice in the screen capture above that there are two configdb VMs on host “D”. Ideally we want to migrate one of the MongoDB config DBs to another AHV host, so let’s move the VM mongo-configdb02 to AHV host “C”…

mongodb-migrate-VM

Note that the migration process could have automatically chosen an appropriate AHV host to receive the VM. In the above case, however, we have specified the desired host ourselves.

We can monitor the progress and duration of any migration via the VM tasks frame in Prism:

mongodb-vm-tasks-migration

As always, this workflow can also be carried out manually (or scripted) through the acli interface. In this example I am migrating the VM running a query router (mongos process):

<acropolis> vm.migrate mongos01 host=10.68.64.41 
mongos01: complete 
<acropolis>
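To confirm the new placement from the same acli session, the host list and the VM details can be inspected. A minimal sketch, assuming the vm.get and host.list commands behave this way in your NOS release (exact output fields vary):

<acropolis> host.list
<acropolis> vm.get mongos01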

At the time of writing this post, Acropolis Base Software (NOS) 4.5 has been released and this feature has become generally available (GA). It can now be enabled cluster-wide:

ha-enable-menu-4

enable-ha-4

Nutanix customers who require HA functionality for their VMs are strongly encouraged to enable this feature.

In my next post I will complete this short blog series on sharded MongoDB configurations on Nutanix. I intend to cover how Nutanix Acropolis managed snapshots and clones can be employed to create backups and then used to perform a rapid build-out of dev/QA type environments. Stay tuned.


Sharded MongoDB config on Nutanix (1) : Deployment

So far I have posted on MongoDB deployments either as standalone instances or as part of a replica set. This is fine when you can size your VM memory to hold the entire database working set. However, if your VM’s RAM will not accommodate the working set in memory, you will need to shard in order to aggregate RAM from multiple replica sets and form a MongoDB cluster.

Having already discussed using clones of gold image VMs to create the members of a replica set, note that the most basic of MongoDB clusters requires at least two replica sets. On top of these we need a number of MongoDB “infrastructure” VMs that make cluster operation possible: a minimum of three (3) configuration databases (mongod --configsvr) per cluster and around one (1) query router (mongos) for every two shards. Here is the layout of a cluster deployment on my lab system:

2shard-system

In the above lab deployment, for availability reasons, I avoid co-locating any primary replica VM on the same physical host, and likewise any of the query router or configdb VMs. One thing to bear in mind is that sharding is done on a per-collection basis. Simply put, the idea behind sharding is that you split a collection across the replica sets, and by connecting through a mongos process you are routed to the appropriate shard holding the part of the collection that can serve your query. The following commands show the syntax to create one of the three required configdbs (run on three separate VMs, and these need to be started first) and a query router, or mongos process (where we add the IP addresses of each configdb server VM):

Config DB servers – each run as:
mongod --configsvr --dbpath /data/configdb --port 27019

Query Router – run as:
mongos --configdb 10.68.64.142:27019,10.68.64.143:27019,10.68.64.145:27019

The IP addresses in the mongos command line above are the addresses of each config DB server.

This brings up an issue if you are not cloning replica VMs from “blank” gold VMs. If you clone a new replica set from a currently working replica set, each replica set essentially holds a full copy of all your databases and their collections. When you then come to add such a replica set as a shard, you generate the error condition shown below.

Here’s an example of what can happen when you attempt to add a shard and your new replica set (rs02) is simply cloned off a currently running replica set (rs01):

mongos> sh.addShard("rs02/192.168.1.52")
{
 "ok" : 0,
 "errmsg" : "can't add shard rs02/192.168.1.52:27017 because a local database 'ycsb' 
exists in another rs01:rs01/192.168.1.27:27017,192.168.1.32:27017,192.168.1.65:27017"
}

This is the successful workflow for adding both shards (the primary of each replica set) via the mongos router VM:

$ mongo --host localhost --port 27017
MongoDB shell version: 3.0.3
connecting to: localhost:27017/test
mongos>
 
mongos> sh.addShard("rs01/10.68.64.111")
{ "shardAdded" : "rs01", "ok" : 1 }
mongos> sh.addShard("rs02/10.68.64.110")
{ "shardAdded" : "rs02", "ok" : 1 }

We next need to enable sharding on the database and subsequently shard the collection we want to distribute across the available replica sets. The choice of shard key is crucial to future MongoDB cluster performance; issues such as read and write scaling, cardinality, etc. are covered here. For my test cluster I am using the _id field for demonstration purposes.

mongos> sh.enableSharding("ycsb")
{ "ok" : 1 }

mongos> sh.shardCollection("ycsb.usertable", { "_id": 1})
{ "collectionsharded" : "ycsb.usertable", "ok" : 1 }

The balancer process will run for the period of time needed to migrate data between the available shards. This can take anywhere from a number of hours to a number of days, depending on the size of the collection, the number of shards, the current workload, etc. Once complete, however, it results in the following sharding status output. Notice that the “chunks” of the usertable collection held in the ycsb database are now spread across both shards (522 chunks in each shard):

 mongos> sh.status()
--- Sharding Status ---
 sharding version: {
 "_id" : 1,
 "minCompatibleVersion" : 5,
 "currentVersion" : 6,
 "clusterId" : ObjectId("55f96e6c5dfc4a5c6490bea3")
}
 shards:
 { "_id" : "rs01", "host" : "rs01/10.68.64.111:27017,10.68.64.131:27017,10.68.64.144:27017" }
 { "_id" : "rs02", "host" : "rs02/10.68.64.110:27017,10.68.64.114:27017,10.68.64.137:27017" }
 balancer:
 Currently enabled: yes
 Currently running: no
 Failed balancer rounds in last 5 attempts: 0
 Migration Results for the last 24 hours:
 No recent migrations
 databases:
 { "_id" : "admin", "partitioned" : false, "primary" : "config" }
 { "_id" : "enron_mail", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "mydocs", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "sbtest", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "ycsb", "partitioned" : true, "primary" : "rs01" }
 ycsb.usertable
 shard key: { "_id" : 1 }
 chunks:
 rs01 522
 rs02 522
 too many chunks to print, use verbose if you want to force print
 { "_id" : "test", "partitioned" : false, "primary" : "rs02" }


Using Nutanix snapshots to backup MongoDB

In an earlier post I described how Nutanix clones can speed up deployment workflows for MongoDB replica sets. In this post we will cover the Nutanix VM-centric snapshot functionality used to back up a MongoDB database instance. Even when the database is sharded, the ability to snapshot a set of VMs at once (as part of a consistency group) ensures that a point-in-time (PIT) backup can easily be taken.
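As an aside, grouping the relevant VMs into a consistency group is done via a Nutanix protection domain. The sketch below is illustrative only, assuming ncli protection-domain commands of roughly this shape in your NOS release; the protection domain, consistency group and VM names are hypothetical:

ncli> pd create name=mongo-pd
ncli> pd protect name=mongo-pd vm-names=mongodb01,mongodb02,mongodb03 cg-name=mongo-cg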

Let’s first consider the 3-member replica set described in my last post. In order to take an OS-level backup we need to quiesce I/O to one of the secondary members of the replica set. The command sequence below is for the MMAPv1 storage engine only. The db.fsyncLock() command flushes all pending writes and locks the instance against further writes until we run db.fsyncUnlock(). Even though we have journaling enabled, we have separated the I/O for data and journal files onto different volumes (as part of our standard config), hence the need to quiesce I/O to make sure we take a consistent snapshot.

rs01:SECONDARY> db.fsyncLock()
{
 "info" : "now locked against writes, use db.fsyncUnlock() to unlock",
 "seeAlso" : "http://dochub.mongodb.org/core/fsynccommand",
 "ok" : 1
}

Nutanix snapshots work at the VM level. This means we take a snapshot of all the vDisks in the VM hosting a MongoDB secondary at the same time. However, MongoDB can’t guarantee backup consistency if the journal and data are located in separate volumes. Hence, we still need db.fsyncLock()/db.fsyncUnlock().

<acropolis> vm.snapshot_create mongodb03
SnapshotCreate: complete

<acropolis> vm.snapshot_create mongodb03 snapshot_name_list=mongodb03-snap1
SnapshotCreate: complete

<acropolis> vm.snapshot_list mongodb03
Snapshot name Snapshot UUID
mongodb03-snap1 41c77ddc-8cc7-49bf-a250-23d52031b76e
mongodb03_2015-09-08T10:04:00.805780 7cb77f7f-1eb6-4103-8909-e8e50c318151

Above we have taken two snapshots (by way of example) and show the naming conventions returned for both a named snapshot and the default. See the rest of the workflow below…

<acropolis> vm.clone mongodb04 clone_from_snapshot=mongodb03-snap1
mongodb03-backup: complete

<acropolis> vm.restore mongodb03 mongodb03-snap1
VmRestore: complete

<acropolis> snapshot.delete mongodb03-snap1
Delete 1 snapshots? (yes/no) y
Please type 'yes' or 'no': yes
mongodb03-snap1: complete

We can carry out the same workflow via Nutanix Prism. I have taken a screenshot of the available commands (see below) in the VM Snapshots frame. The command sequence is issued via the VM tab in the GUI, and the desired arguments, such as clone and snapshot names, are entered in the resulting popups. For brevity I have not included all the required screenshots.

vm-snapshot-actions

Having taken our snapshot, we can now unlock the replica. Make sure you have kept the session where you called the db.fsyncLock() command open until this point.

rs01:SECONDARY> db.fsyncUnlock()
{ "ok" : 1, "info" : "unlock completed" }
rs01:SECONDARY>

Any clones newly created from the Nutanix VM snapshot can be used to seed a test environment with production-strength data. Bearing in mind that the backup/clone was made from a VM that formed part of a replica set, we would like to be able to start the MongoDB instance in standalone mode. You can do this as follows:

  • Power off the newly created clone (mongodb04)
  • Log in and create a sub-directory in the MongoDB data directory in which to save the local db files:
[root@mongodb04 data]# mkdir local-old
[root@mongodb04 data]# mv local.* local-old/
  • Edit /etc/mongod.conf: comment out the line naming the replica set and make sure bind_ip reflects the IP address assigned to the VM (via DHCP in my case)
[mongod@mongodb04 ~]$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 inet 10.68.64.138/24 brd 10.68.64.255 scope global eth0

[mongod@mongodb04 ~]$ egrep "bind_ip|replSet" /etc/mongod.conf
bind_ip=127.0.0.1,10.68.64.138
#replSet=rs01
  • Start up the mongod instance (now as a standalone) with the newly edited configuration file
  • Drop into a mongo shell. You can then verify the consistency of the copied data (in my example I am using a synthetic database created by the YCSB benchmark software):
[mongod@mongodb04 ~]$ mongo
MongoDB shell version: 3.0.3
connecting to: test
>
> show dbs
enron_mail 3.952GB
local 0.078GB
ycsb 207.853GB
>
> use ycsb
switched to db ycsb
>
> show collections
system.indexes
usertable
>
> db.usertable.findOne()
{
 "_id" : "user6284781860667377211",
 "field5" : BinData(0,"PTo4JCo4JiskPDU2PiIzPikmMycrNSQjJzo3NiEqJjImIyY0PSE5KS43ICM4Liw+MCclLyE1JCA3PDM3OCsoMz0nKD8rISYrKCw8Pik4JT0jIyYuKDkoOy4gJCkoPSMkOywnNA=="),
 "field4" : BinData(0,"PyI/NSIqITEhKzQxPTwlNyIzJy0zLz05IzUvLDkwLi0uNjQwKysnMSQ5Lj8hMDozPDo+JTg/IDw6NTI9PzAkNjIjLTw8OyYvOTsgLTM2IjcvPCk4Ij84Ny45LiYhJC0mKT0qKw=="),
 "field3" : BinData(0,"MTkwLj0sPTYtPTg7OTUmIz0xJSM3OzM+Kyw3PyckPDYnMyotMC84JDIjMzEnITktJi4mMz0rLTkzODkoPCUkMT4pJzEpKiM8JDUlKjUxLiY5PzIxIiM0OCgpLTUpLyQ1JCogLw=="),
 "field2" : BinData(0,"Pz4jNTcnLDgwKS04ISc7Oi01IDkrNDclPTsuMyElOSU/KywvMzY6PDU0Jyo3NyYjKig8NzMrOTs7LiQiMy0hKioiPiMkPTgyLDAyPSoxJS4tOiUyLzE9Pz8rJy0zPjo2PCszNA=="),
 "field9" : BinData(0,"KTknMSs2PyEiMjUiPz4qKiM9MS0iIj4vMyMyNCsjPSUkJyYsJz0nLD0/NSY3MyQ7KCo/OikiNyU3MjQ8MCArLj8wMyM0MiAgJiUzOCc5KjYiJTw8Izw+OzA7NCAiPzA5ISYxOg=="),
 "field8" : BinData(0,"KCgoLTQ4ICE0IDY0ODAyIS86LiYrKzQmIyIpJy0nKz0gICYpMTQsLT86PyAiKjs/Kzw3MSciJDYkJTssLiMsLiUtLzMtOiYpND8nMCQpOzopMyQ1OyYsNy8uPysxNyg1PCQ6MA=="),
 "field7" : BinData(0,"ITM1KiUmKCcnKjA8IT8xPCIjKzc4Ijs+IyYhICMlKCsjPy09MygxKTohPDgjLjIzMzwpPj45Oyg2JTgsLjYlJS0xPCc+IiolKDw0JiQkLyg4NiohID0xPjIyMic9KDgqKD0zKg=="),
 "field6" : BinData(0,"IjovMTotODUxPD0mPikqLyo4IzonKjc+LDwuLCEhICghOzYxJTw5PiAzIzktNjA9OzUqNiovMTszMDMqLDQmNjk8IDA0NjAvPz8mPygnITU+OSY8JCIzLDAgOSEmNyA1KCQ+Jw=="),
 "field1" : BinData(0,"ID0sMjg8JiAiKz4gITYjNCkxLSs+IC8/JTAiNCA8PignMTEjNyogKyU0Lz4lNz0mIzohNCQ8LSwhICw/OT82PCYyPjk2KSElIDo5MSU2Kig9NiUnOjwpLi0vOjIqJz0sKDExOw=="),
 "field0" : BinData(0,"OikgJykuOTchIjUlISghLCkwOis/Pyw4ISo7Jyw6Kz46NTwlODIsKDQqLT0uNi0rOTcsOzA4Lzo3PDcyMjozLiItKSY1JSsoLSIkPio7KC4xOzs7MzY7Oj0nOzU+IyEuJDMwJA==")
}

The VM can now be easily migrated anywhere across the Nutanix cluster using the features of the Nutanix Distributed File System (NDFS). If additional availability is required, we can clone “blank” gold image VMs and form a new replica set. To avoid confusion, the naming convention of the new replica set(s) can reflect the intended use case: dev, test, QA, etc.
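If you do go down that route, the restored node plus two blank clones can be initiated as a new replica set once a replSet name has been put back into each mongod.conf. A minimal sketch using standard MongoDB shell commands; the set name and the second and third IP addresses are hypothetical:

> rs.initiate({ _id: "rs-dev", members: [
   { _id: 0, host: "10.68.64.138:27017" },
   { _id: 1, host: "10.68.64.139:27017" },
   { _id: 2, host: "10.68.64.140:27017" } ] })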

That ‘One Click’ upgrade again, in full

One way of demonstrating the concept of ‘Invisible Infrastructure’ is the ability to complete a full system upgrade with minimal service interruption. In this post I will show the “One Click” upgrade facility that’s available on the Nutanix platform. This facility allows the admin to upgrade the Nutanix Operating System (NOS), the hypervisor, any required storage firmware and the appropriate version of Nutanix Cluster Check (NCC) for the target NOS release.
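It is also common practice to run an NCC health check before starting an upgrade. Assuming NCC is installed on the Controller VMs, the full set of checks can be run from any CVM as follows (a sketch, output omitted):

nutanix@cvm$ ncc health_checks run_all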

You can choose to either upload the NOS upgrade tarball or have it automatically downloaded to a landing area. Just check the Enable Automatic Download box. Here I am uploading the software to the platform.

Similar to the NOS version, the hypervisor can also be upgraded to a newer version when available.

You can either run the pre-upgrade checks standalone, without performing an upgrade, or select upgrade directly, in which case the same checks are run before the upgrade starts.

Selecting upgrade will show the progress of the various stages of the upgrade as they occur. CVMs are upgraded sequentially and only one CVM is rebooted at a time. A CVM is always back in the cluster membership before the next CVM is restarted.

kvm-preupgrade
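The rolling CVM upgrade can also be followed from the command line. Assuming the upgrade_status utility is present on the CVMs in your NOS release, it reports which CVMs have completed the upgrade and which one is currently being worked on (a hedged sketch, output omitted):

nutanix@cvm$ upgrade_status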

You can choose to upgrade the underlying hypervisor as well at this stage.

As always you can check progress in the Prism main window. Here we see the upgrade process has completed successfully.

kvm-upgrade-events

Nutanix Prism also shows the individual task info, i.e. task stage, CVM/host involved, time taken, etc.

The Nutanix platform upgrade takes care of all the intermediate steps and just works, regardless of the size of the cluster. There’s minimal impact and disruption while the upgrade takes place, which enables you to carry out such tasks within normal working hours rather than losing a weekend to the usual rigours of a traditional hardware upgrade cycle.

Webscalin’ – adding Nutanix nodes

Most modern web-scale applications (NoSQL, search, big data, etc.) achieve massive elastic scale through horizontal scale-out techniques. The admins for such apps require the ability to add nodes and storage for the required scale-out without interruption to service. The workflow for adding a node to a Nutanix cluster allows such seamless addition, without any of the complex storage operations such as multipathing, zoning/masking, etc. A node is simply added to the chassis, the autodiscovery service detects the new node, and the user is then asked to push a button to complete the process. The following are some screenshots of the prescribed workflow…

After inserting the new node into the chassis slot, connect to the node’s lights-out management or IPMI web app via a browser (enter the IPMI address) and log in using the ADMIN credentials. You may need to enable Java in your browser and configure Java to allow the IPMI address.

Launch the remote console to access the hypervisor.

Using the ‘Power Control’ drop-down on the menu bar across the top of the frame, power on the node (if needed), then log in and configure network addressing. At this point you can also set up any L2 networking such as VLAN tagging.

Select ‘Expand Cluster’ from the right drop down menus in the Prism GUI. The node should be auto-discovered.

Configure the required network addresses and select ‘Save’ to add the node to the cluster.
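Discovery can also be confirmed from the command line of an existing Controller VM. A hedged sketch, assuming the cluster discover-nodes command is available in your ncli version (output omitted):

nutanix@cvm$ ncli cluster discover-nodes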

The progress of the node addition can be monitored in the Prism GUI. Note that the hypervisor was automatically upgraded in order to maintain the same software functionality across the cluster nodes.

That’s it. Once the node is added and the metadata is rebalanced across all the nodes, the new node’s storage (HDD/SSD) is added to the storage pool alongside the rest of the cluster nodes. At that point all containers (datastores) are automatically mounted on the newly added host, and the new host is ready to receive guests! This kind of ease-of-use story is becoming paramount in terms of time to value for many web-scale applications. It’s all well and good having applications on top of NoSQL DBs that allow for rapid development and deployment; however, if the upfront planning for the underlying architecture holds everything back for days if not weeks, then modern DevOps-style operations are much harder to achieve.

Switch to Simplicity …

With the recent announcement by Nutanix of the Xtreme Computing Platform (XCP), built on a KVM-based hypervisor and the Acropolis management solution, I thought I would use this step change in technology as the basis for my inaugural blog! What I would like to highlight is how much simpler this has made deploying applications in virtual machines, particularly on a KVM platform. As most of us that have had some exposure to KVM know, KVM is in fact the amalgamation of three distinct open source projects. These are:

QEMU (Quick Emulator): an emulator and virtualizer for Linux. KVM uses QEMU for machine and device emulation, while guest CPU instructions execute directly on the host CPU (via the KVM kernel modules) to achieve near-native performance.

KVM kernel modules: loadable kernel components which provide the in-kernel virtualization infrastructure. Specifically, kvm.ko provides the core virtualization support, and a processor-specific module (kvm-intel.ko or kvm-amd.ko) drives the hardware virtualization extensions; QEMU interacts with these modules to run guests.

libvirt: An API for the management of virtualization environments

Let’s take a look at how a VM is created using the Nutanix Prism GUI…

Selecting the Network Create box in the VM tab: we are assigning a VLAN tag (64) and leaving the network externally managed – i.e. the current (external to Nutanix) network infrastructure manages the network (DHCP, etc.)

Next select +VM Create and fill out the details as required above. We will add a NIC, a boot Disk and attach the CDROM image in the next steps.

Add a NIC from the previously created L2 network (VLAN 64)

Attach the CDROM image by selecting CLONE FROM NDFS FILE and specifying the path to the image. Images are stored on an NFS container created specifically for that purpose.

Add Disk – create a 100GB vDisk to act as the permanent boot disk that will be stored on DEFAULT-CTR.

Power on the VM and launch the console from the Prism GUI. The VM should power on and install.

The finished product (remember to “eject” the cdrom) …

Next, I am going to step through the manual creation of a VM using the standard APIs and show how that complexity has been abstracted away by doing things the Nutanix way. First off, we are going to need a virtual disk image:

$ qemu-img create -f qcow2 libvirt-example.qcow 4G

Formatting 'libvirt-example.qcow', fmt=qcow2 size=4294967296 encryption=off cluster_size=65536 lazy_refcounts=off

Here’s the syntax to create a very basic VM using the libvirt API (via virt-install). I am specifying the cdrom image, the virtual disk location, a name for the VM and the connection to the local libvirt instance:

$ sudo virt-install \
--cdrom=/var/lib/libvirt/images/ttylinux-virtio_x86_64-16.1.iso \
--disk=/var/lib/libvirt/images/libvirt-example.qcow,format=qcow2 \
--name=libvirt-example --ram=512 --connect qemu:///system

You can obtain the above ttylinux image here. Note also that libvirt has created a default network for the VM:

$ sudo virsh net-list --all
Name                 State     Autostart   Persistent
------------------------------------------------------
default              active    yes         yes
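At this point the new domain can be managed with the usual virsh commands, for example to list, start and (where the guest has a serial console configured) connect to it:

$ sudo virsh list --all
$ sudo virsh start libvirt-example
$ sudo virsh console libvirt-example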

Next, we can create another VM, but this time using the QEMU interface directly. In this example we create a VNC endpoint to connect to the VM after start-up:

sudo qemu-system-x86_64 -enable-kvm -name qemu-example \
-m 1G -hda /var/lib/libvirt/images/qemu-example.qcow2 \
-cdrom /var/lib/libvirt/images/ttylinux-virtio_x86_64-16.1.iso \
-vnc 127.0.0.1:1
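The -vnc 127.0.0.1:1 option places the guest’s display on VNC display :1, which maps to TCP port 5901 on the loopback interface. Assuming a VNC client such as TigerVNC’s vncviewer is installed, you can attach to it with:

$ vncviewer 127.0.0.1:5901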

These images can of course be managed with utilities such as virt-manager, virt-viewer, etc. Equally, I have not shown the full complexity of the command-line options exposed by the standard KVM APIs. I have shown, though, how the Nutanix software simplifies and abstracts away the complexity of these APIs, which most provisioning and orchestration stacks have to deal with. The Nutanix platform does provide a management API and a command-line syntax to build out your VMs, but I will leave that for another post in the future. Thanks for reading.