Using Nutanix clones to deploy MongoDB replica set

In this post I am going to look at setting up a replica set to support high availability in a MongoDB environment. A replica set contains a primary MongoDB database and a number of secondary replica databases. Any eligible secondary can become primary in the event that the original primary fails. Replica set membership is usually an odd number so that primary elections cannot be tied.

Building out an HA MongoDB setup on Nutanix is relatively easy to do. Each MongoDB instance is hosted in a separate, sandboxed environment, in our case a virtual machine (VM), and each VM is located on a separate physical hypervisor host. I have a gold image VM with a MongoDB instance installed according to recommended best-practice guidelines. This VM gets cloned as required whenever I need to build out a new MongoDB environment, so for a three-member replica set I need three clones.

From a cluster CVM node, type:

$ acli 

<acropolis> vm.clone mongodb01,mongodb02,mongodb03 clone_from_vm=mongodb30-gold
mongodb01: complete
mongodb02: complete
mongodb03: complete
 
<acropolis> vm.list
...
mongodb01: 2b9498c1-502e-454e-93c8-931a45a321b6
mongodb02: 9a445d26-caf9-4ddf-9d8e-296ea8b6e19e
mongodb03: 9a5512fa-3d19-4ddc-8cac-11721f999459
...

<acropolis> vm.on mongodb01,mongodb02,mongodb03
mongodb01: complete
mongodb02: complete
mongodb03: complete

After powering on the VMs, check that mongod starts correctly on the default port 27017 on each VM. The first thing to verify is that the mongod process is listening on the correct address. My VMs use DHCP, so this is the address that the service needs to listen on.

# ip a

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 52:54:00:db:17:76 brd ff:ff:ff:ff:ff:ff
 inet 10.68.64.111/24 brd 10.68.64.255 scope global eth0


# grep -i bind_ip /etc/mongod.conf
 bind_ip=127.0.0.1,10.68.64.111

# service mongod restart
# service mongod status
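
A quick way to confirm that mongod is bound to both the loopback and the DHCP address (output abridged and illustrative; the PID will differ):

# netstat -tlnp | grep 27017
tcp   0   0 127.0.0.1:27017      0.0.0.0:*   LISTEN   1532/mongod
tcp   0   0 10.68.64.111:27017   0.0.0.0:*   LISTEN   1532/mongod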

Once all of the VMs are up and running on their respective address:port pairs, enable firewall access via iptables. Each VM that will form part of the replica set needs to allow access from the other members on the mongod port 27017. So for a replica set with members 10.68.64.111, 10.68.64.113 and 10.68.64.114, run the following on each member, in this example on 10.68.64.111:

# iptables -A INPUT -s 10.68.64.113 -p tcp --destination-port 27017 -m state \
--state NEW,ESTABLISHED -j ACCEPT
# iptables -A INPUT -s 10.68.64.114 -p tcp --destination-port 27017 -m state \
--state NEW,ESTABLISHED -j ACCEPT

# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
# service iptables reload

Abridged iptables -L output after the above changes:

Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 10.68.64.113 anywhere tcp dpt:27017 state NEW,ESTABLISHED
ACCEPT tcp -- 10.68.64.114 anywhere tcp dpt:27017 state NEW,ESTABLISHED
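
The same pair of rules is needed on every member, each time naming the other two members’ addresses. A small loop cuts down on typing; a sketch for the 10.68.64.111 member (adjust the peer list on each VM):

# for peer in 10.68.64.113 10.68.64.114; do
>   iptables -A INPUT -s $peer -p tcp --destination-port 27017 \
>     -m state --state NEW,ESTABLISHED -j ACCEPT
> done
# service iptables save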

Check access by performing a series of bi-directional tests between all the replica set members:

<10.68.64.111>$ mongo --host 10.68.64.113 --port 27017
MongoDB shell version: 3.0.3
connecting to: 10.68.64.113:27017/test
>
> quit()

Should any of the connection tests fail, revisit the iptables entries. The usual troubleshooting tools apply: telnet or nc, netstat, etc.
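
For example, a quick port probe from one member to another (exact nc output varies between builds):

<10.68.64.111>$ nc -zv 10.68.64.114 27017
<10.68.64.111>$ telnet 10.68.64.113 27017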

In order to create the replica set, connect via ssh to each VM and edit /etc/mongod.conf to enable the replSet option:

$ grep -i replSet /etc/mongod.conf
replSet=rs01
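
Rather than editing each file by hand, the same setting can be pushed to all three members in one pass; a sketch, assuming password-less ssh and sudo access to each VM:

$ for h in 10.68.64.111 10.68.64.113 10.68.64.114; do
>   ssh $h "echo 'replSet=rs01' | sudo tee -a /etc/mongod.conf"
> done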

Restart the mongod process (sudo service mongod restart) on each VM, then start a mongo shell session on the first member of the set (the intended primary) and run:

$ mongo
MongoDB shell version: 3.0.3
connecting to: test
> rs.initiate()
{
 "info2" : "no configuration explicitly specified -- making one",
 "me" : "10.68.64.111:27017",
 "ok" : 1
}
rs01:PRIMARY>

You can use the shell commands rs.conf() and rs.status() to check the replica set at any point. We’ll look at one of these outputs after completing the replica set creation. Next, from the same mongo shell session, add the other two replica nodes:

rs01:PRIMARY> rs.add("10.68.64.113")
{ "ok" : 1 }

rs01:PRIMARY> rs.add("10.68.64.114")
{ "ok" : 1 }

Potential error scenarios

  •  If you cloned the VMs for the replica set not from a blank gold image but from a VM already running a replicated MongoDB configuration, the replication commands report errors similar to this:
{
 "info2" : "no configuration explicitly specified -- making one",
 "me" : "10.68.64.111:27017",
 "info" : "try querying local.system.replset to see current configuration",
 "ok" : 0,
 "errmsg" : "already initialized",
 "code" : 23
}

Provided this is a greenfield install, delete the local database files in the data directory and re-run rs.initiate().
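
A sketch of that reset, assuming the default RHEL/CentOS dbpath of /var/lib/mongo (check the dbpath setting in /etc/mongod.conf first):

# service mongod stop
# rm /var/lib/mongo/local.*    # removes the stale local db (local.ns, local.0, ...)
# service mongod start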

  • If the firewall rules are not set correctly, the following error message is thrown:
 "errmsg" : "Quorum check failed because not enough voting nodes responded; 
required 2 but only the following 1 voting nodes responded: 10.68.64.111:27017; 
the following nodes did not respond affirmatively: 
10.68.64.131:27017 failed with Failed attempt to connect to 10.68.64.131:27017; 
couldn't connect to server 10.68.64.131:27017 (10.68.64.131), 
connection attempt failed",

Ensure that the firewall rules allow proper access between the VMs.

  • If replication is not enabled correctly in the mongod configuration file on each host of the replica set, you will see:
"errmsg" : "Quorum check failed because not enough voting nodes responded; 
required 2 but only the following 1 voting nodes responded: 10.68.64.110:27017; 
the following nodes did not respond affirmatively: 
10.68.64.114:27017 failed with not running with --replSet",

Once the replica set configuration is complete, check the setup by running rs.status() or rs.conf() to confirm:

rs01:PRIMARY> rs.conf()
{
    "_id" : "rs01",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "10.68.64.111:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "10.68.64.113:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "10.68.64.114:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}

From the output above we can see the full replica set membership, including each member's function and status: priority settings, whether the replica is hidden from user application queries, whether it is a full mongod instance or an arbiter (there simply to mitigate against tied primary elections), and whether any replica has a delay enabled (used for backup/reporting duties).
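
Those per-member settings can be changed after the fact with rs.reconfig(). As an illustrative sketch (the member index and the one-hour delay are arbitrary), the following would turn the third member into a hidden, delayed replica:

rs01:PRIMARY> cfg = rs.conf()
rs01:PRIMARY> cfg.members[2].priority = 0
rs01:PRIMARY> cfg.members[2].hidden = true
rs01:PRIMARY> cfg.members[2].slaveDelay = 3600
rs01:PRIMARY> rs.reconfig(cfg)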

In an earlier post I showed the mongo shell commands available to calculate the working set for the database. For read-intensive workloads, where your working set is sized to fit the available RAM in the mongod server VMs, a replica set deployment can be used to run MongoDB and support high availability.

That ‘One Click’ upgrade again, in full

One way of demonstrating the concept of ‘Invisible Infrastructure’ is the ability to complete a full system upgrade with minimal service interruption. In this post I will show the “One Click” upgrade facility that’s available on the Nutanix platform. This facility allows the admin to upgrade the Nutanix Operating System (NOS), the hypervisor, any required storage firmware, and the appropriate version of Nutanix Cluster Check (NCC) for the target NOS release.

You can choose to either upload the NOS upgrade tarball or have it automatically downloaded to a landing area. Just check the Enable Automatic Download box. Here I am uploading the software to the platform.

Similar to the NOS version, the hypervisor can also be upgraded to a newer version when available.

You can either run the preupgrade checks standalone, without performing an upgrade, or select to upgrade directly; the same checks are run before the start of the upgrade in any case.

Selecting upgrade will show the progress of the various stages of the upgrade as they occur. CVMs are upgraded sequentially and only one CVM is rebooted at a time; a CVM is always back in the cluster membership before the next CVM is restarted.
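
If you prefer to follow the rolling upgrade from the command line, recent NOS releases also ship status utilities that can be run from any CVM shell (treat the exact command names as an assumption and check the documentation for your release):

nutanix@cvm$ upgrade_status          # progress of the rolling NOS upgrade, per CVM
nutanix@cvm$ host_upgrade_status     # progress of a hypervisor upgrade, per host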

You can choose to upgrade the underlying hypervisor as well at this stage.

As always you can check progress in the Prism main window. Here we see the upgrade process has completed successfully.

Nutanix Prism also shows the individual task information: task stage, the CVM/host involved, time taken, and so on.

The Nutanix platform upgrade takes care of all the intermediate steps and just works, regardless of the size of the cluster. There’s minimal impact and disruption as the upgrade takes place, which enables you to carry out such tasks within normal working hours rather than losing a weekend to the usual rigours of a traditional hardware upgrade cycle.

Webscalin’ – adding Nutanix nodes

Most modern web-scale applications (NoSQL, search, big data, etc.) achieve massive elastic scale through horizontal scale-out techniques. The admins for such apps require the ability to add nodes and storage for the required scale-out without interruption to service. The workflow for adding a node to a Nutanix cluster allows such seamless addition, without any of the complex storage operations such as multipathing, zoning/masking, etc. A node is simply added to the chassis, the autodiscovery service detects the new node, and the user is then simply asked to push a button to complete the process. The following are some screenshots of the prescribed workflow.

After inserting the new node into the chassis slot, connect to the node’s lights-out management (IPMI) web app via a browser (enter the IPMI address) and log in using the ADMIN credentials. You may need to enable Java in your browser and configure Java to allow the IPMI address.

Launch the remote console to access the hypervisor.

Using the ‘Power Control’ drop-down on the menu bar across the top of the frame, power on the node (if needed); otherwise log in and configure network addressing. At this point you can also set up any L2 networking, such as VLAN tagging.

Select ‘Expand Cluster’ from the right drop down menus in the Prism GUI. The node should be auto-discovered.
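
The same auto-discovery can be confirmed from the command line; a quick check, assuming the discover-nodes operation is available in the ncli of your NOS release:

nutanix@cvm$ ncli cluster discover-nodes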

Configure the required network addresses and select ‘Save’ to add the node to the cluster.

The progress of the node addition can be monitored in the Prism GUI. Note that the hypervisor was automatically upgraded in order to maintain the same software functionality across the cluster nodes.

That’s it: once the node is added and the metadata is re-balanced across all the nodes, the new node’s storage (HDD/SSD) is added to the storage pool alongside the rest of the cluster nodes, at which point all containers (datastores) are automatically mounted onto the newly added host and the new host is ready to receive guests! This kind of ease-of-use story is becoming paramount in terms of time to value for many web-scale applications. It’s all well and good having applications on top of NoSQL DBs that allow for rapid development and deployment; however, if the upfront planning for the underlying architecture holds everything back for days if not weeks, then modern DevOps-style operations are much harder to achieve.