
Sharded MongoDB config on Nutanix (1) : Deployment

So far I have posted on MongoDB deployments either as standalone instances or as part of a replica set. This is fine when you can size your VM's memory to hold the entire database working set. However, if your VM's RAM cannot accommodate the working set, you will need to shard, aggregating the RAM of multiple replica sets to form a MongoDB cluster.

Having already discussed using clones of gold image VMs to create the members of a replica set, note that the most basic MongoDB cluster requires at least two replica sets. On top of these we need a number of MongoDB "infrastructure" VMs that make cluster operation possible: a minimum of three (3) Configuration Databases (mongod --configsvr) per cluster and around one (1) Query Router (mongos) for every two shards. Here is the layout of a cluster deployment on my lab system:

[Figure: two-shard MongoDB cluster layout on the lab system]

In the above lab deployment, for availability reasons, I avoid co-locating any primary replica VM on the same physical host, and likewise any of the Query Router or ConfigDB VMs. One thing to bear in mind is that sharding is done on a per-collection basis. Simply put, the idea behind sharding is that you split a collection across the replica sets, and by connecting through a mongos process you are routed to the appropriate shard holding the part of the collection that can serve your query. The following commands show the syntax used to start the three required config DB servers (run on three separate VMs, and started first) and a Query Router, or mongos process (whose command line lists the IP address of each config DB server VM):

Config DB Servers – each run as:
mongod --configsvr --dbpath /data/configdb --port 27019

Query Router - run as:
mongos --configdb 10.68.64.142:27019,10.68.64.143:27019,10.68.64.145:27019

- the IP addresses on the mongos command line above are those of the three config DB servers.
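
Each replica set itself is assumed to be already up and initiated, as covered in the earlier replica set posts. For completeness, a minimal initiation sketch for the first shard, run once from a mongo shell connected to one of its members (the member addresses used here are the rs01 hosts that appear in the sh.status() output further down):

> rs.initiate({
...   _id: "rs01",
...   members: [
...     { _id: 0, host: "10.68.64.111:27017" },
...     { _id: 1, host: "10.68.64.131:27017" },
...     { _id: 2, host: "10.68.64.144:27017" }
...   ]
... })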

This brings up an issue if you are not cloning replica VMs from "blank" gold VMs. If you clone a new replica set from a currently working replica set, so that each replica set essentially holds a full copy of all your databases and their collections, then when you come to add that replica set as a shard you generate the error condition shown below.

Here is an example of what can happen when you attempt to add a shard whose replica set (rs02) is simply a clone of a currently running replica set (rs01):

mongos> sh.addShard("rs02/192.168.1.52")
{
 "ok" : 0,
 "errmsg" : "can't add shard rs02/192.168.1.52:27017 because a local database 'ycsb' 
exists in another rs01:rs01/192.168.1.27:27017,192.168.1.32:27017,192.168.1.65:27017"
}
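
One way to clear this state, assuming the data on the newly cloned replica set is disposable, is to drop the offending database(s) on the rs02 primary and then retry the addShard; a sketch (starting the new replica set from a blank gold image avoids the problem altogether):

rs02:PRIMARY> use ycsb
switched to db ycsb
rs02:PRIMARY> db.dropDatabase()
{ "dropped" : "ycsb", "ok" : 1 }

Repeat for any other databases named in the error, then re-run sh.addShard() from the mongos.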

This is the successful workflow adding both shards (the primary of each replica set) via the mongos router VM:

$ mongo --host localhost --port 27017
MongoDB shell version: 3.0.3
connecting to: localhost:27017/test
mongos>
 
mongos> sh.addShard("rs01/10.68.64.111")
{ "shardAdded" : "rs01", "ok" : 1 }
mongos> sh.addShard("rs02/10.68.64.110")
{ "shardAdded" : "rs02", "ok" : 1 }

We next need to enable sharding on the database and then shard the collection we want to distribute across the available replica sets. The choice of shard key is crucial to future MongoDB cluster performance; issues such as read and write scaling, cardinality, etc. are covered in the MongoDB shard key documentation. For my test cluster I am using the _id field purely for demonstration purposes.

mongos> sh.enableSharding("ycsb")
{ "ok" : 1 }

mongos> sh.shardCollection("ycsb.usertable", { "_id": 1})
{ "collectionsharded" : "ycsb.usertable", "ok" : 1 }

The balancer process will run for however long is needed to migrate data between the available shards. This can take anywhere from a number of hours to a number of days depending on the size of the collection, the number of shards, the current workload, etc. Once complete, it results in the following sharding status output. Notice that the "chunks" of the usertable collection held in the ycsb database are now split across both shards (522 chunks in each shard):

 mongos> sh.status()
--- Sharding Status ---
 sharding version: {
 "_id" : 1,
 "minCompatibleVersion" : 5,
 "currentVersion" : 6,
 "clusterId" : ObjectId("55f96e6c5dfc4a5c6490bea3")
}
 shards:
 { "_id" : "rs01", "host" : "rs01/10.68.64.111:27017,10.68.64.131:27017,10.68.64.144:27017" }
 { "_id" : "rs02", "host" : "rs02/10.68.64.110:27017,10.68.64.114:27017,10.68.64.137:27017" }
 balancer:
 Currently enabled: yes
 Currently running: no
 Failed balancer rounds in last 5 attempts: 0
 Migration Results for the last 24 hours:
 No recent migrations
 databases:
 { "_id" : "admin", "partitioned" : false, "primary" : "config" }
 { "_id" : "enron_mail", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "mydocs", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "sbtest", "partitioned" : false, "primary" : "rs01" }
 { "_id" : "ycsb", "partitioned" : true, "primary" : "rs01" }
 ycsb.usertable
 shard key: { "_id" : 1 }
 chunks:
 rs01 522
 rs02 522
 too many chunks to print, use verbose if you want to force print
 { "_id" : "test", "partitioned" : false, "primary" : "rs02" }


Getting started with MongoDB shell and pymongo

In my last blog article I described how to set up a MongoDB instance in a VM. In order to use that VM and run various diagnostic commands, we are going to need some data to play with. The easiest way to get data is to use a data science archive. I was able to find the Enron Mail Corpus in mongodump format (credit for this must go to Bryan Nehl). It then becomes trivially easy to import the ~500,000 emails in the corpus as MongoDB documents. See below.

We uncompress and extract the downloaded tarfile to get the dump directory structure…

...
drwxr-xr-x. 3 mongod mongod 4096 Jan 18 2012 dump
-rw-r--r--. 1 mongod mongod 1459855360 Feb 2 2012 enron_mongo.tar
...

and use the mongorestore utility to load the database – we don't specify any additional CLI options, as mongorestore looks for the dump directory structure in the current working directory by default:

$ mongorestore
2015-07-28T14:03:20.079+0100 using default 'dump' directory
2015-07-28T14:03:20.108+0100 building a list of dbs and collections to restore from dump dir
2015-07-28T14:03:20.131+0100 no metadata file; reading indexes from dump/enron_mail/system.indexes.bson
2015-07-28T14:03:20.140+0100 restoring enron_mail.messages from file dump/enron_mail/messages.bson
2015-07-28T14:03:23.124+0100 [##......................] enron_mail.messages 142.0 MB/1.4 GB (10.2%)
2015-07-28T14:03:26.124+0100 [#####...................] enron_mail.messages 337.5 MB/1.4 GB (24.2%)
2015-07-28T14:03:29.124+0100 [########................] enron_mail.messages 499.2 MB/1.4 GB (35.9%)
2015-07-28T14:03:32.124+0100 [###########.............] enron_mail.messages 645.9 MB/1.4 GB (46.4%)
2015-07-28T14:03:35.124+0100 [##############..........] enron_mail.messages 828.4 MB/1.4 GB (59.5%)
2015-07-28T14:03:38.124+0100 [#################.......] enron_mail.messages 1003.7 MB/1.4 GB (72.1%)
2015-07-28T14:03:41.124+0100 [####################....] enron_mail.messages 1.1 GB/1.4 GB (83.5%)
2015-07-28T14:03:44.124+0100 [######################..] enron_mail.messages 1.3 GB/1.4 GB (94.6%)
2015-07-28T14:03:45.326+0100 restoring indexes for collection enron_mail.messages from metadata
2015-07-28T14:03:45.372+0100 finished restoring enron_mail.messages
2015-07-28T14:03:45.372+0100 done

We can now see the database in a local mongo shell session:

> show dbs
enron_mail 1.435GB
local 0.000GB
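
A quick count (a sanity check sketch) confirms the restore roughly matches the ~500,000 emails in the corpus:

> use enron_mail
switched to db enron_mail
> db.messages.count()      // expect a figure on the order of 500,000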

You may need to remove the database at some point – for example, if you run subsequent benchmarks that reload a test database from scratch. In that case, run the following command to drop the current database prior to reloading it afresh.

from the mongo shell, using the sbtest database as an example:

> use sbtest
switched to db sbtest
> db.runCommand( { dropDatabase: 1 } )
{ "dropped" : "sbtest", "ok" : 1 }
>
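
The db.dropDatabase() shell helper wraps the same server command, so the equivalent is:

> use sbtest
switched to db sbtest
> db.dropDatabase()
{ "dropped" : "sbtest", "ok" : 1 }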

Configuration and sizing

The following commands can be used to size a database working set. This is useful in terms of platform design and capacity planning. The db.serverStatus() command gives a great deal of information about the running instance; we will only concern ourselves with the memory component at this point. Note that it is imperative for good performance that the working set and its associated indexes are always held in RAM. For pre-3.0 versions of MongoDB:

db.serverStatus({workingSet:1}).workingSet
...
"pagesInMemory" : 91521
...

Multiply the working set pages by PAGESIZE to get the size in bytes:

# getconf PAGESIZE
4096

The db.stats() command provides the size of the indexes in use:

db.stats().indexSize
7131826688

So for our example this can be calculated as follows :

(91521 * 4096) + 7131826688 ~ 7 GB
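
The same arithmetic can be done in a single line from the mongo shell on pre-3.0 (MMAPv1) instances; a sketch, run against the database being sized:

> db.serverStatus({workingSet:1}).workingSet.pagesInMemory * 4096 + db.stats().indexSize   // bytes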

As of MongoDB 3.0 the working set section is no longer available – the document now returns:

> db.serverStatus().mem
{ 
 "bits" : 64,
 "resident" : 20466,
 "virtual" : 148248,
 "supported" : true,
 "mapped" : 73725,
 "mappedWithJournal" : 147450
}

The above sizes are in megabytes (MB): resident is the resident set size of the mongod process, virtual its virtual memory, mapped the amount of memory-mapped data, and mappedWithJournal the mapped memory including the memory used for journaling. These numbers can be used to allocate sufficient RAM to your guest VM database host.

The following db.hostInfo() command reveals, among other things, the instruction set supported by the VM, various operating system limits and settings, and whether NUMA is disabled:

> db.hostInfo()
{
 "system" : {
 "currentTime" : ISODate("2015-07-28T13:45:36.093Z"),
 "hostname" : "mongowt01",
 "cpuAddrSize" : 64,
 "memSizeMB" : 64427,
 "numCores" : 8,
 "cpuArch" : "x86_64",
 "numaEnabled" : false
 },
 "os" : {
 "type" : "Linux",
 "name" : "CentOS release 6.6 (Final)",
 "version" : "Kernel 2.6.32-504.el6.x86_64"
 },
 "extra" : {
 "versionString" : "Linux version 2.6.32-504.el6.x86_64 
 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Oct 15 04:27:16 UTC 2014",
 "libcVersion" : "2.12",
 "kernelVersion" : "2.6.32-504.el6.x86_64",
 "cpuFrequencyMHz" : "2799.998",
 "cpuFeatures" : "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb lm constant_tsc rep_good unfair_spinlock pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm xsaveopt fsgsbase smep erms",
 "pageSize" : NumberLong(4096),
 "numPages" : 16493566,
 "maxOpenFiles" : 65536
 },
 "ok" : 1
}
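
Putting the two together, a rough sketch (not an official sizing formula) that compares the mongod memory figures against the RAM presented to the guest:

> var mem = db.serverStatus().mem
> var ramMB = db.hostInfo().system.memSizeMB
> print("mapped: " + mem.mapped + " MB, resident: " + mem.resident + " MB, guest RAM: " + ramMB + " MB")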

Backups/Snapshots

In order to take a crash-consistent backup, the following command sequence is required before and after the backup:

> db.fsyncLock()
{
 "info" : "now locked against writes, use db.fsyncUnlock() to unlock",
 "seeAlso" : "http://dochub.mongodb.org/core/fsynccommand",
 "ok" : 1
}

perform a host-level OS backup or, better still, take a VM-centric snapshot, and then…

> db.fsyncUnlock()
{ "ok" : 1, "info" : "unlock completed" }
>

Delving into the database structure, show collections will list the document collections within a database (in this case, the previously loaded enron_mail db) and you can use that information to inspect individual documents:

> show collections
messages

These next commands can be used to retrieve a document or set of documents. The document below has been edited to retain the privacy of the original sender.

> db.messages.findOne()
{
 "_id" : ObjectId("4f16fc97d1e2d32371003e27"),
"body" : "the scrimmage is still up in the air...\n\n\nwebb said that they didnt want to scrimmage...\n\nthe aggies are scrimmaging each other... (the aggie teams practiced on \nSunday)\n\nwhen I called the aggie captains to see if we could use their field.... they \nsaid that it was tooo smalll for us to use...\n\n\nsounds like bullsh*t to me... but what can we do....\n\n\nanyway... we will have to do another practice Wed. night.... and I dont' \nknow where we can practice.... any suggestions...\n\n\nalso, we still need one more person...",
"subFolder" : "notes_inbox",
<snip>
> db.messages.findOne({ "_id" : ObjectId("4f16fc97d1e2d32371003e27") })
> db.messages.find().limit(3)

Just for the record – the database can be manually shut down using:

> use admin
> db.shutdownServer()

Performance issues

db.currentOp() reports the operations currently in progress on the instance, allowing admins to locate any queries or write operations that are running slowly.

> db.currentOp()
{
 "inprog" : [
 {
 "desc" : "conn374",
 "threadId" : "0x16574c340
 "connectionId" : 374,
 "opid" : 1032339378,
 "active" : true,
 "secs_running" : 0,
 "microsecs_running" : NumberLong(105738),
 "op" : "insert",
 "ns" : "sbtest.sbtest6",
 "insert" : {
<snip>
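
The full inprog list can get long on a busy instance. db.currentOp() also accepts a filter document, so, as a sketch, only operations that have been active for more than a few seconds can be listed (the 3-second cut-off is an arbitrary choice):

> db.currentOp({ "active" : true, "secs_running" : { "$gt" : 3 } })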

A badly behaving database operation can be killed using:

> db.killOp(1032339378)
{ "info" : "attempting to kill op" }

In order to see recent operations that took 100 milliseconds (the default slow-operation threshold) or more, you can enable profiling – see below (output shortened).

setProfilingLevel() takes 0 for no profiling, 1 for slow operations only, or 2 for all operations. You can add a second argument to change the threshold for what is considered a slow database operation – for example, it can be reduced to 10 ms.

> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 100, "ok" : 1 }

> db.system.profile.find()
{ 
"op" : "insert", 
"ns" : "sbtest.sbtest6", 
"query" : { 
 "_id" : 2628714, 
 "k" : 4804469, 
 "c" : "42025084972-52016328459-02616906732-06037924356-25803606931-90180435635-33434735556-64942463775-51942983544-69579223058", 
 "pad" : "83483501744-16275794559-91512432879-42096600452- 97899816846" 
 }, 
 "ninserted" : 1, 
 "keyUpdates" : 0, 
 "writeConflicts" : 0,
 "numYield" : 0,
 "locks" : {
 "Global" : {
 "acquireCount" : { 
 "w" : NumberLong(720) 
 }
 },
 "Database" : { 
 "acquireCount" : { 
 "w" : NumberLong(720)
 } 
 },
 "Collection" : {
 "acquireCount" : {
 "w" : NumberLong(720)
 }
 }
 }, 
"millis" : 0,
 "execStats" : { },
 "ts" : ISODate("2015-07-28T16:37:01.959Z"),
 "client" : "10.68.64.112",
 "allUsers" : [ ],
 "user" : "" }
<snipped>
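
Rather than dumping the whole profile collection, the profiler can be restricted to slow operations and then queried for the most recent offenders; a sketch (the 10 ms threshold and the limit of five are arbitrary choices here):

> db.setProfilingLevel(1, 10)
> db.system.profile.find({ millis : { $gt : 10 } }).sort({ ts : -1 }).limit(5).pretty()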

So far we have simply been working on a previously created database. If we wanted to generate a workload, we would need to use a well-known synthetic workload generator such as sysbench or YCSB (more on these in a future post). One other alternative is to use the pymongo driver to connect to a MongoDB instance and then use standard Python idioms to call the MongoDB API. To install the pymongo driver, either install it via pip (the commands below pull python-pip from the EPEL repo on RHEL-based Linux) or clone the git repo and build the driver manually.

sudo yum -y install epel-release
sudo yum -y install python-pip
sudo pip install pymongo

or...

git clone git://github.com/mongodb/mongo-python-driver.git
cd mongo-python-driver
python setup.py install

The following python interpreter session shows the basics of connecting to a MongoDB instance and how to load documents into a collection. This could be extended to do various read and write based workloads depending on what you are looking to test or characterise.

create a database client connection:

>>> from pymongo import MongoClient
>>> uri = 'mongodb://10.68.64.111:27017'
>>> conn = MongoClient(uri)

create a document collection object:

>>> collection = conn.mydocs.docs

Inserting documents:

>>> doc1 = {'author': 'Ray Hassan', 'title': 'My first doc'}
>>> conn.mydocs.docs.insert_one(doc1)
<pymongo.results.InsertOneResult object at 0x11b9780>
>>> doc2 = {'author': 'Ray Hassan', 'title': 'My 2nd doc'}
>>> conn.mydocs.docs.insert_one(doc2)
<pymongo.results.InsertOneResult object at 0x11b90f0>

retrieving documents by iterating over the query cursor:

>>> cursor = collection.find()
>>> for doc in cursor: print doc
...
{u'_id': ObjectId('55b8ec5bd7cf7a74c8bdd3bf'), u'author': u'Ray Hassan', u'title': u'My first doc'}
{u'_id': ObjectId('55b8ec69d7cf7a74c8bdd3c0'), u'author': u'Ray Hassan', u'title': u'My 2nd doc'}

If we want to improve the performance of a particular query we can use the explain() command. First, let's take a look at the explain() output from a query on a field that has no index:

>>> collection.find({'author': 'Ray Hassan'}).explain()
{u'executionStats': {u'executionTimeMillis': 0, u'nReturned': 4, u'totalKeysExamined': 0, u'allPlansExecution': [], u'executionSuccess': True, u'executionStages': {u'docsExamined': 4, u'restoreState': 0, u'direction': u'forward',u'saveState': 0, u'isEOF': 1, u'needFetch': 0, u'nReturned': 4, u'needTime': 1, u'filter': {u'author': {u'$eq': u'Ray 
Hassan'}}, u'executionTimeMillisEstimate': 0, u'invalidates': 0, u'works': 6, u'advanced': 4, u'stage': u'COLLSCAN'}, u'totalDocsExamined': 4}, u'queryPlanner': {u'parsedQuery': {u'author': {u'$eq': u'Ray Hassan'}}, u'rejectedPlans': [], u'namespace': u'mydocs.docs', u'winningPlan': {u'filter': {u'author': {u'$eq': u'Ray Hassan'}}, u'direction': u'forward', u'stage': u'COLLSCAN'}, u'indexFilterSet': False, u'plannerVersion': 1}, u'serverInfo': {u'host': u'mongowt01', u'version': u'3.0.3', u'port': 27017, u'gitVersion': u'b40106b36eecd1b4407eb1ad1af6bc60593c6105 modules: enterprise'}

Without an index the query above has to perform a full scan of the collection (COLLSCAN) and we get 4 documents returned (nReturned). In order to improve the performance of the query we could consider adding an index to one of the document fields.

>>> from pymongo import ASCENDING, DESCENDING
>>> collection.create_index([('author', ASCENDING)])
u'author_1'

>>> collection.find({'author': 'Ray Hassan'}).explain()
{u'executionStats': {u'executionTimeMillis': 0, u'nReturned': 4, u'totalKeysExamined': 4, u'allPlansExecution': [], u'executionSuccess': True, u'executionStages': {u'restoreState': 0, u'docsExamined': 4, u'saveState': 0, u'isEOF': 1, u'inputStage': {u'matchTested': 0, u'restoreState': 0, u'direction': u'forward', u'saveState': 0, u'indexName': 
u'author_1', u'dupsTested': 0, u'isEOF': 1, u'needFetch': 0, u'nReturned': 4, u'needTime': 0, u'seenInvalidated': 0, u'dupsDropped': 0, u'keysExamined': 4, u'indexBounds': {u'author': [u'["Ray Hassan", "Ray Hassan"]']}, u'executionTimeMillisEstimate': 0, u'isMultiKey': False, u'keyPattern': {u'author': 1}, u'invalidates': 0, u'works': 4, u'advanced': 4, u'stage': u'IXSCAN'}, u'needFetch': 0, u'nReturned': 4, u'needTime': 0, 
u'executionTimeMillisEstimate': 0, u'alreadyHasObj': 0, u'invalidates': 0, u'works': 5, u'advanced': 4, u'stage': u'FETCH'}, u'totalDocsExamined': 4}, u'queryPlanner': {u'parsedQuery': {u'author': {u'$eq': u'Ray Hassan'}}, u'rejectedPlans': [], u'namespace': u'mydocs.docs', u'winningPlan': {u'inputStage': {u'direction': u'forward', u'indexName': u'author_1', u'indexBounds': {u'author': [u'["Ray Hassan", "Ray Hassan"]']}, u'isMultiKey': False, u'stage': u'IXSCAN', u'keyPattern': {u'author': 1}}, u'stage': u'FETCH'}, u'indexFilterSet': False, u'plannerVersion': 1}, 
u'serverInfo': {u'host': u'mongowt01', u'version': u'3.0.3', u'port': 27017, u'gitVersion': u'b40106b36eecd1b4407eb1ad1af6bc60593c6105 modules: enterprise'}}

In the above output we can see that, since adding the index, the query now performs an index scan (IXSCAN). An appropriately chosen index reduces the number of documents that have to be examined to satisfy a query. In our case (a very trivial example where every document matches) there is no saving, but on a larger collection with a more selective query the index scan would be noticeably more performant.

The above merely touches on what can be done to meet NoSQL workload testing requirements. I do hope, however, that you find it a good place to start.