The Creator Supreme! … or How to be the Kenny Dalglish of Nutanix API automation

For the non-European football fans out there, Kenny Dalglish, or “King Kenny” as he was known to both the Liverpool and Celtic faithful, was once described in a match commentary as “the creator supreme”. In this short series of posts covering the REST capabilities for managing the Nutanix Enterprise Cloud, I hope to show it’s possible for us all to be a “creator supreme”, just like King Kenny!

Swagger

The first place to look when working with the API is the REST API Explorer itself. You invoke the Explorer by clicking the admin pull-down menu (top right of the Prism home page) and selecting the REST API Explorer option.

It’s good practice to create a separate user login for the REST Explorer. The REST API Explorer is essentially the documentation for the API. It’s produced via Swagger, an open specification that takes your API definition and generates interactive documentation: the docs can be browsed and API calls tested interactively, all from a browser.

Images

Let’s start by taking a look at a simple POST method that will list all currently available images:

POST /images/list

Select the above method in the images section of the REST API Explorer to expand the method details:

In order to try out the REST method, double-click on the Model Schema box (right-hand side above); its contents are then populated into the get_entities_request box on the left-hand side. You can edit the entities according to the information you want to retrieve. For example, here’s the bare minimum you need as a JSON payload to request information from the Images catalogue:

 {
   "kind": "image",
   "offset": 0,
   "length": 10
 }

Note the pagination: we start at offset zero, i.e. the first image, and ask for up to ten images, as defined by the length parameter. With the JSON payload entered as above, we can press the Try it out! button and see the method in action.
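
If you later need to page through a larger image catalogue, simply advance the offset by the page size on each call. As a minimal sketch (the file name list_images_page2_v3.json is just an example), the payload for the second page of ten can be written straight to a file with a shell heredoc:

cat > list_images_page2_v3.json <<'EOF'
{
  "kind": "image",
  "offset": 10,
  "length": 10
}
EOF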

The results of the method call are displayed below it. The curl syntax for invoking the method and the JSON payload are shown, along with the Request URL and the Response Body. We can use the curl syntax to call the method programmatically outside of the Explorer, from Bash or Python for example.

Once we begin to use the methods independently of the Explorer, it’s worth installing a JSON command-line processor such as jq alongside curl, and using a JSON linter to validate the syntax of your data payloads. How these tools might be used is shown throughout this post.
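
For instance, jq on its own makes a handy local validator before a payload ever reaches the API; a quick sketch, assuming the payload file created earlier:

# parse the payload locally; jq exits non-zero if the JSON is malformed
jq empty list_images_v3.json && echo "payload OK"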

CURL

Let’s recap the previous POST method, but this time run it from the command line. In this instance we load the JSON payload (see above) from the file list_images_v3.json using the -d option. The -k option, or --insecure, allows the command to proceed even though I am using self-signed SSL/TLS certs. The -s option simply disables all progress indicators.

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.XX.XX.60:9440/api/nutanix/v3/images/list" | jq

Piping the output from the curl command into jq provides formatted, syntax-highlighted output that’s easier to read. To make this more obvious, let’s use some additional options on the jq command line and pull out just one image reference:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name=="CentOS7-x86_64-Generic Cloud")'

 {
   "status": {
     "state": "COMPLETE",
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "retrieval_uri_list": [
         "https://127.0.0.1:9440/api/nutanix/v3/images//file"
       ],
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64",
       "size_bytes": 8589934592
     },
     "description": "Generic Cloud"
   },
   "spec": {
     "name": "CentOS7-x86_64-Generic Cloud",
     "resources": {
       "image_type": "DISK_IMAGE",
       "architecture": "X86_64"
     },
     "description": "Generic Cloud"
   },
   "metadata": {
     "last_update_time": "2019-03-27T10:47:15Z",
     "kind": "image",
     "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
     "spec_version": 0,
     "creation_time": "2019-03-27T10:47:15Z",
     "categories": {}
   }
 }

All well and good if you know the exact name of your image. What if you don’t? See below:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | select (.spec.name | . and contains("CentOS"))'

{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 713031680
    },
    "description": "Minimal"
  },
  "spec": {
    "name": "CentOS7-x86_64-Minimal",
    "resources": {
      "image_type": "ISO_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Minimal"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "dd482003-99f4-45df-9406-1dc9859418c4",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}
{
  "status": {
    "state": "COMPLETE",
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "retrieval_uri_list": [
        "https://127.0.0.1:9440/api/nutanix/v3/images//file"
      ],
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64",
      "size_bytes": 8589934592
    },
    "description": "Generic Cloud"
  },
  "spec": {
    "name": "CentOS7-x86_64-Generic Cloud",
    "resources": {
      "image_type": "DISK_IMAGE",
      "architecture": "X86_64"
    },
    "description": "Generic Cloud"
  },
  "metadata": {
    "last_update_time": "2019-03-27T10:47:15Z",
    "kind": "image",
    "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef",
    "spec_version": 0,
    "creation_time": "2019-03-27T10:47:15Z",
    "categories": {}
  }
}

One of the prime uses for this kind of command is to retrieve only the information required when populating the schema for another REST method (as we’ll see shortly). For example, you may want only a subset of entries, conveniently labelled:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" | jq '.entities[] | {name: .spec.name, type: .spec.resources.image_type, uuid: .metadata.uuid} | select (.name | . and contains("CentOS"))'
{
  "name": "CentOS7-x86_64-Minimal",
  "type": "ISO_IMAGE",
  "uuid": "dd482003-99f4-45df-9406-1dc9859418c4"
}
{
  "name": "CentOS7-x86_64-Generic Cloud",
  "type": "DISK_IMAGE",
  "uuid": "04a18eb0-a3ed-4ff7-aa43-bdbb055a96ef"
}
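
Output like this also lends itself to scripting. As a small sketch (the variable name and image name here are just examples, and <password> follows the same placeholder convention as the commands above), you can capture a UUID straight into a shell variable with jq's raw output mode, ready for use in a later call such as the DELETE shown further down:

IMAGE_UUID=$(curl -s --user apiuser:<password> -k -X POST \
  --header "Content-Type: application/json" --header "Accept: application/json" \
  -d @list_images_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images/list" \
  | jq -r '.entities[] | select(.spec.name=="CentOS7-x86_64-Minimal") | .metadata.uuid')
echo "${IMAGE_UUID}"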

Upload

Let’s have a look at uploading an image to the image repository on your Prism Central instance. The following is the required schema:

cat upload_image_v3.json
{
     "spec": {
         "name": "test",
         "resources": {
             "version": {
                 "product_version": "test",
                 "product_name": "test"
             },
             "architecture": "X86_64",
             "image_type": "DISK_IMAGE",
             "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
         }
     },
     "api_version": "3.1.0",
     "metadata": {
         "kind": "image"
     }
 }

which we can use as follows:

curl -s --user apiuser:<password> -k -X POST --header "Content-Type: application/json" --header "Accept: application/json" -d @upload_image_v3.json "https://10.68.64.60:9440/api/nutanix/v3/images" | jq .
 {
   "status": {
     "state": "PENDING",
     "execution_context": {
       "task_uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"
     }
   },
   "spec": {
     "name": "test",
     "resources": {
       "image_type": "DISK_IMAGE",
       "source_uri": "http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img",
       "version": {
         "product_version": "test",
         "product_name": "test"
       },
       "architecture": "X86_64"
     }
   },
   "api_version": "3.1",
   "metadata": {
     "owner_reference": {
       "kind": "user",
       "uuid": "00000000-0000-0000-0000-000000000000",
       "name": "admin"
     },
     "kind": "image",
     "spec_version": 0,
     "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
   }
 }

Tasks

Notice the PENDING status in the output above. We can follow the progress of the image upload by using the task_uuid entry in a tasks method call. The call can be repeated until the task, in this case the upload, is complete.

curl -s --user apiuser:<password> -k -X GET --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/tasks/f1456be3-21a8-45ab-9dc3-c323973e6f3f" | jq .
 {
   "status": "RUNNING",
   "last_update_time": "2019-05-30T13:19:00Z",
   "logical_timestamp": 1,
   "entity_reference_list": [
     {
       "kind": "image",
       "uuid": "b87c7183-8716-4051-8a35-da69fdbf1e60"
     }
   ],
   "start_time": "2019-05-30T13:19:00Z",
   "creation_time": "2019-05-30T13:18:59Z",
   "start_time_usecs": 1559222340033005,
   "cluster_reference": {
     "kind": "cluster",
     "uuid": "e0cca748-66c4-45fb-95e2-10836439ea15"
   },
   "subtask_reference_list": [],
   "progress_message": "create_image_intentful",
   "creation_time_usecs": 1559222339906023,
   "operation_type": "create_image_intentful",
   "percentage_complete": 0,
   "api_version": "3.1",
   "uuid": "f1456be3-21a8-45ab-9dc3-c323973e6f3f"

Delete

Finally, let’s delete the image. This is done by specifying the image UUID in the DELETE method call. We covered how to get the UUID for an image (or any entity, really) above, so let’s just show the call:

curl -s --user apiuser:<password> -k -X DELETE --header "Content-Type: application/json" --header "Accept: application/json" "https://10.68.64.60:9440/api/nutanix/v3/images/081e562f-6c26-4897-bc36-a74e4843bb57" | jq .
 {
   "status": {
     "state": "DELETE_PENDING",
     "execution_context": {
       "task_uuid": "b9ae5dce-c79d-4bca-b77d-b322949f71e5"
     }
   },
   "spec": "",
   "api_version": "3.1",
   "metadata": {
     "kind": "image"
   }
 }

You can of course track the deletion progress via the tasks method using the task_uuid produced above.

Conclusion

Useful Resources recap:
HTTP Response status codes 
jq
curl
JSON Lint - The JSON Validator

Hopefully this helps people get started on their API journey. We have barely scratched the surface of what can be done, but this post should at least demystify where and how to make a start. In subsequent posts I hope to show more ways to glean information from the API Explorer itself and how to use it to build more complex REST methods. Until then, check out the Nutanix Developer Community site. Good luck, creators!

ELK on Nutanix: Elasticsearch

In this second post on using Ansible to deploy the ELK stack on Nutanix, I will cover my initial draft of a playbook for Elasticsearch (ES). Recall from my previous post that the playbook layout looks like this:

[ansible@ansible-host01 roles]$ tree elastic
elastic
├── files
│   └── elasticsearch.repo
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
│   ├── elasticsearch.default.j2
│   ├── elasticsearch.in.sh.j2
│   └── elasticsearch.yml.j2
└── vars
    └── main.yml

There’s also an additional role in play here, config, which provides the basic configuration of the underlying guest OS and which we also need to look at:

[ansible@ansible-host01 roles]$ tree config
config
├── files
├── handlers
├── tasks
│   └── main.yml
├── templates
└── vars
    └── main.yml

The config role is where I set things via the Ansible sysctl module, or add entries to files (using lineinfile) in order to set max memory, ulimits and so on. It’s generic system configuration, for example:

#installing java runtime pkgs (pre-req for ELK)
- name: install java 8 runtime
  become: true
  yum: name=java state=installed
  tags: config

#set system max/min numbers...
- name: set maximum map count in sysctl/systemd
  become: true
  sysctl: name=vm.max_map_count value={{ os_max_map_count }} state=present
  tags: config

...

- name: set soft limits for open files
  become: true
  lineinfile: dest=/etc/security/limits.conf line="{{ elasticsearch_user }} soft nofile {{ elasticsearch_max_open_files }}" insertafter=EOF backup=yes
  tags: config

- name: set max locked memory
  become: true
  lineinfile: dest=/etc/security/limits.conf line="{{ elasticsearch_user }} - memlock {{ elasticsearch_max_locked_memory }}" insertafter=EOF backup=yes
  tags: config

...

This might be a good time to touch on how Ansible lets you set variables. Within each role’s directory there’s a subdirectory called vars, and all the variables needed for that role live in the YAML file there (main.yml). Here’s a snippet:

# can use vars to set versioning and user 
elasticsearch_version: 1.7.0
elasticsearch_user: elasticsearch
...

# here's how we can specify the data volumes that ES will use 
elasticsearch_data_dir: /esdata/data01,/esdata/data02,/esdata/data03,/esdata/data04,/esdata/data05,/esdata/data06

...

# Virtual memory settings - ES heap is set to half my current VM RAM
# but no greater than 32GB for performance reasons
elasticsearch_heap_size: 16g
elasticsearch_max_locked_memory: unlimited
elasticsearch_memory_bootstrap_mlockall: "true"

....

# Good idea not to go with the ES default names of Franz Kafka etc
elasticsearch_cluster_name: nx-elastic
elasticsearch_node_name: nx-esnode01

# My initial nodes will be both cluster quorum members and data "workhorse" nodes.
# I will separate duties as I scale. Also I set the min master nodes to 1 so that
# my ES cluster comes up while initially testing a single index
elasticsearch_node_master: "true"
elasticsearch_node_data: "true"
elasticsearch_discovery_zen_minimum_master_nodes: 1

We’ll see how these variables are used as we cover more features. Next up, I used some nice features such as the shell module together with registered variables to provide conditional behaviour for the package install:

- name: check for previous elasticsearch installation
  shell: if [ -e /usr/share/elasticsearch/lib/elasticsearch-{{ elasticsearch_version }}.jar ]; then echo yes; else echo no; fi;
  register: version_exists
  always_run: True
  tags: elastic

- name: uninstalling previous version if applicable
  become: true
  command: yum erase -y elasticsearch
  when: version_exists.stdout == 'no'
  ignore_errors: true
  tags: elastic

and similarly for the Marvel plugin:

- name: check marvel plugin installed
  become: true
  stat: path={{ elasticsearch_home_dir }}/plugins/marvel
  register: marvel_installed
  tags: elastic

- name: install marvel plugin
  become: true
  command: "{{ elasticsearch_home_dir }}/bin/plugin -i elasticsearch/marvel/latest"
  notify:
    - restart elasticsearch
  when: not marvel_installed.stat.exists
  tags: elastic

The Marvel plugin stanza above also makes use of the stat module, which is a really useful one: it returns all the goodness you would normally expect from a stat() system call, right inside your Ansible playbook. There are a couple more things I will cover here, leaving the rest for when I talk about Kibana and Logstash in a follow-up post. First up are templates. Ansible uses Jinja2 templating to transform a file and install it on your target host; you create a file with the appropriate template markup as below. The variables in {{ .. }} come from the role’s vars directory (the main.yml file described earlier).

Note: I have stripped all comment lines for the sake of brevity.

[ansible@ansible-host01 templates]$ pwd
/home/ansible/elk/roles/elastic/templates
[ansible@ansible-host01 templates]$ grep -v ^# elasticsearch.yml.j2
{% if elasticsearch_cluster_name is defined %}
cluster.name: {{ elasticsearch_cluster_name }}
{% endif %}

...

{% if elasticsearch_node_name is defined %}
node.name: {{ elasticsearch_node_name }}
{% endif %}

...

{% if elasticsearch_node_master is defined %}
node.master: {{ elasticsearch_node_master }}
{% endif %}
{% if elasticsearch_node_data is defined %}
node.data: {{ elasticsearch_node_data }}
{% endif %}

...

{% if elasticsearch_memory_bootstrap_mlockall is defined %}
bootstrap.mlockall: {{ elasticsearch_memory_bootstrap_mlockall }}
{% endif %}
....

When the play runs, the template file is transformed using the provided variables and copied into place on my target ELK VM:

- name: copy elasticsearch defaults file
  become: true
  template: src=elasticsearch.default.j2 dest=/etc/sysconfig/elasticsearch owner={{ elasticsearch_user }} group={{ elasticsearch_group }} mode=0644
  notify:
    - restart elasticsearch
  tags: elastic

So let’s see how our playbook runs and what the output looks like:

[ansible@ansible-host01 elk]$ ansible-playbook -i ./production site.yml \
--tags "config,elastic" --ask-sudo-pass
SUDO password:

PLAY [elastic-hosts] **********************************************************

GATHERING FACTS ***************************************************************
ok: [10.68.64.117]

TASK: [config | install java 8 runtime] ***************************************
ok: [10.68.64.117]

TASK: [config | set swappiness in sysctl/systemd] *****************************
ok: [10.68.64.117]

TASK: [config | set maximum map count in sysctl/systemd] **********************
ok: [10.68.64.117]

TASK: [config | set hard limits for open files] *******************************
ok: [10.68.64.117]

TASK: [config | set soft limits for open files] *******************************
ok: [10.68.64.117]

TASK: [config | set max locked memory] ****************************************
ok: [10.68.64.117]

TASK: [config | Install wget package (Fedora based)] **************************
ok: [10.68.64.117]

TASK: [elastic | install elasticsearch signing key] ***************************
changed: [10.68.64.117]

TASK: [elastic | copy elasticsearch repo] *************************************
ok: [10.68.64.117]

TASK: [elastic | check for previous elasticsearch installation] ***************
changed: [10.68.64.117]

TASK: [elastic | uninstalling previous version if applicable] *****************
skipping: [10.68.64.117]

TASK: [elastic | install elasticsearch pkgs] **********************************
skipping: [10.68.64.117]

TASK: [elastic | copy elasticsearch configuration file] ***********************
ok: [10.68.64.117]

TASK: [elastic | copy elasticsearch defaults file] ****************************
ok: [10.68.64.117]

TASK: [elastic | set max memory limit in systemd file (RHEL/CentOS 7+)] *******
changed: [10.68.64.117]

TASK: [elastic | set log directory permissions] *******************************
ok: [10.68.64.117]

TASK: [elastic | set data directory permissions] ******************************
ok: [10.68.64.117]

TASK: [elastic | ensure elasticsearch running and enabled] ********************
ok: [10.68.64.117]

TASK: [elastic | check marvel plugin installed] *******************************
ok: [10.68.64.117]

TASK: [elastic | install marvel plugin] ***************************************
skipping: [10.68.64.117]

NOTIFIED: [elastic | restart elasticsearch] ***********************************
changed: [10.68.64.117]

PLAY RECAP ********************************************************************
10.68.64.117 : ok=19 changed=4 unreachable=0 failed=0

[ansible@ansible-host01 elk]$

I can verify that my ES cluster is working by querying the Cluster API. Note that the red status is down to the fact that I have no other cluster nodes yet on which to replicate the index shards:

# curl -XGET http://localhost:9200/_cluster/health?pretty
{
 "cluster_name" : "nx-elastic",
 "status" : "red",
 "timed_out" : false,
 "number_of_nodes" : 1,
 "number_of_data_nodes" : 1,
 "active_primary_shards" : 0,
 "active_shards" : 0,
 "relocating_shards" : 0,
 "initializing_shards" : 0,
 "unassigned_shards" : 0,
 "delayed_unassigned_shards" : 0,
 "number_of_pending_tasks" : 0,
 "number_of_in_flight_fetch" : 0
}

You can use further API queries to verify that the desired configuration is in place; at that point you have a solid, repeatable deployment with a known outcome, i.e. you are doing DevOps.
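
As a sketch of what those verification queries might look like (these are standard Elasticsearch 1.x endpoints, so adjust to the version you are running):

# list the nodes, their roles and resource usage as the cluster sees them
curl -XGET 'http://localhost:9200/_cat/nodes?v'

# confirm process-level settings, e.g. whether mlockall actually took effect
curl -XGET 'http://localhost:9200/_nodes/process?pretty'

# compare the node's applied settings with what the playbook templated out
curl -XGET 'http://localhost:9200/_nodes/settings?pretty'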

Using Ansible to deploy the ELK stack on Nutanix

Just recently my colleague Andrew Nelson (@vmwnelson) posted an article on setting up Ansible on the Nutanix platform. I am also using Ansible, developing playbooks and the like to deploy the ELK stack components (Elasticsearch, Logstash, Kibana) on a block here at Nutanix. My initial aim is to set up a single index in an Elasticsearch cluster (single node for now) and use Logstash to pipe in data to be indexed. On top of that, I intend to use Kibana and the Marvel plugin to measure the point at which my index begins to struggle, based on things like OS-level resource consumption as viewed in Marvel.

From a virtual machine perspective, I have a Fedora 22 based gold image. From this base image I clone one VM to be the Ansible master that I will run playbooks (orchestration) from, and another VM that I will deploy my ELK stack to. This second “target” VM has had 7 vDisks added to it. The idea is that Elasticsearch (ES) can use a comma-separated list of data paths; in my case I created six linear LVM data volumes on those vDisks, plus a seventh for the ES log directory (a sketch of how one of these volumes might be built follows the lvs output below). ES writes to the data volumes in a round-robin fashion, so the data gets “striped”, and Nutanix vDisks are already redundant, so we are getting a kind of RAID 10 for free! Here’s how my disk layout looks once configured and mounted (I am using XFS as my filesystem) on the target VM:

[root@elkhost01 ~]# df -h
/dev/mapper/esdata05-esdata05 200G 271M 200G 1% /esdata/data05
/dev/mapper/esdata03-esdata03 200G 291M 200G 1% /esdata/data03
/dev/mapper/esdata04-esdata04 200G 273M 200G 1% /esdata/data04
/dev/mapper/esdata02-esdata02 200G 271M 200G 1% /esdata/data02
/dev/mapper/esdata06-esdata06 200G 291M 200G 1% /esdata/data06
/dev/mapper/eslog-eslog 100G 150M 100G 1% /var/log/elasticsearch
/dev/mapper/esdata01-esdata01 200G 279M 200G 1% /esdata/data01

and

[root@elkhost01 ~]# lvs
 LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
 esdata01 esdata01 -wi-ao---- 200.00g
 esdata02 esdata02 -wi-ao---- 200.00g
 esdata03 esdata03 -wi-ao---- 200.00g
 esdata04 esdata04 -wi-ao---- 200.00g
 esdata05 esdata05 -wi-ao---- 200.00g
 esdata06 esdata06 -wi-ao---- 200.00g
 eslog eslog -wi-ao---- 100.00g
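
For reference, here is a minimal sketch of how one of those linear LVM volumes might be built; the device name /dev/sdb is an assumption, so substitute the vDisk device you actually see (e.g. in lsblk), and repeat for each data disk:

# carve a single linear LV out of one data vDisk and put XFS on it
pvcreate /dev/sdb
vgcreate esdata01 /dev/sdb
lvcreate -n esdata01 -l 100%FREE esdata01
mkfs.xfs /dev/esdata01/esdata01
mkdir -p /esdata/data01
mount /dev/esdata01/esdata01 /esdata/data01
# add a matching /etc/fstab entry so the mount persists across reboots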

The next step is to install and configure Ansible. First off, create an ansible user on both the orchestration host and the target host and sync SSH keys between the two (there’s an Ansible module that handles SSH key exchange, which I will cover at some stage), like so:

on both VMs:

useradd ansible
passwd ansible

# generate pub and priv keys ....
ssh-keygen -t rsa

# If using StrictModes (the default) in the sshd_config file,
# ensure correct permissions on the .ssh directory and files:

chmod 700 ~/.ssh 
chmod 600 ~/.ssh/authorized_keys
[ansible@elkhost01 ~]$ ls -l ~/.ssh
total 12
-rw-------. 1 ansible ansible 404 Oct 1 13:38 authorized_keys
-rw-------. 1 ansible ansible 1675 Oct 1 13:31 id_rsa
-rw-------. 1 ansible ansible 402 Oct 1 13:31 id_rsa.pub

# Exchange public keys (copy into the remote host's authorized_keys file)
# for passwordless access:

[ansible@ansible-host01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 10.68.64.117
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ansible@10.68.64.117's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.68.64.117'"
and check to make sure that only the key(s) you wanted were added.

[ansible@ansible-host01 ~]$ 

[ansible@ansible-host01 ~]$ ssh 10.68.64.117
Last login: Thu Oct 1 13:38:35 2015 from 10.68.64.113
[ansible@elkhost01 ~]$

Once you have passwordless SSH configured between your hosts, go ahead and install Ansible on the orchestration host:

# yum install ansible -y
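
A quick sanity check of the install before moving on:

# confirm the installation and report the version in use
ansible --version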

Once installed, there are a few post-install steps and tests to make sure that Ansible is working. First off, set up an Ansible hosts inventory file that will eventually contain all the hostnames, broken out by deployment type. The default location for this file is /etc/ansible/hosts; in this instance I have chosen a non-standard name/location in order to keep my hosts file within my proposed playbook, which means pointing ansible and ansible-playbook at it with -i ./production.

[ansible@ansible-host01 elk]$ pwd
/home/ansible/elk
[ansible@ansible-host01 elk]$ cat production
# file: production

[elastic-hosts]
10.68.64.117

[kibana-hosts]
10.68.64.117

[nginx-hosts]
10.68.64.126

And if the passwordless SSH setup is correct, we can test as follows:

[ansible@ansible-host01 elk]$ ansible all -i ./production -m ping
10.68.64.117 | success >> {
 "changed": false,
 "ping": "pong"
}

Ansible machine configuration is done via playbooks, which are written in YAML. There’s a great best-practice guide here; I have followed its recommendations for the playbook directory layout below:

elk
├── elastic.yml
├── group_vars
├── host_vars
├── kibana.yml
├── production 
├── roles
│   ├── common
│   │   ├── files
│   │   ├── handlers
│   │   ├── tasks
│   │   │   └── main.yml
│   │   ├── templates
│   │   └── vars
│   │       └── main.yml
│   ├── elastic
│   │   ├── files
│   │   │   └── elasticsearch.repo
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   ├── templates
│   │   │   ├── elasticsearch.default.j2
│   │   │   ├── elasticsearch.in.sh.j2
│   │   │   └── elasticsearch.yml.j2
│   │   └── vars
│   │       └── main.yml
│   └── kibana
│       ├── files
│       ├── handlers
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       ├── templates
│       │   └── kibana4.service.j2
│       └── vars
│           └── main.yml
└── site.yml

I am going to cover the individual roles for Elasticsearch, Logstash and Kibana in subsequent posts. For now, there’s a main site-wide playbook:

[ansible@ansible-host01 elk]$ cat site.yml
---
# file: site.yml
- include: elastic.yml
- include: kibana.yml
- include: logstash.yml
#- include: log-forwarder.yml
#- include: redis.yml
#- include: nginx.yml

This is then broken up into individual service-specific playbooks:

[ansible@ansible-host01 elk]$ cat elastic.yml
---
#file: elastic.yml
- hosts: elastic-hosts
  roles:
    - common
    - elastic
[ansible@ansible-host01 elk]$ cat kibana.yml
---
#file: kibana.yml
- hosts: kibana-hosts
  roles:
    - kibana

I will discuss the individual roles and their associated tasks next time. For now, this should be enough to get basic Ansible functionality going.