Ansible issues

  1. "is not a valid attribute for a Play"
    Whenever you get the above error, first cross-check that the Ansible attribute you have mentioned is spelled correctly. If it is correct, then the issue is probably that you have created tasks as follows:

    ---
       - vars_prompt:
            - name: "var1"
              prompt: "Please pass variable"
              private: no
    
       - fail: msg="var1 is not passed or blank"
         when: var1 is undefined or ( var1 is defined and var1 == "" )

    when it should be as follows:

    ---
    - hosts: all
      vars_prompt:
        - name: "var1"
          prompt: "Please pass variable"
          private: no

      tasks:
        - fail: msg="var1 is not passed or blank"
          when: var1 is undefined or ( var1 is defined and var1 == "" )

    The first example is just a list of tasks. It is not a valid playbook because it is missing a hosts declaration and the module calls are not under a tasks section.

  2. ERROR! conflicting action statements: fail, command
    I get this error if I have a task as follows:

        - name: deploy
          win_get_url:
            url: 'http://server_ip/builds/build.zip'
            dest: 'D:\build.zip'
          win_unzip:
            src: D:\build.zip
            dest: D:\

    You cannot have multiple actions listed inside a single task like this. Instead you need to do this:

        - name: deploy get url
          win_get_url:
             url: 'http://server_ip/builds/build.zip'
             dest: 'D:\build.zip'
        - name: deploy unzip
          win_unzip:
             src: D:\build.zip
             dest: D:\
  3. Ansible task to check API status
    Here I am checking ES cluster health:

        - name: Get ES cluster health
          uri:
            url: http://{{inventory_hostname}}:9200/_cluster/health
            return_content: yes
          register: cluster_status
    
        - set_fact:
            es_cluster_health: "{{ cluster_status.content | from_json }}"
    
        - fail: msg="ES cluster not healthy"
          when: "es_cluster_health.status != 'yellow'"

    You can compare the status with any string you want. Here I am comparing it with the string "yellow", so the task fails unless the cluster status is exactly yellow.
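
    If you also want a green cluster to pass, a variant (just a sketch) is to compare against a list of acceptable statuses:

        - fail: msg="ES cluster not healthy: {{ es_cluster_health.status }}"
          when: es_cluster_health.status not in ['green', 'yellow']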


SaltStack formulas:

  1. Add all the configuration defined in pillar into the target file:

{%- if salt['pillar.get']('elasticsearch:config') %}
/etc/elasticsearch/elasticsearch.yml:
  file.managed:
    - source: salt://elasticsearch/files/elasticsearch.yml
    - user: root
    - template: jinja
    - require:
      - sls: elasticsearch.pkg
    - context:
        config: {{ salt['pillar.get']('elasticsearch:config', '{}') }}
{%- endif %}
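
The matching pillar data and target template could look like this (a sketch with illustrative values; your keys will differ):

# pillar/elasticsearch.sls
elasticsearch:
  config:
    cluster.name: my-cluster
    network.host: 0.0.0.0

# elasticsearch/files/elasticsearch.yml (Jinja): dump each key from the passed-in context
{%- for key, value in config.items() %}
{{ key }}: {{ value }}
{%- endfor %}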

2. Create multiple directories if they do not exist


{% for dir in (data_dir, log_dir) %}
{% if dir %}
{{ dir }}:
  file.directory:
    - user: elasticsearch
    - group: elasticsearch
    - mode: 0700
    - makedirs: True
    - require_in:
      - service: elasticsearch
{% endif %}
{% endfor %}

3. Retrieve a value from pillar:


{% set data_dir = salt['pillar.get']('elasticsearch:config:path.data') %}
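
pillar.get also takes a default as its second argument, which is safer when the key might be absent (the path below is only an illustrative default):

{% set data_dir = salt['pillar.get']('elasticsearch:config:path.data', '/var/lib/elasticsearch') %}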

4. Include a new state in an existing state, or add a new state:

a. Create/edit the init.sls file and add the following lines:

include:
  - elasticsearch.repo
  - elasticsearch.pkg

5. Append an iptables rule:

iptables_elasticsearch_rest_api:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

(This appends the rule at the end of the chain. To insert the rule at a specific position instead, use the iptables.insert module, as shown next.)

6. Insert iptables rule:

iptables_elasticsearch_rest_api:
  iptables.insert:
    - position: 2
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

7. Replace variables in a Jinja-templated file with values from pillar

/etc/elasticsearch/jvm.options:
  file.managed:
    - source: salt://elasticsearch/files/jvm.options
    - user: root
    - group: elasticsearch
    - mode: 0660
    - template: jinja
    - watch_in:
      - service: elasticsearch_service
    - context:
        jvm_opts: {{ salt['pillar.get']('elasticsearch:jvm_opts', '{}') }}

Then in elasticsearch/files/jvm.options add:

{% set heap_size = jvm_opts['heap_size'] %}
-Xms{{ heap_size }}
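
The corresponding pillar entry would look like this (illustrative value):

elasticsearch:
  jvm_opts:
    heap_size: 2g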

8. Install Elasticsearch at the version declared in pillar

elasticsearch:
  #Define the major and minor version for ElasticSearch
  version: [5, 5] 

Then in pkg.sls you can install the package as follows:

include:
  - elasticsearch.repo

{% from "elasticsearch/map.jinja" import elasticsearch_map with context %}
{% from "elasticsearch/settings.sls" import elasticsearch with context %}

## Install ElasticSearch pkg with desired version
elasticsearch_pkg:
  pkg.installed:
    - name: {{ elasticsearch_map.pkg }}
    {% if elasticsearch.version %}
    - version: {{ elasticsearch.version[0] }}.{{ elasticsearch.version[1] }}*
    {% endif %}
    - require:
      - sls: elasticsearch.repo
    - failhard: True

failhard: True makes the state run stop immediately if there is any error installing Elasticsearch.
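
For reference, minimal versions of the two imported files could look like this (a sketch; the real formula files may differ):

# elasticsearch/map.jinja
{% set elasticsearch_map = salt['grains.filter_by']({
  'Debian': {'pkg': 'elasticsearch'},
  'RedHat': {'pkg': 'elasticsearch'},
}) %}

# elasticsearch/settings.sls
{% set elasticsearch = salt['pillar.get']('elasticsearch', {}) %}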

9. Reload the systemd daemon after a change in the elasticsearch.service file

elasticsearch_daemon_reload:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /usr/lib/systemd/system/elasticsearch.service
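
The onchanges requisite assumes a file state with that ID exists elsewhere in the formula, e.g. (sketch):

/usr/lib/systemd/system/elasticsearch.service:
  file.managed:
    - source: salt://elasticsearch/files/elasticsearch.service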

10. Install the plugins mentioned in pillar

{% for name, repo in plugins_pillar.items() %}
elasticsearch-{{ name }}:
  cmd.run:
    - name: /usr/share/elasticsearch/bin/{{ plugin_bin }} install -b {{ repo }}
    - require:
      - sls: elasticsearch.install
    - unless: test -x /usr/share/elasticsearch/plugins/{{ name }}
{% endfor %}
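
plugins_pillar and plugin_bin are assumed to be defined earlier in the state file, e.g. (sketch):

{% set plugins_pillar = salt['pillar.get']('elasticsearch:plugins', {}) %}
{% set plugin_bin = 'elasticsearch-plugin' %}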

11. Enable the elasticsearch service and restart it automatically after config file changes.

elasticsearch_service:
  service.running:
    - name: elasticsearch
    - enable: True
    - watch:
      - file: /etc/elasticsearch/elasticsearch.yml
      - file: /etc/elasticsearch/jvm.options
      - file: /usr/lib/systemd/system/elasticsearch.service
    - require:
      - pkg: elasticsearch
    - failhard: True

12. Custom error if no firewall package is set

firewall_error:
  test.fail_without_changes:
    - name: "Please set firewall package as iptables or firewalld"
    - failhard: True
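
On its own this state always fails, so it only makes sense behind a Jinja guard, e.g. (a sketch, assuming the package name lives under a hypothetical firewall:package pillar key):

{% if salt['pillar.get']('firewall:package') not in ('iptables', 'firewalld') %}
firewall_error:
  test.fail_without_changes:
    - name: "Please set firewall package as iptables or firewalld"
    - failhard: True
{% endif %}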

13. Install openjdk

{% set settings = salt['grains.filter_by']({
  'Debian': {
    'package': 'openjdk-8-jdk',
  },
  'RedHat': {
    'package': 'java-1.8.0-openjdk',
  },
}) %}

## Install Openjdk
install_openjdk:
  pkg:
    - installed
    - name: {{ settings.package }}

14. Install package firewalld

firewalld_install:
  pkg.installed:
    - name: firewalld

15. Adding firewall rules

elasticsearch_firewalld_rules:
  firewalld.present:
    - name: public
    - ports:
      - 22/tcp
      - 9200/tcp
      - 9300/tcp
    - onlyif:
      - rpm -q firewalld
    - require:
      - service: firewalld

16. Enable and start firewalld service

firewalld:
  service.running:
    - enable: True
    - reload: True
    - require:
      - pkg: firewalld_install

ElasticSearch Issues

  • java.lang.IllegalArgumentException: unknown setting [node.rack] please check that any required plugins are installed, or check the breaking changes documentation for removed settings

Node-level attributes used for allocation filtering, forced awareness or other node identification/grouping must be prefixed with node.attr. In previous versions it was possible to specify node attributes with the node. prefix. All node attributes except node.master, node.data and node.ingest must be moved to the new node.attr. namespace.
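
For example, in elasticsearch.yml:

# Before (ES 2.x)
node.rack: r1

# After (ES 5.x+)
node.attr.rack: r1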

  • Unknown setting mlockall

Replace bootstrap.mlockall with bootstrap.memory_lock in elasticsearch.yml:
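
# Before (ES 2.x)
bootstrap.mlockall: true

# After (ES 5.x+)
bootstrap.memory_lock: true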

  • Unable to lock JVM Memory: error=12, reason=Cannot allocate memory

Edit /etc/security/limits.conf and add the following lines:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Edit /usr/lib/systemd/system/elasticsearch.service and uncomment the line:

LimitMEMLOCK=infinity

Execute the following commands:

systemctl daemon-reload

systemctl start elasticsearch
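
You can verify that the memory lock took effect via the nodes info API:

curl 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'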

  • Elasticsearch cluster health “red”: “unassigned_shards”

Elasticsearch's cat API will tell you which shards are unassigned, and why:

curl -XGET 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

Each row lists the name of the index, the shard number, whether it is a primary (p) or replica (r) shard, and the reason it is unassigned:

constant-updates        0 p UNASSIGNED NODE_LEFT node_left[NODE_NAME]

If the unassigned shards belong to an index you thought you deleted already, or an outdated index that you don’t need anymore, then you can delete the index to restore your cluster status to green:

curl -XDELETE 'localhost:9200/index_name/'
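
If the unassigned shards belong to an index you still need, you can ask Elasticsearch to retry their allocation (ES 5.x+):

curl -XPOST 'localhost:9200/_cluster/reroute?retry_failed=true'
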
  • ElasticSearch nodes not showing hardware metrics

Execute the following command:

curl localhost:9200/_nodes/stats?pretty

It will show you the root cause of the error. If the error is:

"failures" : [
  {
    "type" : "failed_node_exception",
    "reason" : "Failed node [3kOQUA2IQ-mnD74ER3O6SQ]",
    "caused_by" : {
      "type" : "illegal_state_exception",
      "reason" : "environment is not locked",
      "caused_by" : {
        "type" : "no_such_file_exception",
        "reason" : "/opt/apps/elasticsearch/nodes/0/node.lock"
      }
    }
  }
]

Then just restart the elasticsearch service. This error occurs when the data directory is deleted while Elasticsearch is still running.

  • Elasticsearch service does not start and no logs are captured in elasticsearch.log
    The cause can usually be found in /var/log/messages; most often Java is not installed or JAVA_HOME is not set.
    The issue might also be caused by improper JVM settings.
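
    On systemd-based machines the service's own output is also worth checking:

        journalctl -u elasticsearch --no-pager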

 

Elasticsearch Queries

  1. Create indices
curl -XPUT 'localhost:9200/twitter?pretty' -H 'Content-Type: application/json' -d'
{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    }
  }
}

2. Search

curl -XGET 'localhost:9200/sw/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "_source": ["gender", "height"]
}
'

3. Creating an index and adding documents to it

curl -XPUT 'localhost:9200/my_index?pretty' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "my_type": {
      "properties": {
        "user": {
          "type": "nested"
        }
      }
    }
  }
}
'
curl -XPUT 'localhost:9200/my_index/my_type/1?pretty' -H 'Content-Type: application/json' -d'
{
  "group" : "fans",
  "user" : [
    {
      "first" : "John",
      "last" : "Smith"
    },
    {
      "first" : "Alice",
      "last" : "White"
    }
  ]
}
'

4. Must match

curl -XGET 'localhost:9200/my_index/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "nested": {
      "path": "user",
      "query": {
        "bool": {
          "must": [
            { "match": { "user.first": "Alice" }},
            { "match": { "user.last": "Smith" }}
          ]
        }
      }
    }
  }
}
'

Because the query is nested, both match clauses must be satisfied within a single user object; with the documents above (John Smith and Alice White) this query therefore returns no hits.

5. Highlight

curl -XGET 'localhost:9200/my_index/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "nested": {
      "path": "user",
      "query": {
        "bool": {
          "must": [
            { "match": { "user.first": "Alice" }},
            { "match": { "user.last":  "White" }}
          ]
        }
      },
      "inner_hits": {
        "highlight": {
          "fields": {
            "user.first": {}
          }
        }
      }
    }
  }
}
'

6. To get all records:
curl -XGET 'localhost:9200/_search?size=100&pretty=true'

7. Match all


curl -XGET 'localhost:9200/foo/_search?size=NO_OF_RESULTS' -H 'Content-Type: application/json' -d'
{
  "query" : {
    "match_all" : {}
  }
}'

8. This example does a match_all and returns documents 11 through 20


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"from": 10,
"size": 10
}
'

9. This example does a match_all and sorts the results by account balance in descending order and returns the top 10 (default size) documents


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"sort": { "balance": { "order": "desc" } }
}
'

10. This example shows how to return two fields, account_number and balance (inside of _source), from the search


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_all": {} },
"_source": ["account_number", "balance"]
}
'

11. This example returns the account numbered 20


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "account_number": 20 } }
}
'

12. This example returns all accounts containing the term “mill” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "address": "mill" } }
}
'

13. This example returns all accounts containing the term “mill” or “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "address": "mill lane" } }
}
'

14. This example is a variant of match (match_phrase) that returns all accounts containing the phrase “mill lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match_phrase": { "address": "mill lane" } }
}
'

15. This example composes two match queries and returns all accounts containing “mill” and “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "address": "mill" } },
        { "match": { "address": "lane" } }
      ]
    }
  }
}
'

16. In contrast, this example composes two match queries and returns all accounts containing “mill” or “lane” in the address


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "should": [
        { "match": { "address": "mill" } },
        { "match": { "address": "lane" } }
      ]
    }
  }
}
'

17. This example returns all accounts of anybody who is 40 years old but doesn’t live in ID

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": [
        { "match": { "age": "40" } }
      ],
      "must_not": [
        { "match": { "state": "ID" } }
      ]
    }
  }
}
'

18. This example uses a bool query to return all accounts with balances between 20000 and 30000


curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": {
    "bool": {
      "must": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}
'

19. To start with, this example groups all the accounts by state, and then returns the top 10 (default) states sorted by count descending (also default)

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_state": {
      "terms": {
        "field": "state.keyword"
      }
    }
  }
}
'

20. Building on the previous aggregation, let’s now sort on the average balance in descending order

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_state": {
      "terms": {
        "field": "state.keyword",
        "order": {
          "average_balance": "desc"
        }
      },
      "aggs": {
        "average_balance": {
          "avg": {
            "field": "balance"
          }
        }
      }
    }
  }
}
'

21. This example demonstrates how we can group by age brackets (ages 20-29, 30-39, and 40-49), then by gender, and then finally get the average account balance, per age bracket, per gender

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "group_by_age": {
      "range": {
        "field": "age",
        "ranges": [
          {
            "from": 20,
            "to": 30
          },
          {
            "from": 30,
            "to": 40
          },
          {
            "from": 40,
            "to": 50
          }
        ]
      },
      "aggs": {
        "group_by_gender": {
          "terms": {
            "field": "gender.keyword"
          },
          "aggs": {
            "average_balance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      }
    }
  }
}
'

22. Assuming the data consists of documents representing exam grades (between 0 and 100) of students, we can average their scores with

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "avg_grade" : { "avg" : { "field" : "grade" } }
    }
}
'

23. Multiply each grade by 1.2, then take the average

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "avg_corrected_grade" : {
            "avg" : {
                "field" : "grade",
                "script" : {
                    "lang": "painless",
                    "inline": "_value * params.correction",
                    "params" : {
                        "correction" : 1.2
                    }
                }
            }
        }
    }
}
'

24. Documents without a value in the grade field will fall into the same bucket as documents that have the value 10

curl -XPOST 'localhost:9200/exams/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "grade_avg" : {
            "avg" : {
                "field" : "grade",
                "missing": 10
            }
        }
    }
}
'

25. Distinct value count (cardinality) for balance

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "type_count" : {
            "cardinality" : {
                "field" : "balance"
            }
        }
    }
}
'
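
The cardinality aggregation is approximate; you can trade memory for accuracy with the precision_threshold option:

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "type_count" : {
            "cardinality" : {
                "field" : "balance",
                "precision_threshold" : 100
            }
        }
    }
}
'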

26. Use of an inline painless script that concatenates the type and promoted values before counting distinct combinations

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "type_promoted_count" : {
            "cardinality" : {
                "script": {
                    "lang": "painless",
                    "inline": "doc[\u0027type\u0027].value + \u0027 \u0027 + doc[\u0027promoted\u0027].value"
                }
            }
        }
    }
}
'

27. Extended stats for balance

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "grades_stats" : { "extended_stats" : { "field" : "balance" } }
    }
}
'

28. Geopoint and geo centroid example

curl -XPUT 'localhost:9200/museums' -H 'Content-Type: application/json' -d'
{
    "mappings": {
        "doc": {
            "properties": {
                "location": {
                    "type": "geo_point"
                }
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/doc/_bulk?refresh' -H 'Content-Type: application/json' -d'
{"index":{"_id":1}}
{"location": "52.374081,4.912350", "name": "NEMO Science Museum"}
{"index":{"_id":2}}
{"location": "52.369219,4.901618", "name": "Museum Het Rembrandthuis"}
{"index":{"_id":3}}
{"location": "52.371667,4.914722", "name": "Nederlands Scheepvaartmuseum"}
{"index":{"_id":4}}
{"location": "51.222900,4.405200", "name": "Letterenhuis"}
{"index":{"_id":5}}
{"location": "48.861111,2.336389", "name": "Musée du Louvre"}
{"index":{"_id":6}}
{"location": "48.860000,2.327000", "name": "Musée dOrsay"}'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "query" : {
        "match" : { "name" : "musée" }
    },
    "aggs" : {
        "viewport" : {
            "geo_bounds" : {
                "field" : "location",
                "wrap_longitude" : true
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "centroid" : {
            "geo_centroid" : {
                "field" : "location"
            }
        }
    }
}
'

curl -XPOST 'localhost:9200/museums/_search?size=0' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "cities" : {
            "terms" : { "field" : "city.keyword" },
            "aggs" : {
                "centroid" : {
                    "geo_centroid" : { "field" : "location" }
                }
            }
        }
    }
}
'

(Note: this last aggregation assumes the documents also carry a city field; the sample documents indexed above do not.)

29. Max balance

curl -XPOST 'localhost:9200/bank/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "max_price" : { "max" : { "field" : "balance" } }
    }
}
'

30. Min price

curl -XPOST 'localhost:9200/sales/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs" : {
        "min_price" : { "min" : { "field" : "price" } }
    }
}
'

31. Percentiles

{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time"
            }
        }
    }
}
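
By default this returns the [1, 5, 25, 50, 75, 95, 99] percentiles; you can request specific ones with the percents parameter:

{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time",
                "percents" : [95, 99, 99.9]
            }
        }
    }
}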

32. Percentile ranks for specific values

curl -XPOST 'localhost:9200/bank/account/_search?size=0&pretty' -H 'Content-Type: application/json' -d'
{
    "aggs": {
        "balance_outlier": {
            "percentile_ranks": {
                "field": "balance",
                "values": [25000, 50000],
                "keyed": false
            }
        }
    }
}
'

33. Sum of hat prices

{
    "aggs" : {
        "hat_prices" : { "sum" : { "field" : "price" } }
    }
}

34. Sort by call_duration in descending order

curl -u elastic:changeme -XGET 'localhost:9200/index-alias2-events-2015.01.01-00/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} },
  "sort": { "call_duration": { "order": "desc" } }
}
'