Sphinx search issues

    1. Access sphinx database:
      The sphinx indexes can be accessed with the following command:
      mysql -P 9306 -h 0
      Execute show tables;
      This will display the indexes. To see the data in the index execute:
      select * from index_name;
    2. WARNING: Attribute count is 0: switching to none docinfo
      Add the following to sphinx.conf source configuration.
      sql_attr_string = title # will be stored but will not be indexed
    3. ERROR: duplicate attribute name
      Check that you do not have sql_attr_* and sql_field_* directives pointing to the same column in sphinx.conf.
      If you want the column to be both a field and an attribute, declare it as sql_field_string; otherwise declare it as sql_attr_string.
    4. query error: no field 'first_name' found in schema
      Add the following in sphinx.conf
      sql_field_string = title # will be both indexed and stored
    5. Overriding sphinx.conf settings with SphinxQL in Django:
      Add the following in your settings.py

      'index_params': {
          'type': 'plain',
          'charset_type': 'utf-8',
      },
      'searchd_params': {
          'listen': '9306:mysql41',
          'pid_file': os.path.join(INDEXES['sphinx_path'], 'searchd.pid'),
      },
    6. ERROR 1064 (42000): index : fullscan requires extern docinfo
      Add the following in sphinx.conf in the index section:
      docinfo = extern
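The directives from items 2, 4 and 6 typically live together in one source/index pair. A minimal sketch of how they fit into sphinx.conf (the names src_main and idx_main, the path, and the column names are hypothetical):

```ini
source src_main
{
    # sql_query and connection settings omitted
    sql_field_string = title    # will be both indexed and stored (item 4)
    sql_attr_string  = author   # stored only, not full-text indexed (item 2)
}

index idx_main
{
    source  = src_main
    path    = /var/lib/sphinx/idx_main
    docinfo = extern            # required for fullscan queries (item 6)
}
```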



Ansible: Dynamic fact

The following shows how to set a dynamic fact, i.e. a fact whose name itself is a variable: both the key and the value are computed from variables.

- set_fact:
   {"{{ groups['nginx'][groups['nodejs'].index(inventory_hostname)] }}":"{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"}

Here we are setting a fact whose key is the host in nginx group with same index as current host in the nodejs group. We are assigning it the value of IP address of current host.

You can print it as follows

- name: print
  debug:
    msg: "{{ hostvars[groups['nodejs'][groups['nginx'].index(inventory_hostname)]][groups['nginx'][groups['nodejs'].index(inventory_hostname)]] }}"
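The group-index pairing behind the set_fact expression can be simulated in plain Python. A minimal sketch, with hypothetical host names and addresses standing in for real inventory data:

```python
# Hypothetical inventory: two nginx hosts paired with two nodejs hosts.
groups = {
    "nginx": ["nginx-1", "nginx-2"],
    "nodejs": ["node-1", "node-2"],
}
hostvars = {
    "node-1": {"ansible_eth0": {"ipv4": {"address": "10.0.0.1"}}},
    "node-2": {"ansible_eth0": {"ipv4": {"address": "10.0.0.2"}}},
}

def dynamic_fact(inventory_hostname):
    """Return the (key, value) pair the set_fact task would create.

    Key: the nginx host at the same index as the current host in nodejs.
    Value: the current host's eth0 IPv4 address.
    """
    idx = groups["nodejs"].index(inventory_hostname)
    key = groups["nginx"][idx]
    value = hostvars[inventory_hostname]["ansible_eth0"]["ipv4"]["address"]
    return key, value

print(dynamic_fact("node-2"))  # -> ('nginx-2', '10.0.0.2')
```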

Ansible: access the index id of the host in group

The index can be accessed as:

- name: Print index
  debug:
    msg: "Index is {{ groups['nginx'].index(inventory_hostname) }}"

This will print the index id of the current host in the group “nginx”. It starts from 0.
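Jinja2's index() here is simply Python's list.index(). A quick sketch with a hypothetical group, showing the zero-based result and the ValueError raised when the host is not in the group:

```python
# Hypothetical members of the "nginx" group.
nginx_group = ["web-a", "web-b", "web-c"]

# Zero-based position, just like in the Ansible expression above.
print(nginx_group.index("web-b"))  # -> 1

# A host outside the group raises ValueError (in Ansible this surfaces
# as a template error), so guard with `in` if membership is uncertain.
host = "db-1"
if host in nginx_group:
    print(nginx_group.index(host))
else:
    print("host not in group")
```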

ElasticSearch Issues

  • java.lang.IllegalArgumentException: unknown setting [node.rack] please check that any required plugins are installed, or check the breaking changes documentation for removed settings

Node-level attributes used for allocation filtering, forced awareness or other node identification/grouping must be prefixed with node.attr. In previous versions it was possible to specify node attributes with the node. prefix. All node attributes except node.master, node.data and node.ingest must be moved into the new node.attr. namespace.

  • Unknown setting mlockall

Replace bootstrap.mlockall with bootstrap.memory_lock.
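Both of the above are one-line renames in elasticsearch.yml. A sketch of the before/after (the rack value r1 is hypothetical):

```yaml
# Before (2.x style -- rejected by 5.x and later):
# node.rack: r1
# bootstrap.mlockall: true

# After:
node.attr.rack: r1
bootstrap.memory_lock: true
```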

  • Unable to lock JVM Memory: error=12, reason=Cannot allocate memory

Edit:  /etc/security/limits.conf and add the following lines

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Edit /usr/lib/systemd/system/elasticsearch.service and uncomment the line:

LimitMEMLOCK=infinity

Execute the following commands:

systemctl daemon-reload

systemctl start elasticsearch

  • Elasticsearch cluster health “red”: “unassigned_shards”

Elasticsearch’s cat API will tell you which shards are unassigned, and why. Execute the following command:

curl -XGET 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

Each row lists the name of the index, the shard number, whether it is a primary (p) or replica (r) shard, and the reason it is unassigned:

constant-updates        0 p UNASSIGNED NODE_LEFT node_left[NODE_NAME]

If the unassigned shards belong to an index you thought you deleted already, or an outdated index that you don’t need anymore, then you can delete the index to restore your cluster status to green:

curl -XDELETE 'localhost:9200/index_name/'
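When the UNASSIGNED listing is large, it helps to group shards by reason before deciding what to delete or reroute. A minimal Python sketch over the sample row above (field order follows the h= parameter in the curl command; the second row is a hypothetical replica):

```python
from collections import Counter

# Sample _cat/shards output, fields: index shard prirep state unassigned.reason
cat_output = """\
constant-updates 0 p UNASSIGNED NODE_LEFT node_left[NODE_NAME]
constant-updates 0 r UNASSIGNED NODE_LEFT node_left[NODE_NAME]
"""

def unassigned_by_reason(text):
    """Count UNASSIGNED shards per unassignment reason."""
    reasons = Counter()
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 5 and fields[3] == "UNASSIGNED":
            reasons[fields[4]] += 1
    return reasons

print(unassigned_by_reason(cat_output))  # -> Counter({'NODE_LEFT': 2})
```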
  • ElasticSearch nodes not showing hardware metrics

Execute the following command:

curl localhost:9200/_nodes/stats?pretty

It will show you the error root cause. If the error is:

"failures" : [
  {
    "type" : "failed_node_exception",
    "reason" : "Failed node [3kOQUA2IQ-mnD74ER3O6SQ]",
    "caused_by" : {
      "type" : "illegal_state_exception",
      "reason" : "environment is not locked",
      "caused_by" : {
        "type" : "no_such_file_exception",
        "reason" : "/opt/apps/elasticsearch/nodes/0/node.lock"
      }
    }
  }
]

Then just restart the Elasticsearch service. This error occurs when the data directory is deleted while Elasticsearch is still running.

  • Elasticsearch service does not start and no logs are captured in elasticsearch.log
    The cause can be found in /var/log/messages; most often the issue is Java not being installed or JAVA_HOME not being set.
    The issue might also be caused by improper JVM settings.