Add a Linux slave node to Jenkins

As per best practices, the master node should be used only for storing configuration and backups; builds should run on slaves. In this blog post, we’ll cover the steps required for adding a slave node to a Jenkins farm. Most of these steps cover how to prepare a Linux slave server for Jenkins usage. The below commands are for a CentOS 7 server, but they can be easily translated to other Linux distros.

Install Java on the Slave server

Run the below commands on the server:

sudo yum install -y java-1.8.0-openjdk
sudo yum install -y java-1.8.0-openjdk-devel

You can check that the JVM is installed properly using java -version.

In order to help Java-based applications locate the Java virtual machine properly, you need to set two environment variables, JAVA_HOME and JRE_HOME:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre-1.8.0-openjdk

Add these two export commands to a profile script so that the variables are set in every login session, including after the system restarts.
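
For example, a minimal profile snippet, assuming the OpenJDK paths used above (adjust them to wherever your JVM actually lives):

# /etc/profile.d/java.sh -- sourced by login shells on CentOS 7
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre-1.8.0-openjdk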

Add an administrative service user to the Slave server

This is important from an administrative and auditing point of view. In our case, let’s say the service account name is jenkins. We’ll also create a user group named jenkins. For this, run the below command:

sudo useradd -U -s /bin/bash jenkins

Verify that the user and group were created by checking the /etc/passwd and /etc/group files. Now change the password associated with this account using:

sudo passwd jenkins

and enter the new password when asked. Now configure sudo privileges for this user by modifying /etc/sudoers:
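
For example, to give the jenkins user passwordless sudo, run sudo visudo and append a line like the one below. This is a permissive sketch; in a real environment you would restrict NOPASSWD to the specific commands your builds actually need.

# grants jenkins passwordless sudo for all commands (tighten as needed)
jenkins ALL=(ALL) NOPASSWD: ALL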

Configure SSH Key authentication for Jenkins

First we need to create the key pair on the master machine:

ssh-keygen -t rsa

Once you have entered the keygen command, you will be asked a few questions: where to save the key pair, and whether to use a passphrase. It’s up to you whether you want a passphrase. Using one does have its benefits: the security of a key, no matter how strongly encrypted, still depends on it not being visible to anyone else, and a passphrase protects the private key if the file is ever exposed. The only downside is having to type it in each time you use the key pair. For our purposes, we’ll leave the passphrase empty.

By default, the public key will be placed in /home/jenkins/.ssh/id_rsa.pub and the private key (identification) in /home/jenkins/.ssh/id_rsa.

Once the key pair is generated, it’s time to place the public key into the slave machine’s authorized_keys file with the ssh-copy-id command:

ssh-copy-id jenkins@10.20.3.132

Replace the username and IP address in the above command as per your environment. Also note that if you are doing this on a cloud virtual machine, do the same for the internal as well as the public IP of the machine.

You should see output similar to the below:

[jenkins@centos2 ~]$ ssh-copy-id jenkins@10.20.3.132
The authenticity of host '10.20.3.132 (10.20.3.132)' can't be established.
ECDSA key fingerprint is 53:c2:32:63:12:a2:8f:29:25:40:fa:0a:b1:d4:8c:f4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
jenkins@10.20.3.132's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'jenkins@10.20.3.132'"
and check to make sure that only the key(s) you wanted were added.

Now you can go ahead and log in to the machine 10.20.3.132 with your username, and you will not be prompted for a password.

Set up the relationship between slave and master

Log into the Jenkins master machine with administrative credentials. First go to Manage Jenkins -> Manage Plugins and install the ‘SSH Slaves’ plugin. Now go to Manage Jenkins -> Manage Nodes:

Select ‘New Node’ from the left pane, enter a name for the node (the slave machine’s IP address works well), select ‘Permanent Agent’ and click OK. This will ask for further details.

In ‘# of executors’, set the maximum number of concurrent builds that Jenkins may perform on this agent. Generally, this is set according to the number of processor cores available on the remote machine. For our purposes, we’ll set it to 10.

In ‘Remote root directory’, add the path of a directory dedicated to the agent, which in our case is /home/jenkins. In the launch method, select ‘Launch slave agents via SSH’ and add the slave machine’s IP address and the jenkins user’s credentials.
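
If you prefer to script this step instead of clicking through the UI, the Jenkins CLI can create the node from an XML definition. A sketch, where node.xml is a hypothetical file describing the agent (for example, one exported from an existing node with get-node) and admin:APITOKEN stands in for your credentials; the exact authentication flags depend on your Jenkins version:

# download the CLI jar from the master’s standard endpoint
curl -O http://<jenkins-master>:8080/jnlpJars/jenkins-cli.jar
# create the agent from a node definition read on stdin
java -jar jenkins-cli.jar -s http://<jenkins-master>:8080/ -auth admin:APITOKEN create-node 10.20.3.132 < node.xml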


Finally, click Save and then OK. It will take a few minutes to connect and bring the slave node online. To check the logs, click on the slave machine’s name and then click Log:

Once you click Log, you should see output like the below:

[02/14/17 07:39:01] [SSH] Opening SSH connection to 10.20.3.132:22.
[02/14/17 07:39:02] [SSH] Authentication successful.
[02/14/17 07:39:03] [SSH] The remote users environment is:
BASH=/usr/bin/bash
BASHOPTS=cmdhist:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="2" [2]="46" [3]="1" [4]="release" [5]="x86_64-redhat-linux-gnu")
BASH_VERSION='4.2.46(1)-release'
DIRSTACK=()
EUID=1001
GROUPS=()
HOME=/home/jenkins
HOSTNAME=centos2.local
HOSTTYPE=x86_64
ID=1001
IFS=$' \t\n'
LANG=en_US.UTF-8
LESSOPEN='||/usr/bin/lesspipe.sh %s'
LOGNAME=jenkins
MACHTYPE=x86_64-redhat-linux-gnu
MAIL=/var/mail/jenkins
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/bin:/usr/bin
PIPESTATUS=([0]="0")
PPID=16448
PS4='+ '
PWD=/home/jenkins
SELINUX_LEVEL_REQUESTED=
SELINUX_ROLE_REQUESTED=
SELINUX_USE_CURRENT_RANGE=
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
SSH_CLIENT='10.20.2.244 44392 22'
SSH_CONNECTION='10.20.2.244 44392 10.20.3.132 22'
TERM=dumb
UID=1001
USER=jenkins
XDG_RUNTIME_DIR=/run/user/1001
XDG_SESSION_ID=35
_=/etc/bashrc
command_not_found_handle () 
{ 
    local runcnf=1;
    local retval=127;
    [[ $- =~ i ]] || runcnf=0;
    [ ! -S /var/run/dbus/system_bus_socket ] && runcnf=0;
    [ ! -x /usr/libexec/packagekitd ] && runcnf=0;
    [ ${COMP_CWORD-} ] && runcnf=0;
    if [ $runcnf -eq 1 ]; then
        /usr/libexec/pk-command-not-found "$@";
        retval=$?;
    else
        local shell=`basename "$SHELL"`;
        echo "$shell: $1: command not found";
    fi;
    return $retval
}
[02/14/17 07:39:03] [SSH] Checking java version of java
[02/14/17 07:39:03] [SSH] java -version returned 1.8.0_121.
[02/14/17 07:39:03] [SSH] Starting sftp client.
[02/14/17 07:39:03] [SSH] Copying latest slave.jar...
[02/14/17 07:39:04] [SSH] Copied 715,860 bytes.
Expanded the channel window size to 4MB
[02/14/17 07:39:04] [SSH] Starting slave process: cd "/home/jenkins" && java  -jar slave.jar
channel started
Slave.jar version: 3.2
This is a Unix agent
Evacuated stdout
Agent successfully connected and online

If everything mentioned above is configured correctly, Jenkins should be able to connect to the slave machine successfully.


Git: Push a new repository

Create a new repository on the command line

echo "# helm-nexus" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/{username}/helm-nexus.git
git push -u origin master

…or push an existing repository from the command line

git remote add origin https://github.com/{username}/helm-nexus.git
git push -u origin master
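
Either way, you can confirm that the remote was wired up correctly before pushing:

git remote -v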

Install Kubernetes on CentOS/RHEL 7

Kubernetes is a cluster and orchestration engine for Docker containers. In other words, Kubernetes is an open-source tool used to orchestrate and manage Docker containers in a cluster environment. Kubernetes is also known as k8s; it was developed by Google and donated to the Cloud Native Computing Foundation.

A Kubernetes setup has one master node and multiple worker nodes; a cluster node is also known as a worker node or minion. From the master node we manage the cluster and its nodes using the ‘kubeadm’ and ‘kubectl’ commands.

Kubernetes can be installed and deployed using following methods:

  • Minikube (a single-node Kubernetes cluster)
  • Kops (a multi-node Kubernetes setup on AWS)
  • Kubeadm (a multi-node cluster on our own premises)

In this article we will install Kubernetes 1.7 on CentOS 7 / RHEL 7 with the kubeadm utility. In my setup I am using three CentOS 7 servers with minimal installation: one server acts as the master node and the other two servers act as minion or worker nodes.

(Diagram: Kubernetes setup with one master node and two worker nodes)

The following components will be installed on the master node:

  • API Server – provides the Kubernetes API using JSON/YAML over HTTP; the state of the API objects is stored in etcd
  • Scheduler – a program on the master node that performs scheduling tasks, such as launching containers on worker nodes based on resource availability
  • Controller Manager – its main job is to monitor replication controllers and create pods to maintain the desired state
  • etcd – a key-value database that stores the cluster’s configuration data and cluster state
  • kubectl utility – a command-line utility that connects to the API Server on port 6443; used by administrators to create pods, services, etc.

The following components will be installed on the worker nodes:

  • kubelet – an agent that runs on every worker node; it connects to Docker and takes care of creating, starting and deleting containers
  • kube-proxy – routes traffic to the appropriate container based on the IP address and port number of the incoming request; in other words, it performs port translation
  • Pod – a group of one or more containers deployed together on a single worker node or Docker host

Installation Steps for Kubernetes 1.7 on CentOS 7 / RHEL 7

Perform the following steps on the master node

Step 1: Disable SELinux & setup firewall rules

Log in to your Kubernetes master node, set the hostname and disable SELinux using the following commands:

~]# hostnamectl set-hostname 'k8s-master'
~]# exec bash
~]# setenforce 0
~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Set the following firewall rules.

[root@k8s-master ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10250/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10251/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10252/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10255/tcp
[root@k8s-master ~]# firewall-cmd --reload
[root@k8s-master ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
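
Note that the echo above does not survive a reboot. To persist the bridge setting across restarts, you can place it in a sysctl configuration file, for example:

[root@k8s-master ~]# echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8s.conf
[root@k8s-master ~]# sysctl --system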

Note: In case you don’t have your own DNS server, update the /etc/hosts file on the master and worker nodes:

192.168.1.30 k8s-master
192.168.1.40 worker-node1
192.168.1.50 worker-node2

Step 2: Configure Kubernetes Repository

Kubernetes packages are not available in the default CentOS 7 & RHEL 7 repositories. Use the below command to configure the package repository:

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
>         https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Step 3: Install Kubeadm and Docker

Once the package repositories are configured, run the below command to install the kubeadm and docker packages.

[root@k8s-master ~]# yum install kubeadm docker -y

Start and enable the docker and kubelet services:

[root@k8s-master ~]# systemctl restart docker && systemctl enable docker
[root@k8s-master ~]# systemctl  restart kubelet && systemctl enable kubelet

Step 4: Initialize Kubernetes Master with ‘kubeadm init’

Run the below command to initialize and set up the Kubernetes master.

[root@k8s-master ~]# kubeadm init

The output of the above command should look something like this:

(Screenshot: kubeadm init output, ending with the kubeadm join command and token to run on the worker nodes)

As we can see in the output, the Kubernetes master has been initialized successfully. Execute the below commands to start using the cluster as the root user.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Step 5: Deploy pod network to the cluster

Run the below commands to get the status of the cluster and pods:

(Screenshot: kubectl get nodes showing the master node not yet in Ready state)

To make the cluster status Ready and the kube-dns status Running, deploy the pod network so that containers on different hosts can communicate with each other. The pod network is the overlay network between the worker nodes.

Run the below command to deploy the network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Now run the following commands to verify the status:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE       VERSION
k8s-master   Ready     1h        v1.7.5
[root@k8s-master ~]# kubectl  get pods  --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                      1/1       Running   0          57m
kube-system   kube-apiserver-k8s-master            1/1       Running   0          57m
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          57m
kube-system   kube-dns-2425271678-044ww            3/3       Running   0          1h
kube-system   kube-proxy-9h259                     1/1       Running   0          1h
kube-system   kube-scheduler-k8s-master            1/1       Running   0          57m
kube-system   weave-net-hdjzd                      2/2       Running   0          7m
[root@k8s-master ~]#

Now let’s add the worker nodes to the Kubernetes cluster.

Perform the following steps on each worker node

Step 1: Disable SELinux & configure firewall rules on both the nodes

Before disabling SELinux, set the hostnames on the two nodes to ‘worker-node1’ and ‘worker-node2’ respectively:

~]# setenforce 0
~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
~]# firewall-cmd --permanent --add-port=10250/tcp
~]# firewall-cmd --permanent --add-port=10255/tcp
~]# firewall-cmd --permanent --add-port=30000-32767/tcp
~]# firewall-cmd --permanent --add-port=6783/tcp
~]# firewall-cmd  --reload
~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Step 2: Configure Kubernetes Repositories on both worker nodes

~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
>         https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Step 3: Install kubeadm and docker package on both nodes

[root@worker-node1 ~]# yum  install kubeadm docker -y
[root@worker-node2 ~]# yum  install kubeadm docker -y

Start and enable docker service

[root@worker-node1 ~]# systemctl restart docker && systemctl enable docker
[root@worker-node2 ~]# systemctl restart docker && systemctl enable docker

Step 4: Join the worker nodes to the master node

To join worker nodes to the master node, a token is required. When the Kubernetes master was initialized, the output included a kubeadm join command with a token. Copy that command and run it on both worker nodes.

[root@worker-node1 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443

The output of the above command should look something like this:

(Screenshot: kubeadm join output on worker-node1)

[root@worker-node2 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443

The output should look something like this:

(Screenshot: kubeadm join output on worker-node2)

Now verify the node status from the master node using the kubectl command:

[root@k8s-master ~]# kubectl get nodes
NAME           STATUS    AGE       VERSION
k8s-master     Ready     2h        v1.7.5
worker-node1   Ready     20m       v1.7.5
worker-node2   Ready     18m       v1.7.5
[root@k8s-master ~]#

As we can see, the master and worker nodes are in Ready status. This confirms that Kubernetes 1.7 has been installed successfully and that we have joined two worker nodes. Now we can create pods and services.

GoCD installation on CentOS 7

Installing the GoCD server using the package manager requires root access on the machine. Java version 8 is also required for the server to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD server.


RPM based distributions (i.e. RedHat/CentOS/Fedora)

The GoCD server RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell:

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk # at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute:

sudo yum install -y go-server

Alternatively, if you have the server RPM downloaded:

sudo yum install -y java-1.8.0-openjdk # at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-server-${version}.noarch.rpm

Managing the go-server service on Linux

To manage the go-server service, you may use the following command:

sudo /etc/init.d/go-server [start|stop|status|restart]

Once the installation is complete, the GoCD server will be started and it will print out the URL for the dashboard page. This will be http://localhost:8153/go.
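
A quick way to confirm the dashboard is up, assuming you are on the server itself (it can take a minute or two after the service starts):

curl -sI http://localhost:8153/go | head -n 1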

Location of GoCD server files

The GoCD server installs its files in the following locations on your filesystem:

/var/lib/go-server       #contains the binaries and database
/etc/go                  #contains the pipeline configuration files
/var/log/go-server       #contains the server logs
/usr/share/go-server     #contains the start script
/etc/default/go-server   #contains all the environment variables with default values. These variable values can be changed as per requirement.

Installing the GoCD agent on Linux

Installing the GoCD agent using the package manager requires root access on the machine. Java version 8 (the same version as the GoCD server) is also required for the agent to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD agent.

RPM based distributions (i.e. RedHat/CentOS/Fedora)

The GoCD agent RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell:

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk # at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute:

sudo yum install -y go-agent

Alternatively, if you have the agent RPM downloaded:

sudo yum install -y java-1.8.0-openjdk # at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-agent-${version}.noarch.rpm

Managing the go-agent service on Linux

To manage the go-agent service, you may use the following command:

sudo /etc/init.d/go-agent [start|stop|status|restart]

Configuring the go-agent

After installing the go-agent service, you must first configure it with the hostname (or IP address) of your GoCD server. To do this:

  1. Open /etc/default/go-agent in your favourite text editor.
  2. Change the IP address (127.0.0.1) in the line GO_SERVER_URL=https://127.0.0.1:8154/go to the hostname (or IP address) of your GoCD server.
  3. Save the file and exit your editor.
  4. Run /etc/init.d/go-agent [start|restart] to (re)start the agent.
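
Step 2 can also be scripted. Below is a minimal sketch using sed, assuming the file still contains the default GO_SERVER_URL line; gocd.example.com is a placeholder for your server’s hostname:

sudo sed -i 's|https://127.0.0.1:8154/go|https://gocd.example.com:8154/go|' /etc/default/go-agent
sudo /etc/init.d/go-agent restart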

Note: You can override the default environment for the GoCD agent by editing the file /etc/default/go-agent

GoCD is now installed. Open port 8153 and access the following URL in your browser:

http://<ip>:8153/go
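
On CentOS 7 with firewalld, opening the port could look like this:

sudo firewall-cmd --permanent --add-port=8153/tcp
sudo firewall-cmd --reload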

SaltStack formulas

1. Add all the configuration defined in pillar to the target file:

{%- if salt['pillar.get']('elasticsearch:config') %}
/etc/elasticsearch/elasticsearch.yml:
  file.managed:
    - source: salt://elasticsearch/files/elasticsearch.yml
    - user: root
    - template: jinja
    - require:
      - sls: elasticsearch.pkg
    - context:
        config: {{ salt['pillar.get']('elasticsearch:config', '{}') }}
{%- endif %}

2. Create multiple directories if they do not exist:

{% for dir in (data_dir, log_dir) %}
{% if dir %}
{{ dir }}:
  file.directory:
    - user: elasticsearch
    - group: elasticsearch
    - mode: 0700
    - makedirs: True
    - require_in:
      - service: elasticsearch
{% endif %}
{% endfor %}

3. Retrieve a value from pillar:

{% set data_dir = salt['pillar.get']('elasticsearch:config:path.data') %}
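
To verify what a minion actually sees for that key, you can query it from the Salt master (assuming a standard master/minion setup):

salt '*' pillar.get elasticsearch:config:path.data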

4. Include a new state in an existing state, or add a new state:

Create or edit the init.sls file and add the following lines:

include:
  - elasticsearch.repo
  - elasticsearch.pkg

5. Append an iptables rule:

iptables_elasticsearch_rest_api:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

(This appends the rule to the end of the chain; to insert it at a specific position instead, use the iptables.insert module, shown next.)

6. Insert an iptables rule:

iptables_elasticsearch_rest_api:
  iptables.insert:
    - position: 2
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - match:
      - state
      - tcp
      - comment
    - comment: "Allow ElasticSearch REST API port"
    - connstate: NEW
    - dport: 9200
    - proto: tcp
    - save: True

7. Replace variables in a managed file with values from pillar using a Jinja template:

/etc/elasticsearch/jvm.options:
  file.managed:
    - source: salt://elasticsearch/files/jvm.options
    - user: root
    - group: elasticsearch
    - mode: 0660
    - template: jinja
    - watch_in:
      - service: elasticsearch_service
    - context:
        jvm_opts: {{ salt['pillar.get']('elasticsearch:jvm_opts', '{}') }}

Then in elasticsearch/files/jvm.options add:

{% set heap_size = jvm_opts['heap_size'] %}
-Xms{{ heap_size }}

8. Install Elasticsearch at the version declared in pillar:

elasticsearch:
  #Define the major and minor version for ElasticSearch
  version: [5, 5] 

Then in pkg.sls you can install the package as follows:

include:
  - elasticsearch.repo

{% from "elasticsearch/map.jinja" import elasticsearch_map with context %}
{% from "elasticsearch/settings.sls" import elasticsearch with context %}

## Install ElasticSearch pkg with desired version
elasticsearch_pkg:
  pkg.installed:
    - name: {{ elasticsearch_map.pkg }}
    {% if elasticsearch.version %}
    - version: {{ elasticsearch.version[0] }}.{{ elasticsearch.version[1] }}*
    {% endif %}
    - require:
      - sls: elasticsearch.repo
    - failhard: True

Setting failhard: True ensures that the state run exits immediately if there is any error installing Elasticsearch.
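
Before applying the state for real, a dry run from the master shows what would change without installing anything; a sketch, assuming the state file lives at elasticsearch/pkg.sls:

salt '*' state.apply elasticsearch.pkg test=True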

9. Reload the systemd daemon after a change to the elasticsearch.service file:

elasticsearch_daemon_reload:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /usr/lib/systemd/system/elasticsearch.service

10. Install the plugins mentioned in pillar:

{# plugins_pillar and plugin_bin are assumed to be defined elsewhere in the formula; a sketch: #}
{% set plugins_pillar = salt['pillar.get']('elasticsearch:plugins', {}) %}
{% set plugin_bin = 'elasticsearch-plugin' %}
{% for name, repo in plugins_pillar.items() %}
elasticsearch-{{ name }}:
  cmd.run:
    - name: /usr/share/elasticsearch/bin/{{ plugin_bin }} install -b {{ repo }}
    - require:
      - sls: elasticsearch.install
    - unless: test -x /usr/share/elasticsearch/plugins/{{ name }}
{% endfor %}

11. Enable the Elasticsearch service and automatically restart it after file changes:

elasticsearch_service:
  service.running:
    - name: elasticsearch
    - enable: True
    - watch:
      - file: /etc/elasticsearch/elasticsearch.yml
      - file: /etc/elasticsearch/jvm.options
      - file: /usr/lib/systemd/system/elasticsearch.service
    - require:
      - pkg: elasticsearch_pkg
    - failhard: True

12. Raise a custom error if no firewall package is set:

firewall_error:
  test.fail_without_changes:
    - name: "Please set firewall package as iptables or firewalld"
    - failhard: True

13. Install OpenJDK:

{% set settings = salt['grains.filter_by']({
  'Debian': {
    'package': 'openjdk-8-jdk',
  },
  'RedHat': {
    'package': 'java-1.8.0-openjdk',
  },
}) %}

## Install Openjdk
install_openjdk:
  pkg:
    - installed
    - name: {{ settings.package }}

14. Install the firewalld package:

firewalld_install:
  pkg.installed:
    - name: firewalld

15. Add firewall rules:

elasticsearch_firewalld_rules:
  firewalld.present:
    - name: public
    - ports:
      - 22/tcp
      - 9200/tcp
      - 9300/tcp
    - onlyif:
      - rpm -q firewalld
    - require:
      - service: firewalld

16. Enable and start the firewalld service:

firewalld:
  service.running:
    - enable: True
    - reload: True
    - require:
      - pkg: firewalld_install