Add a Linux slave node in Jenkins

As per best practices, the master node should only be used for storing configuration and backups; builds should run on slaves. In this blog post, we'll go through the steps required for adding a slave node to a Jenkins farm. Most of these steps cover how to prepare a Linux slave server for Jenkins usage. The commands below are for a CentOS 7 server, but they can easily be translated to other Linux distros.

Install Java on the Slave server

Run the below command on the server:

sudo yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

(On a Debian/Ubuntu slave the equivalent would be sudo apt-get update && sudo apt-get install openjdk-8-jre openjdk-8-jdk.)

You can check whether the JVM is installed properly using java -version.

In order to help Java-based applications locate the Java virtual machine properly, you need to set two environment variables, JAVA_HOME and JRE_HOME:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre-1.8.0-openjdk

Edit the profile script and add these two export commands to it so that the variables are always available after the system restarts.
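One minimal way to do this (a sketch; the file name java.sh is just an example) is to drop the exports into a file under /etc/profile.d, which login shells on CentOS 7 source automatically:

# /etc/profile.d/java.sh -- sourced by login shells
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export JRE_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin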

Add administrative service user to the Slave server

This is important from an administrative and auditing point of view. In our case, let's say that the service account name is jenkins. We'll also create a user group named jenkins. For this, run the below command:

sudo useradd jenkins -U -s /bin/bash

Verify that the user and group are created by checking the /etc/passwd and /etc/group files. Now change the password associated with this account using:

sudo passwd jenkins

and enter the new password when asked. Now, configure sudo privileges for this user by modifying /etc/sudoers:
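A minimal sketch of such an entry (run visudo rather than editing /etc/sudoers directly; whether to allow passwordless sudo is your own policy decision):

# run 'sudo visudo' and add:
jenkins ALL=(ALL) NOPASSWD:ALL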

Configure SSH Key authentication for Jenkins

First we need to create the key pair on the master machine:

ssh-keygen -t rsa

Once you have entered the keygen command, you will get a few more questions about the file location for saving the key pair and about a passphrase. It's up to you whether you want to use a passphrase. Entering a passphrase does have its benefits: the security of a key, no matter how encrypted, still depends on it not being visible to anyone else. The only downside, of course, is having to type the passphrase each time you use the key pair. For our purposes, we'll leave the passphrase empty.

Below is a sample run from my lab machine:

The public key is now located in /home/jenkins/.ssh/id_rsa.pub. The private key (identification) is now located in /home/jenkins/.ssh/id_rsa.

Once the key pair is generated, it’s time to place the public key into the slave machine’s authorized_keys file with the ssh-copy-id command:

ssh-copy-id jenkins@10.20.3.132

You need to replace the username and IP address in the above command as per your environment. Also note that if you are doing this on a cloud virtual machine, do the same for the internal as well as the public IP of the machine.

You should see something like below output:

[jenkins@centos2 ~]$ ssh-copy-id jenkins@10.20.3.132
The authenticity of host '10.20.3.132 (10.20.3.132)' can't be established.
ECDSA key fingerprint is 53:c2:32:63:12:a2:8f:29:25:40:fa:0a:b1:d4:8c:f4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
jenkins@10.20.3.132's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'jenkins@10.20.3.132'"
and check to make sure that only the key(s) you wanted were added.

Now you can go ahead and log in to the machine 10.20.3.132 with this username and you will not be prompted for a password.
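As a side note, if ssh-copy-id is not available on the master, a manual equivalent is the following sketch (assuming the same jenkins user and IP as above):

cat ~/.ssh/id_rsa.pub | ssh jenkins@10.20.3.132 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"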

Setup relationship between slave and master

Log in to the Jenkins master machine with administrative credentials. First go to Manage Jenkins -> Manage Plugins and install the 'SSH Slaves Plugin'. Now go to Manage Jenkins -> Manage Nodes:

Select New Node from the left pane. Then enter the slave machine's IP address as the node name, select 'Permanent Agent' and click OK. This will ask for further details.

In '# of executors', set the maximum number of concurrent builds that Jenkins may perform on this agent. Generally, this is set as per the number of processor cores available on the remote machine. For our purposes, we'll set it to 10.

In 'Remote root directory', add the path of a directory dedicated to the agent, which in our case is /home/jenkins. In the launch method, select 'Launch slave agents via SSH' and add the slave machine's IP address and credentials.

These are going to be details for our case:

Finally, click Save and then OK. It'll take a few minutes to connect and bring the slave node online. To check the logs, click on the slave machine's name and then click Log:

Once you click logs, you should be able to see output like below:

[02/14/17 07:39:01] [SSH] Opening SSH connection to 10.20.3.132:22.
[02/14/17 07:39:02] [SSH] Authentication successful.
[02/14/17 07:39:03] [SSH] The remote users environment is:
BASH=/usr/bin/bash
BASHOPTS=cmdhist:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="2" [2]="46" [3]="1" [4]="release" [5]="x86_64-redhat-linux-gnu")
BASH_VERSION='4.2.46(1)-release'
DIRSTACK=()
EUID=1001
GROUPS=()
HOME=/home/jenkins
HOSTNAME=centos2.local
HOSTTYPE=x86_64
ID=1001
IFS=$' \t\n'
LANG=en_US.UTF-8
LESSOPEN='||/usr/bin/lesspipe.sh %s'
LOGNAME=jenkins
MACHTYPE=x86_64-redhat-linux-gnu
MAIL=/var/mail/jenkins
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/bin:/usr/bin
PIPESTATUS=([0]="0")
PPID=16448
PS4='+ '
PWD=/home/jenkins
SELINUX_LEVEL_REQUESTED=
SELINUX_ROLE_REQUESTED=
SELINUX_USE_CURRENT_RANGE=
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
SSH_CLIENT='10.20.2.244 44392 22'
SSH_CONNECTION='10.20.2.244 44392 10.20.3.132 22'
TERM=dumb
UID=1001
USER=jenkins
XDG_RUNTIME_DIR=/run/user/1001
XDG_SESSION_ID=35
_=/etc/bashrc
command_not_found_handle () 
{ 
    local runcnf=1;
    local retval=127;
    [[ $- =~ i ]] || runcnf=0;
    [ ! -S /var/run/dbus/system_bus_socket ] && runcnf=0;
    [ ! -x /usr/libexec/packagekitd ] && runcnf=0;
    [ ${COMP_CWORD-} ] && runcnf=0;
    if [ $runcnf -eq 1 ]; then
        /usr/libexec/pk-command-not-found "$@";
        retval=$?;
    else
        local shell=`basename "$SHELL"`;
        echo "$shell: $1: command not found";
    fi;
    return $retval
}
[02/14/17 07:39:03] [SSH] Checking java version of java
[02/14/17 07:39:03] [SSH] java -version returned 1.8.0_121.
[02/14/17 07:39:03] [SSH] Starting sftp client.
[02/14/17 07:39:03] [SSH] Copying latest slave.jar...
[02/14/17 07:39:04] [SSH] Copied 715,860 bytes.
Expanded the channel window size to 4MB
[02/14/17 07:39:04] [SSH] Starting slave process: cd "/home/jenkins" && java  -jar slave.jar
channel started
Slave.jar version: 3.2
This is a Unix agent
Evacuated stdout
Agent successfully connected and online

If everything mentioned above is configured correctly, it should be able to successfully connect to the slave machine.


Install Kubernetes on CentOS/RHEL 7

Kubernetes is a cluster and orchestration engine for Docker containers. In other words, Kubernetes is an open source tool used to orchestrate and manage Docker containers in a cluster environment. Kubernetes is also known as k8s; it was developed by Google and donated to the Cloud Native Computing Foundation.

In a Kubernetes setup we have one master node and multiple worker nodes. A cluster node is also known as a worker node or minion. From the master node we manage the cluster and its nodes using the 'kubeadm' and 'kubectl' commands.

Kubernetes can be installed and deployed using following methods:

  • Minikube (a single-node Kubernetes cluster)
  • Kops (multi-node Kubernetes setup on AWS)
  • Kubeadm (multi-node cluster on our own premises)

In this article we will install the latest version of Kubernetes, 1.7, on CentOS 7 / RHEL 7 with the kubeadm utility. In my setup I am taking three CentOS 7 servers with minimal installation. One server will act as the master node and the other two servers will be minion or worker nodes.

[Figure: Kubernetes setup diagram]

On the master node, the following components will be installed:

  • API Server – It provides the Kubernetes API using JSON / YAML over HTTP; the states of API objects are stored in etcd
  • Scheduler – A program on the master node which performs scheduling tasks such as launching containers on worker nodes based on resource availability
  • Controller Manager – Its main job is to monitor replication controllers and create pods to maintain the desired state
  • etcd – A key-value database. It stores the configuration data of the cluster and the cluster state.
  • Kubectl utility – A command line utility which connects to the API Server on port 6443. It is used by administrators to create pods, services etc.

On the worker nodes, the following components will be installed:

  • Kubelet – An agent which runs on every worker node; it connects to Docker and takes care of creating, starting and deleting containers.
  • Kube-Proxy – It routes traffic to the appropriate containers based on the IP address and port number of the incoming request. In other words, we can say it is used for port translation.
  • Pod – A pod can be defined as a multi-tier group of containers that are deployed on a single worker node or Docker host.

Installation Steps for Kubernetes 1.7 on CentOS 7 / RHEL 7

Perform the following steps on Master Node

Step 1: Disable SELinux & setup firewall rules

Log in to your Kubernetes master node, set the hostname, and disable SELinux using the following commands:

~]# hostnamectl set-hostname 'k8s-master'
~]# exec bash
~]# setenforce 0
~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Set the following firewall rules.

[root@k8s-master ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10250/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10251/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10252/tcp
[root@k8s-master ~]# firewall-cmd --permanent --add-port=10255/tcp
[root@k8s-master ~]# firewall-cmd --reload
[root@k8s-master ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Note: In case you don't have your own DNS server, update the /etc/hosts file on the master and worker nodes:

192.168.1.30 k8s-master
192.168.1.40 worker-node1
192.168.1.50 worker-node2
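Note that the echo into /proc/sys above lasts only until the next reboot. Optionally, to make the bridge setting persistent, you can drop it into a sysctl configuration file (a sketch; the file name k8s.conf is just an example):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system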

Step 2: Configure Kubernetes Repository

Kubernetes packages are not available in the default CentOS 7 & RHEL 7 repositories. Use the below command to configure the package repository.

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
>         https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master ~]#

Step 3: Install Kubeadm and Docker

Once the package repositories are configured, run the beneath command to install kubeadm and docker packages.

[root@k8s-master ~]# yum install kubeadm docker -y

Start and enable the docker and kubelet services:

[root@k8s-master ~]# systemctl restart docker && systemctl enable docker
[root@k8s-master ~]# systemctl  restart kubelet && systemctl enable kubelet

Step 4: Initialize Kubernetes Master with ‘kubeadm init’

Run the beneath command to initialize and set up the Kubernetes master.

[root@k8s-master ~]# kubeadm init

Output of above command would be something like below

[Screenshot: output of 'kubeadm init'; the output ends with the 'kubeadm join --token ...' command to be run on the worker nodes]

As we can see in the output, the Kubernetes master has been initialized successfully. Execute the beneath commands to use the cluster as the root user.

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Step 5: Deploy pod network to the cluster

Try to run the below commands to get the status of the cluster and pods.

[Screenshot: output of 'kubectl get nodes']
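Since the screenshot is not reproduced here, these are the commands being referred to; until the pod network is deployed, the master node typically shows NotReady and kube-dns stays pending:

[root@k8s-master ~]# kubectl get nodes
[root@k8s-master ~]# kubectl get pods --all-namespaces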

To make the cluster status Ready and the kube-dns status Running, deploy the pod network so that containers on different hosts can communicate with each other. The pod network is the overlay network between the worker nodes.

Run the beneath command to deploy network.

[root@k8s-master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@k8s-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
[root@k8s-master ~]#

Now run the following commands to verify the status

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE       VERSION
k8s-master   Ready     1h        v1.7.5
[root@k8s-master ~]# kubectl  get pods  --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                      1/1       Running   0          57m
kube-system   kube-apiserver-k8s-master            1/1       Running   0          57m
kube-system   kube-controller-manager-k8s-master   1/1       Running   0          57m
kube-system   kube-dns-2425271678-044ww            3/3       Running   0          1h
kube-system   kube-proxy-9h259                     1/1       Running   0          1h
kube-system   kube-scheduler-k8s-master            1/1       Running   0          57m
kube-system   weave-net-hdjzd                      2/2       Running   0          7m
[root@k8s-master ~]#

Now let’s add worker nodes to the Kubernetes master nodes.

Perform the following steps on each worker node

Step 1: Disable SELinux & configure firewall rules on both the nodes

Before disabling SELinux, set the hostname on both nodes as 'worker-node1' and 'worker-node2' respectively.

~]# setenforce 0
~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
~]# firewall-cmd --permanent --add-port=10250/tcp
~]# firewall-cmd --permanent --add-port=10255/tcp
~]# firewall-cmd --permanent --add-port=30000-32767/tcp
~]# firewall-cmd --permanent --add-port=6783/tcp
~]# firewall-cmd  --reload
~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

Step 2: Configure Kubernetes Repositories on both worker nodes

~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
>         https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> EOF

Step 3: Install kubeadm and docker package on both nodes

[root@worker-node1 ~]# yum  install kubeadm docker -y
[root@worker-node2 ~]# yum  install kubeadm docker -y

Start and enable docker service

[root@worker-node1 ~]# systemctl restart docker && systemctl enable docker
[root@worker-node2 ~]# systemctl restart docker && systemctl enable docker

Step 4: Now Join worker nodes to master node

To join worker nodes to the master node, a token is required. Whenever the Kubernetes master is initialized, we get the join command and token in its output. Copy that command and run it on both nodes.

[root@worker-node1 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443

Output of above command would be something like below

[Screenshot: output of 'kubeadm join' on worker-node1]

[root@worker-node2 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443

Output would be something like below

[Screenshot: output of 'kubeadm join' on worker-node2]

Now verify Nodes status from master node using kubectl command

[root@k8s-master ~]# kubectl get nodes
NAME           STATUS    AGE       VERSION
k8s-master     Ready     2h        v1.7.5
worker-node1   Ready     20m       v1.7.5
worker-node2   Ready     18m       v1.7.5
[root@k8s-master ~]#

As we can see, the master and worker nodes are in Ready status. This concludes that Kubernetes 1.7 has been installed successfully and that we have successfully joined two worker nodes. Now we can create pods and services.

Installing Kubernetes on your Windows with Minikube

Personally, I think if you are looking for a container management solution in today's world, you have to invest your time in Kubernetes (k8s). There is no doubt about that because of multiple factors. To the best of my understanding, these points include:

  • Kubernetes is Open Source
  • Great momentum in terms of activities & contribution at its Open Source Project
  • Decades of experience running its predecessor at Google
  • Support of multiple OS and infrastructure software vendors
  • Rate at which features are being released
  • Production readiness (Damn it, Pokemon Go met its scale due to Kubernetes)
  • Number of features available. Check out the list of features at the home page.

The general perception about a management solution like Kubernetes is that it requires quite a bit of setup to try it out locally. More than the time it takes to set up, you might probably get access to it only during the staging phase or later. Ideally you want a similar environment in your development setup too, so that you are as close as possible to what it takes to run your application. The implication is that you want it running on the laptop/desktop where you do your development.

This was the goal behind the minikube project and the team has put in fantastic effort to help us setup and run Kubernetes on our development machines. This is as simple and portable as it can get. The tagline of minikube project says it all: “Run Kubernetes locally”.

Side Note: The design of the minikube logo makes for interesting reading.

This post is going to take you through setting up Minikube on your Windows development machine and then taking it for a Hello World spin to see a local Kubernetes cluster in action. Along the way, I will highlight my environment and what I had to do to get the experimental build of minikube working on my Windows machine. Yes, it is experimental software, but it works!

If you are not on Windows, the instructions to setup minikube on either your Linux machine or Mac machine are also available here. Check it out. You can then safely skip over the setup and go to the section where we do a quick Hello World to test drive Kubernetes locally.

Keep in mind that Minikube gives you a single node cluster that is running in a VM on your development machine.

Of course, once you are done with what you see in this blog, I strongly recommend that you also look at Managed Container Orchestration solutions like Google Container Engine.

Let’s get started now with installation of minikube. But first, we must make sure that our development machine has some of the pre-requisites required to run it. Do not ignore that!

Using VirtualBox and not Hyper-V

VirtualBox and Hyper-V (which is available on Windows 10) do not make a happy pair, and you are bound to run into situations where the tools get confused. I preferred to use VirtualBox and avoid the esoteric command-line switches needed to enable creation of the underlying Docker hosts, etc.

To disable Hyper-V, go to Turn Windows features on or off and you will see a dialog with list of Windows features as shown below. Navigate to the Hyper-V section and disable it completely.

This will require a restart of the machine to take effect; on my machine, it even ended up doing a Windows Update, configuring it, and a good 10 minutes later it was back up.

Great! We have everything now to get going.

Development Machine Environment

I am assuming that you have a setup similar to this. I believe you should be fine on Windows 7 too; it would not have the Hyper-V stuff, instructions for which I will give in a while.

  • Windows 10 Laptop. VT-x/AMD-v virtualization must be enabled in BIOS.
  • Docker Toolbox v1.12.0. The toolbox sets up VirtualBox and I have gone with that.
  • kubectl command line utility. This is the CLI utility for the Kubernetes cluster and you need to install it and have it available in your PATH. To install it, open the following URL in your browser: http://storage.googleapis.com/kubernetes-release/release/v1.8.1/bin/windows/amd64/kubectl.exe. This will download the kubectl CLI executable. Please make it available in the environment PATH variable.

Note: kubectl versions are available at a generic location as per the following format: https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}

To find the latest kubectl version, go to this link: https://storage.googleapis.com/kubernetes-release/release/stable.txt

Minikube installation

The first step is to take the kubectl.exe file that you downloaded in the previous step and place that in the C:\ folder.

The next step is to download the minikube binary from the following location: https://github.com/kubernetes/minikube/releases

Go to the Windows download link as shown below:

This will start downloading the v0.22.3 release of the executable. The file name is minikube-windows-amd64.exe. Just rename this to minikube.exe and place it in the C:\ drive, alongside the kubectl.exe file from the previous section.

You are all set now to launch a local Kubernetes one-node cluster!

All the steps moving forward are being done in Powershell. Launch Powershell in Administrative mode (Ctrl-Shift-Enter) and navigate to C:\ drive where the kubectl.exe and minikube.exe files are present.

A few things to note

Let’s do our standard testing to validate our utilities.
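For example, a quick sanity check (a sketch; both commands only print version information):

PS C:\> .\minikube.exe version
PS C:\> .\kubectl.exe version --client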

If you go to your %HOMEPATH%\.minikube folder now, you will notice that several folders got created. Take a look!

There are multiple commands that Minikube supports. You can use the standard --help option to see the list of commands that it has:

PS C:\> .\minikube --help
Minikube is a CLI tool that provisions and manages single-node Kubernetes clusters optimized for development workflows
Usage:
  minikube [command]
Available Commands:
  dashboard        Opens/displays the kubernetes dashboard URL for your local cluster
  delete           Deletes a local kubernetes cluster.
  docker-env       sets up docker env variables; similar to '$(docker-machine env)'
  get-k8s-versions Gets the list of available kubernetes versions available for minikube.
  ip               Retrieve the IP address of the running cluster.
  logs             Gets the logs of the running localkube instance, used for debugging minikube, not user code.
  config           Modify minikube config
  service          Gets the kubernetes URL for the specified service in your local cluster
  ssh              Log into or run a command on a machine with SSH; similar to 'docker-machine ssh'
  start            Starts a local kubernetes cluster.
  status           Gets the status of a local kubernetes cluster.
  stop             Stops a running local kubernetes cluster.
  version          Print the version of minikube.
Flags:
      --alsologtostderr[=false]: log to standard error as well as files
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --log_backtrace_at=:0: when logging hits line file:N, emit a stack trace
      --log_dir="": If non-empty, write log files in this directory
      --logtostderr[=false]: log to standard error instead of files
      --show-libmachine-logs[=false]: Whether or not to show logs from libmachine.
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --v=0: log level for V logs
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging
Use "minikube [command] --help" for more information about a command.

I have highlighted a couple of global flags that you can use with all minikube commands. These flags are useful for seeing what is going on under the hood at times and for getting the log output on the console.

Minikube supports multiple versions of Kubernetes and the latest version is v1.7.5. To check out the different versions supported try out the following command:

PS C:\> .\minikube get-k8s-versions
The following Kubernetes versions are available:
 - v1.7.5
 - v1.7.4
 - v1.7.3
 - v1.7.2
 - v1.7.0
 - v1.7.0-rc.1
 - v1.7.0-alpha.2
 - v1.6.4
 - v1.6.3
 - v1.6.0
 - v1.6.0-rc.1
 - v1.6.0-beta.4
 - v1.6.0-beta.3
 - v1.6.0-beta.2
 - v1.6.0-alpha.1
 - v1.6.0-alpha.0
 - v1.5.3
 - v1.5.2
 - v1.5.1
 - v1.4.5
 - v1.4.3
 - v1.4.2
 - v1.4.1
 - v1.4.0
 - v1.3.7
 - v1.3.6
 - v1.3.5
 - v1.3.4
 - v1.3.3
 - v1.3.0

Starting our Cluster

We are now ready to launch our Kubernetes cluster locally. We will use the start command for it.

Note: You might run into multiple issues while starting a cluster the first time. I hit several of them and have created a section at the end of this blog post on troubleshooting. Take a look at it in case you run into any issues.

You can check out the help and description of the command/flags/options via the help option as shown below:

PS C:\> .\minikube.exe start --help

You will notice several Flags that you can provide to the start command and while there are some useful defaults, we are going to be a bit specific, so that we can better understand things.

We want to use Kubernetes v1.7.5 and while the VirtualBox driver is default on windows, we are going to be explicit about it. At the same time, we are going to use a couple of the Global Flags that we highlighted earlier, so that we can see what is going on under the hood.

All we need to do is give the following command (I have separated the flags onto separate lines for better readability). The output is also attached.

PS C:\> .\minikube.exe start --kubernetes-version="v1.7.5" 
                             --vm-driver="virtualbox" 
                             --alsologtostderr
W1004 13:01:30.429310    9296 root.go:127] Error reading config file at C:\Users\irani_r\.minikube\config\config.json: o
pen C:\Users\irani_r\.minikube\config\config.json: The system cannot find the file specified.
I1004 13:01:30.460582    9296 notify.go:103] Checking for updates...
Starting local Kubernetes cluster...
Creating CA: C:\Users\irani_r\.minikube\certs\ca.pem
Creating client certificate: C:\Users\irani_r\.minikube\certs\cert.pem
Running pre-create checks...
Creating machine...
(minikube) Downloading C:\Users\irani_r\.minikube\cache\boot2docker.iso from file://C:/Users/irani_r/.minikube/cache/iso
/minikube-0.7.iso...
(minikube) Creating VirtualBox VM...
(minikube) Creating SSH key...
(minikube) Starting the VM...
(minikube) Check network to re-create if needed...
(minikube) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
I1004 13:03:06.480550    9296 cluster.go:389] Setting up certificates for IP: %s 192.168.99.100
I1004 13:03:06.567686    9296 cluster.go:202] sudo killall localkube || true
I1004 13:03:06.611680    9296 cluster.go:204] killall: localkube: no process killed
I1004 13:03:06.611680    9296 cluster.go:202]
# Run with nohup so it stays up. Redirect logs to useful places.
sudo sh -c 'PATH=/usr/local/sbin:$PATH nohup /usr/local/bin/localkube   --generate-certs=false --logtostderr=true --node
-ip=192.168.99.100 > /var/lib/localkube/localkube.err 2> /var/lib/localkube/localkube.out < /dev/null & echo $! > /var/r
un/localkube.pid &'
I1004 13:03:06.658605    9296 cluster.go:204]
Kubectl is now configured to use the cluster.
PS C:\>

Let us understand what it is doing behind the scenes in brief. I have also highlighted some of the key lines in the output above:

  1. It generates the certificates and then proceeds to provision a local Docker host. This will result in a VM created inside of VirtualBox.
  2. That host is provisioned with the boot2Docker ISO image.
  3. It does its magic of setting it up, assigning it an IP and all the works.
  4. Finally, it prints out a message that kubectl is configured to talk to your local Kubernetes cluster.

You can now check on the status of the local cluster via the status command:

PS C:\> .\minikube.exe status
minikubeVM: Running
localkube: Running

You can also use the kubectl CLI to get the cluster information:

PS C:\> .\kubectl.exe cluster-info
Kubernetes master is running at https://192.168.99.100:8443
kubernetes-dashboard is running at https://192.168.99.100:8443/api/v1/proxy/namespaces/kube-system/services/kubernetes-d
ashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Kubernetes Client and Server version

Let us do a quick check of the Kubernetes version at the client and server level. Execute the following command:

PS C:\> .\kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.7.5", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a
46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"windows/amd6
4"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.7.5", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a
46b", GitTreeState:"dirty", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"
}

You will notice the Kubernetes version that both the client and the server are reporting.

Cluster IP Address

You can get the IP address of the cluster via the ip command:

PS C:\> .\minikube.exe ip
192.168.99.100

Kubernetes Dashboard

You can launch the Kubernetes Dashboard at any point via the dashboard command as shown below:

PS C:\> .\minikube.exe dashboard

This will automatically launch the Dashboard in your local browser. However if you just want to nab the Dashboard URL, you can use the following flag:

PS C:\> .\minikube.exe dashboard --url=true
http://192.168.99.100:30000

There is a great post on how the Kubernetes Dashboard underwent a design change in version 1.4. It explains how the information is split into respective sections, i.e. Workloads, Services and Discovery, Storage and Configuration, which are present in the left-side menu and via which you can introspect more details of your cluster. All of this is paired with a nifty filter for the Namespace value above.

If you look at the Kubernetes dashboard right now, you will see that it indicates that nothing has been deployed. Let us step back and think about what we have so far. We have launched a single-node cluster... right? Click on the Node link and you will see that information:

The above node information can also be obtained by using the kubectl CLI to get the list of nodes.

PS C:\> .\kubectl.exe get nodes
NAME       STATUS    AGE
minikube   Ready     51m

Hopefully, you are now able to relate how some of the CLI calls are reflected in the Dashboard too. Let’s move forward. But before that, one important tip!

Tip: use-context minikube

If you noticed closely when we started the cluster, there is a statement in the output that says "Kubectl is now configured to use the cluster." What this does is set the current context for the kubectl utility so that it knows which cluster it is talking to. Behind the scenes, in your %HOMEPATH%\.kube directory, there is a config file that contains information about your Kubernetes clusters and the details for connecting to each of them.

In short, we have to be sure that kubectl is pointing to the right cluster. In our case, the cluster name is minikube.
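A quick way to check which cluster kubectl is currently pointing at (a sketch; it should print minikube once the context is set correctly):

PS C:\> .\kubectl.exe config current-context
minikube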

In case you see an error like the one below (I got it a few times), then you probably need to set the context again.

PS C:\> kubectl get nodes
error: You must be logged in to the server (the server has asked for the client to provide credentials)

The command for that is:

PS C:\> kubectl config use-context minikube
switched to context "minikube".

Running a Workload

Let us proceed now to running a simple Nginx container to see the whole thing in action:

We are going to use the run command as shown below:

PS C:\> .\kubectl.exe run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created

This creates a deployment, and we can investigate the Pod that gets created, which will run the container:

PS C:\> .\kubectl.exe get pods
NAME                   READY     STATUS              RESTARTS   AGE
hello-nginx-24710...   0/1       ContainerCreating   0          2m

You can see that the STATUS column value is ContainerCreating.

Now, let us go back to the Dashboard (I am assuming that you either have it running or can launch it again via the minikube dashboard command):

You can notice that if we go to the Deployments option, the Deployment is listed and the status is still in progress. You can also notice that the Pods value is 0/1.

If we wait for a while, the Pod will eventually get created and it will be ready, as the command below shows:

PS C:\> .\kubectl.exe get pods
NAME                   READY     STATUS    RESTARTS   AGE
hello-nginx-24710...   1/1       Running   0          3m

If we see the Dashboard again, the Deployment is ready now:

If we visit the Replica Sets now, we can see it:

Click on the Replica Set name and it will show the Pod details as given below:

Alternately, you can also get to the Pods via the Pods link in the Workloads as shown below:

Click on the Pod and you can get various details on it as given below:

You can see that it has been given some default labels. You can see its IP address. It is part of the node named minikube. And most importantly, there is a link for View Logs too.

The 1.4 dashboard greatly simplifies using Kubernetes and explaining it to everyone. It helps to first see what is going on in the Dashboard; the various commands in kubectl will then start making more sense.

We could have got the Node and Pod details via various kubectl describe node/pod commands, and we can still do that. An example is shown below:

PS C:\> .\kubectl.exe describe pod hello-nginx-2471083592-4vfz8
Name:           hello-nginx-2471083592-4vfz8
Namespace:      default
Node:           minikube/192.168.99.100
Start Time:     Tue, 04 Oct 2016 14:05:15 +0530
Labels:         pod-template-hash=2471083592
                run=hello-nginx
Status:         Running
IP:             172.17.0.3
Controllers:    ReplicaSet/hello-nginx-2471083592
Containers:
  hello-nginx:
    Container ID:       docker://98a9e303f0dbf21db80a20aea744725c9bd64f6b2ce2764379151e3ae422fc18
    Image:              nginx
    Image ID:           docker://sha256:ba6bed934df2e644fdd34e9d324c80f3c615544ee9a93e4ce3cfddfcf84bdbc2
    Port:               80/TCP
    State:              Running
      Started:          Tue, 04 Oct 2016 14:06:02 +0530
    Ready:              True
    Restart Count:      0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rie7t (ro)
    Environment Variables:      <none>
..... /// REST OF THE OUTPUT ////

Expose a Service

It is time now to expose our basic Nginx deployment as a service. We can use the command shown below:

PS C:\> .\kubectl.exe expose deployment hello-nginx --type=NodePort
service "hello-nginx" exposed

If we visit the Dashboard at this point and go to the Services section, we can see our hello-nginx service entry.

Alternately, we can use kubectl too, to check it out:

PS C:\> .\kubectl.exe get services
NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
hello-nginx   10.0.0.24    <nodes>       80/TCP    3m
kubernetes    10.0.0.1     <none>        443/TCP   1h
PS C:\> .\kubectl.exe describe service hello-nginx
Name:                   hello-nginx
Namespace:              default
Labels:                 run=hello-nginx
Selector:               run=hello-nginx
Type:                   NodePort
IP:                     10.0.0.24
Port:                   <unset> 80/TCP
NodePort:               <unset> 31155/TCP
Endpoints:              172.17.0.3:80
Session Affinity:       None
No events.

We can now use the minikube service command to get the URL for the service, as shown below:

PS C:\> .\minikube.exe service --url=true hello-nginx
http://192.168.99.100:31155

Alternately, if we do not use the url flag, then it can directly launch the browser and hit the service endpoint:

PS C:\> .\minikube.exe service hello-nginx
Opening kubernetes service default/hello-nginx in default browser...

View Logs

Assuming that you have accessed the service once in the browser as shown above, let us look at an interesting thing now. Go to the Service link in the Dashboard.

Click on the hello-nginx service. This will also show the list of Pods (single) as shown below. Click on the icon for Logs as highlighted below:

This will show the logs for that particular Pod, along with the HTTP request calls that were just made.

You could do the same by using the logs <podname> command for the kubectl CLI:

PS C:\> .\kubectl logs hello-nginx-2471083592-4vfz8
172.17.0.1 - - [04/Oct/2016:09:00:33 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Appl
eWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
2016/10/04 09:00:33 [error] 5#5: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), cl
ient: 172.17.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.99.100:31155", referrer: "http
://192.168.99.100:31155/"
172.17.0.1 - - [04/Oct/2016:09:00:33 +0000] "GET /favicon.ico HTTP/1.1" 404 571 "http://192.168.99.100:31155/" "Mozilla/
5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36" "-"
PS C:\>

Scaling the Service

OK, I am not yet done!

When we created the deployment, we did not specify the number of instances for our service, so we just had one Pod, provisioned on the single node.

Let us go and see how we can scale this via the scale command. We want to scale it to 3 Pods.

PS C:\> .\kubectl scale --replicas=3 deployment/hello-nginx
deployment "hello-nginx" scaled

We can see the status of the deployment in a while:

PS C:\> .\kubectl.exe get deployment
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-nginx   3         3         3            3           1h

Now, if we visit the Dashboard for our Deployment:

We have the 3/3 Pods available. Similarly, we can see our Service or Pods.

or the Pod list:

Stopping and Deleting the Cluster

This is straightforward. You can use the stop and delete commands of the minikube utility, for example:
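A quick sketch (note that delete removes the VM entirely, so the next start begins from scratch):

PS C:\> .\minikube.exe stop
PS C:\> .\minikube.exe delete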

Limitations

Minikube is a work in progress at this moment and it does not support all the features of Kubernetes. Please refer to the minikube documentation, where it clearly states what is currently supported.

Troubleshooting Issues on Windows 10

My experience to get the experimental build of minikube working on Windows was not exactly a smooth one, but that is to be expected from anything that calls itself experimental.

I faced several issues and hope that documenting them will save you some time. I did not have the time to investigate deeply into why some of this worked for me, since my focus was to get it up and running on Windows. So if you have specific comments around that, that would be great and I can add them to this blog post.

In no particular order, here you go:

Use Powershell

I used PowerShell and not the regular command prompt. Ensure that PowerShell is launched in Administrative mode. This means Ctrl + Shift + Enter.

Put minikube.exe file in C:\ drive

I saw some reported issues that suggested doing this. I did not experiment too much and went with the C:\ drive.

Clear up .minikube directory

If there were issues starting up minikube the first time and you then try to start it again, you might see errors like "Starting Machine" or "Machine exists" followed by a bunch of errors before it gives up. I suggest that you clear out the .minikube directory, which is present at %HOMEPATH%\.minikube. In my case, it is C:\Users\irani_r\.minikube. You will see a bunch of folders there. Just delete them all and start over.

To see detailed error logging, give the following flags while starting up the cluster:

--show-libmachine-logs --alsologtostderr

Example of the error trace for me was as follows:

PS C:\> .\minikube start --show-libmachine-logs --alsologtostderr
W1003 15:59:52.796394   12080 root.go:127] Error reading config file at C:\Users\irani_r\.minikube\config\config.json: o
pen C:\Users\irani_r\.minikube\config\config.json: The system cannot find the file specified.
I1003 15:59:52.800397   12080 notify.go:103] Checking for updates...
Starting local Kubernetes cluster...
I1003 15:59:53.164759   12080 cluster.go:75] Machine exists!
I1003 15:59:54.133728   12080 cluster.go:82] Machine state:  Error
E1003 15:59:54.133728   12080 start.go:85] Error starting host: Error getting state for host: machine does not exist. Re
trying.
I1003 15:59:54.243132   12080 cluster.go:75] Machine exists!
I1003 15:59:54.555738   12080 cluster.go:82] Machine state:  Error
E1003 15:59:54.555738   12080 start.go:85] Error starting host: Error getting state for host: machine does not exist. Re
trying.
I1003 15:59:54.555738   12080 cluster.go:75] Machine exists!
I1003 15:59:54.790128   12080 cluster.go:82] Machine state:  Error
E1003 15:59:54.790128   12080 start.go:85] Error starting host: Error getting state for host: machine does not exist. Re
trying.
E1003 15:59:54.790128   12080 start.go:91] Error starting host:  Error getting state for host: machine does not exist
Error getting state for host: machine does not exist
Error getting state for host: machine does not exist

Disable Hyper-V

As mentioned earlier, VirtualBox and Hyper-V are not the happiest of co-workers. Definitely disable one of them on your machine. As per the minikube documentation, both the virtualbox and hyperv drivers are supported on Windows. I will test Hyper-V someday, but I went with disabling Hyper-V and used VirtualBox only. The steps to disable Hyper-V correctly were shown earlier in this blog post.

 

Chef: Nodes and Search


    1. Now log in to your chef-node/chef-client and type ohai. Ohai is installed automatically when we install Chef.
      You will get an output similar to below:
      ……
      “OPEN_MAX”: 1024,
      “PAGESIZE”: 4096,
      “PAGE_SIZE”: 4096,
      “PASS_MAX”: 8192,
      “PTHREAD_DESTRUCTOR_ITERATIONS”: 4,
      “PTHREAD_KEYS_MAX”: 1024,
      “PTHREAD_STACK_MIN”: 16384,
      “PTHREAD_THREADS_MAX”: null,
      “SCHAR_MAX”: 127,
      “SCHAR_MIN”: -128,
      “SHRT_MAX”: 32767,
      “SHRT_MIN”: -32768,
      “SSIZE_MAX”: 32767,
      “TTY_NAME_MAX”: 32,
      “TZNAME_MAX”: 6,
      “UCHAR_MAX”: 255,
      “UINT_MAX”: 4294967295,
      “UIO_MAXIOV”: 1024,
      “ULONG_MAX”: 18446744073709551615,
      “USHRT_MAX”: 65535,
      “WORD_BIT”: 32,
      “_AVPHYS_PAGES”: 768366,
      “_NPROCESSORS_CONF”: 2,
      “_NPROCESSORS_ONLN”: 2,
      “_PHYS_PAGES”: 970577,
      “_POSIX_ARG_MAX”: 2097152,
      “_POSIX_ASYNCHRONOUS_IO”: 200809,
      “_POSIX_CHILD_MAX”: 15019,
      “_POSIX_FSYNC”: 200809,
      “_POSIX_JOB_CONTROL”: 1,
      “_POSIX_MAPPED_FILES”: 200809,
      “_POSIX_MEMLOCK”: 200809,
      “_POSIX_MEMLOCK_RANGE”: 200809,
      “_POSIX_MEMORY_PROTECTION”: 200809,
      “_POSIX_MESSAGE_PASSING”: 200809,
      “_POSIX_NGROUPS_MAX”: 65536,
      “_POSIX_OPEN_MAX”: 1024,
      “_POSIX_PII”: null,
      “_POSIX_PII_INTERNET”: null,
      “_POSIX_PII_INTERNET_DGRAM”: null,
      “_POSIX_PII_INTERNET_STREAM”: null,
      “_POSIX_PII_OSI”: null,
      “_POSIX_PII_OSI_CLTS”: null,
      “_POSIX_PII_OSI_COTS”: null,
      “_POSIX_PII_OSI_M”: null,
      “_POSIX_PII_SOCKET”: null,
      “_POSIX_PII_XTI”: null,
      “_POSIX_POLL”: null,
      “_POSIX_PRIORITIZED_IO”: 200809,
      “_POSIX_PRIORITY_SCHEDULING”: 200809,
      “_POSIX_REALTIME_SIGNALS”: 200809,
      “_POSIX_SAVED_IDS”: 1,
      “_POSIX_SELECT”: null,
      “_POSIX_SEMAPHORES”: 200809,
      “_POSIX_SHARED_MEMORY_OBJECTS”: 200809,
      “_POSIX_SSIZE_MAX”: 32767,
      “_POSIX_STREAM_MAX”: 16,
      “_POSIX_SYNCHRONIZED_IO”: 200809,
      “_POSIX_THREADS”: 200809,
      “_POSIX_THREAD_ATTR_STACKADDR”: 200809,
      “_POSIX_THREAD_ATTR_STACKSIZE”: 200809,
      “_POSIX_THREAD_PRIORITY_SCHEDULING”: 200809,
      “_POSIX_THREAD_PRIO_INHERIT”: 200809,
      “_POSIX_THREAD_PRIO_PROTECT”: 200809,
      “_POSIX_THREAD_ROBUST_PRIO_INHERIT”: null,
      “_POSIX_THREAD_ROBUST_PRIO_PROTECT”: null,
      “_POSIX_THREAD_PROCESS_SHARED”: 200809,
      “_POSIX_THREAD_SAFE_FUNCTIONS”: 200809,
      “_POSIX_TIMERS”: 200809,
      “TIMER_MAX”: null,
      “_POSIX_TZNAME_MAX”: 6,
      “_POSIX_VERSION”: 200809,
      “_T_IOV_MAX”: null,
      “_XOPEN_CRYPT”: 1,
      “_XOPEN_ENH_I18N”: 1,
      “_XOPEN_LEGACY”: 1,
      “_XOPEN_REALTIME”: 1,
      “_XOPEN_REALTIME_THREADS”: 1,
      “_XOPEN_SHM”: 1,
      “_XOPEN_UNIX”: 1,
      “_XOPEN_VERSION”: 700,
      “_XOPEN_XCU_VERSION”: 4,
      “_XOPEN_XPG2”: 1,
      “_XOPEN_XPG3”: 1,
      “_XOPEN_XPG4”: 1,
      “BC_BASE_MAX”: 99,
      “BC_DIM_MAX”: 2048,
      “BC_SCALE_MAX”: 99,
      “BC_STRING_MAX”: 1000,
      “CHARCLASS_NAME_MAX”: 2048,
      “COLL_WEIGHTS_MAX”: 255,
      “EQUIV_CLASS_MAX”: null,
      “EXPR_NEST_MAX”: 32,
      “LINE_MAX”: 2048,
      “POSIX2_BC_BASE_MAX”: 99,
      “POSIX2_BC_DIM_MAX”: 2048,
      “POSIX2_BC_SCALE_MAX”: 99,
      “POSIX2_BC_STRING_MAX”: 1000,
      “POSIX2_CHAR_TERM”: 200809,
      “POSIX2_COLL_WEIGHTS_MAX”: 255,
      “POSIX2_C_BIND”: 200809,
      “POSIX2_C_DEV”: 200809,
      “POSIX2_C_VERSION”: null,
      “POSIX2_EXPR_NEST_MAX”: 32,
      “POSIX2_FORT_DEV”: null,
      “POSIX2_FORT_RUN”: null,
      “_POSIX2_LINE_MAX”: 2048,
      “POSIX2_LINE_MAX”: 2048,
      “POSIX2_LOCALEDEF”: 200809,
      “POSIX2_RE_DUP_MAX”: 32767,
      “POSIX2_SW_DEV”: 200809,
      “POSIX2_UPE”: null,
      “POSIX2_VERSION”: 200809,
      “RE_DUP_MAX”: 32767,
      “PATH”: “/usr/bin”,
      “CS_PATH”: “/usr/bin”,
      “LFS_CFLAGS”: null,
      “LFS_LDFLAGS”: null,
      “LFS_LIBS”: null,
      “LFS_LINTFLAGS”: null,
      “LFS64_CFLAGS”: “-D_LARGEFILE64_SOURCE”,
      “LFS64_LDFLAGS”: null,
      “LFS64_LIBS”: null,
      “LFS64_LINTFLAGS”: “-D_LARGEFILE64_SOURCE”,
      “_XBS5_WIDTH_RESTRICTED_ENVS”: “XBS5_LP64_OFF64”,
      “XBS5_WIDTH_RESTRICTED_ENVS”: “XBS5_LP64_OFF64”,
      “_XBS5_ILP32_OFF32”: null,
      “XBS5_ILP32_OFF32_CFLAGS”: null,
      “XBS5_ILP32_OFF32_LDFLAGS”: null,
      “XBS5_ILP32_OFF32_LIBS”: null,
      “XBS5_ILP32_OFF32_LINTFLAGS”: null,
      “_XBS5_ILP32_OFFBIG”: null,
      “XBS5_ILP32_OFFBIG_CFLAGS”: null,
      “XBS5_ILP32_OFFBIG_LDFLAGS”: null,
      “XBS5_ILP32_OFFBIG_LIBS”: null,
      “XBS5_ILP32_OFFBIG_LINTFLAGS”: null,
      “_XBS5_LP64_OFF64”: 1,
      “XBS5_LP64_OFF64_CFLAGS”: “-m64”,
      “XBS5_LP64_OFF64_LDFLAGS”: “-m64”,
      “XBS5_LP64_OFF64_LIBS”: null,
      “XBS5_LP64_OFF64_LINTFLAGS”: null,
      “_XBS5_LPBIG_OFFBIG”: null,
      “XBS5_LPBIG_OFFBIG_CFLAGS”: null,
      “XBS5_LPBIG_OFFBIG_LDFLAGS”: null,
      “XBS5_LPBIG_OFFBIG_LIBS”: null,
      “XBS5_LPBIG_OFFBIG_LINTFLAGS”: null,
      “_POSIX_V6_ILP32_OFF32”: null,
      “POSIX_V6_ILP32_OFF32_CFLAGS”: null,
      “POSIX_V6_ILP32_OFF32_LDFLAGS”: null,
      “POSIX_V6_ILP32_OFF32_LIBS”: null,
      “POSIX_V6_ILP32_OFF32_LINTFLAGS”: null,
      “_POSIX_V6_WIDTH_RESTRICTED_ENVS”: “POSIX_V6_LP64_OFF64”,
      “POSIX_V6_WIDTH_RESTRICTED_ENVS”: “POSIX_V6_LP64_OFF64”,
      “_POSIX_V6_ILP32_OFFBIG”: null,
      “POSIX_V6_ILP32_OFFBIG_CFLAGS”: null,
      “POSIX_V6_ILP32_OFFBIG_LDFLAGS”: null,
      “POSIX_V6_ILP32_OFFBIG_LIBS”: null,
      “POSIX_V6_ILP32_OFFBIG_LINTFLAGS”: null,
      “_POSIX_V6_LP64_OFF64”: 1,
      “POSIX_V6_LP64_OFF64_CFLAGS”: “-m64”,
      “POSIX_V6_LP64_OFF64_LDFLAGS”: “-m64”,
      “POSIX_V6_LP64_OFF64_LIBS”: null,
      “POSIX_V6_LP64_OFF64_LINTFLAGS”: null,
      “_POSIX_V6_LPBIG_OFFBIG”: null,
      “POSIX_V6_LPBIG_OFFBIG_CFLAGS”: null,
      “POSIX_V6_LPBIG_OFFBIG_LDFLAGS”: null,
      “POSIX_V6_LPBIG_OFFBIG_LIBS”: null,
      “POSIX_V6_LPBIG_OFFBIG_LINTFLAGS”: null,
      “_POSIX_V7_ILP32_OFF32”: null,
      “POSIX_V7_ILP32_OFF32_CFLAGS”: null,
      “POSIX_V7_ILP32_OFF32_LDFLAGS”: null,
      “POSIX_V7_ILP32_OFF32_LIBS”: null,
      “POSIX_V7_ILP32_OFF32_LINTFLAGS”: null,
      “_POSIX_V7_WIDTH_RESTRICTED_ENVS”: “POSIX_V7_LP64_OFF64”,
      “POSIX_V7_WIDTH_RESTRICTED_ENVS”: “POSIX_V7_LP64_OFF64”,
      “_POSIX_V7_ILP32_OFFBIG”: null,
      “POSIX_V7_ILP32_OFFBIG_CFLAGS”: null,
      “POSIX_V7_ILP32_OFFBIG_LDFLAGS”: null,
      “POSIX_V7_ILP32_OFFBIG_LIBS”: null,
      “POSIX_V7_ILP32_OFFBIG_LINTFLAGS”: null,
      “_POSIX_V7_LP64_OFF64”: 1,
      “POSIX_V7_LP64_OFF64_CFLAGS”: “-m64”,
      “POSIX_V7_LP64_OFF64_LDFLAGS”: “-m64”,
      “POSIX_V7_LP64_OFF64_LIBS”: null,
      “POSIX_V7_LP64_OFF64_LINTFLAGS”: null,
      “_POSIX_V7_LPBIG_OFFBIG”: null,
      “POSIX_V7_LPBIG_OFFBIG_CFLAGS”: null,
      “POSIX_V7_LPBIG_OFFBIG_LDFLAGS”: null,
      “POSIX_V7_LPBIG_OFFBIG_LIBS”: null,
      “POSIX_V7_LPBIG_OFFBIG_LINTFLAGS”: null,
      “_POSIX_ADVISORY_INFO”: 200809,
      “_POSIX_BARRIERS”: 200809,
      “_POSIX_BASE”: null,
      “_POSIX_C_LANG_SUPPORT”: null,
      “_POSIX_C_LANG_SUPPORT_R”: null,
      “_POSIX_CLOCK_SELECTION”: 200809,
      “_POSIX_CPUTIME”: 200809,
      “_POSIX_THREAD_CPUTIME”: 200809,
      “_POSIX_DEVICE_SPECIFIC”: null,
      “_POSIX_DEVICE_SPECIFIC_R”: null,
      “_POSIX_FD_MGMT”: null,
      “_POSIX_FIFO”: null,
      “_POSIX_PIPE”: null,
      “_POSIX_FILE_ATTRIBUTES”: null,
      “_POSIX_FILE_LOCKING”: null,
      “_POSIX_FILE_SYSTEM”: null,
      “_POSIX_MONOTONIC_CLOCK”: 200809,
      “_POSIX_MULTI_PROCESS”: null,
      “_POSIX_SINGLE_PROCESS”: null,
      “_POSIX_NETWORKING”: null,
      “_POSIX_READER_WRITER_LOCKS”: 200809,
      “_POSIX_SPIN_LOCKS”: 200809,
      “_POSIX_REGEXP”: 1,
      “_REGEX_VERSION”: null,
      “_POSIX_SHELL”: 1,
      “_POSIX_SIGNALS”: null,
      “_POSIX_SPAWN”: 200809,
      “_POSIX_SPORADIC_SERVER”: null,
      “_POSIX_THREAD_SPORADIC_SERVER”: null,
      “_POSIX_SYSTEM_DATABASE”: null,
      “_POSIX_SYSTEM_DATABASE_R”: null,
      “_POSIX_TIMEOUTS”: 200809,
      “_POSIX_TYPED_MEMORY_OBJECTS”: null,
      “_POSIX_USER_GROUPS”: null,
      “_POSIX_USER_GROUPS_R”: null,
      “POSIX2_PBS”: null,
      “POSIX2_PBS_ACCOUNTING”: null,
      “POSIX2_PBS_LOCATE”: null,
      “POSIX2_PBS_TRACK”: null,
      “POSIX2_PBS_MESSAGE”: null,
      “SYMLOOP_MAX”: null,
      “STREAM_MAX”: 16,
      “AIO_LISTIO_MAX”: null,
      “AIO_MAX”: null,
      “AIO_PRIO_DELTA_MAX”: 20,
      “DELAYTIMER_MAX”: 2147483647,
      “HOST_NAME_MAX”: 64,
      “LOGIN_NAME_MAX”: 256,
      “MQ_OPEN_MAX”: null,
      “MQ_PRIO_MAX”: 32768,
      “_POSIX_DEVICE_IO”: null,
      “_POSIX_TRACE”: null,
      “_POSIX_TRACE_EVENT_FILTER”: null,
      “_POSIX_TRACE_INHERIT”: null,
      “_POSIX_TRACE_LOG”: null,
      “RTSIG_MAX”: 32,
      “SEM_NSEMS_MAX”: null,
      “SEM_VALUE_MAX”: 2147483647,
      “SIGQUEUE_MAX”: 15019,
      “FILESIZEBITS”: 64,
      “POSIX_ALLOC_SIZE_MIN”: 4096,
      “POSIX_REC_INCR_XFER_SIZE”: null,
      “POSIX_REC_MAX_XFER_SIZE”: null,
      “POSIX_REC_MIN_XFER_SIZE”: 4096,
      “POSIX_REC_XFER_ALIGN”: 4096,
      “SYMLINK_MAX”: null,
      “GNU_LIBC_VERSION”: “glibc 2.17”,
      “GNU_LIBPTHREAD_VERSION”: “NPTL 2.17”,
      “POSIX2_SYMLINKS”: 1,
      “LEVEL1_ICACHE_SIZE”: 32768,
      “LEVEL1_ICACHE_ASSOC”: 8,
      “LEVEL1_ICACHE_LINESIZE”: 64,
      “LEVEL1_DCACHE_SIZE”: 32768,
      “LEVEL1_DCACHE_ASSOC”: 8,
      “LEVEL1_DCACHE_LINESIZE”: 64,
      “LEVEL2_CACHE_SIZE”: 2097152,
      “LEVEL2_CACHE_ASSOC”: 8,
      “LEVEL2_CACHE_LINESIZE”: 64,
      “LEVEL3_CACHE_SIZE”: 0,
      “LEVEL3_CACHE_ASSOC”: 0,
      “LEVEL3_CACHE_LINESIZE”: 0,
      “LEVEL4_CACHE_SIZE”: 0,
      “LEVEL4_CACHE_ASSOC”: 0,
      “LEVEL4_CACHE_LINESIZE”: 0,
      “IPV6”: 200809,
      “RAW_SOCKETS”: 200809
      },
      “time”: {
      “timezone”: “UTC”
      }
      }
      It gives information about our node.
    2. Suppose I want to retrieve the ipaddress of the node; then we can execute the command:
      ohai ipaddress
      Output will be as follows:
      [
      "192.168.1.240"
      ]
      We can use these attributes in our code.
      ohai hostname
      [
      "chef-node"
      ]
      ohai | grep ipaddress
      "ipaddress": "192.168.1.240"
      ohai cpu
      {
      “0”: {
      “vendor_id”: “GenuineIntel”,
      “family”: “6”,
      “model”: “61”,
      “model_name”: “Intel Core Processor (Broadwell)”,
      “stepping”: “2”,
      “mhz”: “2095.146”,
      “cache_size”: “4096 KB”,
      “physical_id”: “0”,
      “core_id”: “0”,
      “cores”: “1”,
      “flags”: [
      “fpu”,
      “vme”,
      “de”,
      “pse”,
      “tsc”,
      “msr”,
      “pae”,
      “mce”,
      “cx8”,
      “apic”,
      “sep”,
      “mtrr”,
      “pge”,
      “mca”,
      “cmov”,
      “pat”,
      “pse36”,
      “clflush”,
      “mmx”,
      “fxsr”,
      “sse”,
      “sse2”,
      “ss”,
      “syscall”,
      “nx”,
      “pdpe1gb”,
      “rdtscp”,
      “lm”,
      “constant_tsc”,
      “rep_good”,
      “nopl”,
      “eagerfpu”,
      “pni”,
      “pclmulqdq”,
      “vmx”,
      “ssse3”,
      “fma”,
      “cx16”,
      “pcid”,
      “sse4_1”,
      “sse4_2”,
      “x2apic”,
      “movbe”,
      “popcnt”,
      “tsc_deadline_timer”,
      “aes”,
      “xsave”,
      “avx”,
      “f16c”,
      “rdrand”,
      “hypervisor”,
      “lahf_lm”,
      “abm”,
      “3dnowprefetch”,
      “arat”,
      “tpr_shadow”,
      “vnmi”,
      “flexpriority”,
      “ept”,
      “vpid”,
      “fsgsbase”,
      “bmi1”,
      “hle”,
      “avx2”,
      “smep”,
      “bmi2”,
      “erms”,
      “invpcid”,
      “rtm”,
      “rdseed”,
      “adx”,
      “smap”,
      “xsaveopt”
      ]
      },
      “1”: {
      “vendor_id”: “GenuineIntel”,
      “family”: “6”,
      “model”: “61”,
      “model_name”: “Intel Core Processor (Broadwell)”,
      “stepping”: “2”,
      “mhz”: “2095.146”,
      “cache_size”: “4096 KB”,
      “physical_id”: “1”,
      “core_id”: “0”,
      “cores”: “1”,
      “flags”: [
      “fpu”,
      “vme”,
      “de”,
      “pse”,
      “tsc”,
      “msr”,
      “pae”,
      “mce”,
      “cx8”,
      “apic”,
      “sep”,
      “mtrr”,
      “pge”,
      “mca”,
      “cmov”,
      “pat”,
      “pse36”,
      “clflush”,
      “mmx”,
      “fxsr”,
      “sse”,
      “sse2”,
      “ss”,
      “syscall”,
      “nx”,
      “pdpe1gb”,
      “rdtscp”,
      “lm”,
      “constant_tsc”,
      “rep_good”,
      “nopl”,
      “eagerfpu”,
      “pni”,
      “pclmulqdq”,
      “vmx”,
      “ssse3”,
      “fma”,
      “cx16”,
      “pcid”,
      “sse4_1”,
      “sse4_2”,
      “x2apic”,
      “movbe”,
      “popcnt”,
      “tsc_deadline_timer”,
      “aes”,
      “xsave”,
      “avx”,
      “f16c”,
      “rdrand”,
      “hypervisor”,
      “lahf_lm”,
      “abm”,
      “3dnowprefetch”,
      “arat”,
      “tpr_shadow”,
      “vnmi”,
      “flexpriority”,
      “ept”,
      “vpid”,
      “fsgsbase”,
      “bmi1”,
      “hle”,
      “avx2”,
      “smep”,
      “bmi2”,
      “erms”,
      “invpcid”,
      “rtm”,
      “rdseed”,
      “adx”,
      “smap”,
      “xsaveopt”
      ]
      },
      “total”: 2,
      “real”: 2,
      “cores”: 2
      }
      ohai platform
      [
      "centos"
      ]
      ohai platform_family

      [
      "rhel"
      ]

    3. Let's edit the apache cookbook we created in the previous post.
      Edit default.rb:

      # Pick the platform-specific package/service name (httpd on the RHEL family, apache2 on the Debian family).
      # Using a variable name other than 'package' avoids shadowing the package resource.
      if node['platform_family'] == "rhel"
              pkg = "httpd"
      elsif node['platform_family'] == "debian"
              pkg = "apache2"
      end

      # Install the web server package under the resource name 'apache2'
      package 'apache2' do
              package_name pkg
              action :install
      end

      # Start the service now and enable it at boot
      service 'apache2' do
              service_name pkg
              action [:start, :enable]
      end
      
      
    4. Now create a recipe motd.rb with the following content:
      
      hostname = node['hostname']
      file '/etc/motd' do
              content "Hostname is this #{hostname}"
      end
      
      

      Add the code to the git repo. Then upload the cookbook to the chef-server. Then add the recipe to the run_list with the command:
      knife node run_list add chef-node 'recipe[motd]'

    5. Now if you run chef-client, you will get the following error:
      Error Resolving Cookbooks for Run List:
      ================================================================================

      Missing Cookbooks:
      ——————
      The following cookbooks are required by the client but don’t exist on the server
      * motd

      We called motd, but motd is not a cookbook; it is a recipe inside the apache cookbook.
      Now go ahead and remove the recipe from the run_list:
      knife node run_list remove chef-node 'recipe[motd]'
      Then add the recipe as:
      knife node run_list add chef-node 'recipe[apache::motd]'

    6. Then run chef-client and the motd recipe will be executed. View the contents of /etc/motd and you will see the content updated there, as shown in the sketch below.
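      A quick verification sketch on the node (assuming chef-client is already configured against your Chef server, as in the previous posts):

      sudo chef-client
      cat /etc/motd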

Search


Execute the following command to find nodes having platform_family as rhel

knife search 'platform_family:rhel'
Output:

Environment: _default
FQDN:
IP: 192.168.1.240
Run List: recipe[apache::websites], recipe[apache], recipe[apache::motd]
Roles:
Recipes: apache::websites, apache, apache::default, apache::motd
Platform: centos 7.2.1511
Tags:


Execute the following command to find nodes having recipes:apache
knife search 'recipes:apache'

To find the recipe websites in cookbook apache:
knife search 'recipes:apache\:\:websites'
knife search 'recipes:apache\:\:websites*'

If you want to retrieve a list of hostnames of the nodes which have a platform of centos:
knife search 'platfor?:centos' -a hostname
With -a we are specifying the attribute we want. (The ? is a single-character wildcard, so 'platfor?' matches 'platform'.)

If you want to list all nodes:
knife search '*:*'

If you want to search the nodes with role web:
knife search node 'role:web'
You can also execute the following:
knife search '*:*' -a recipes

 

ElasticSearch Issues

  • java.lang.IllegalArgumentException: unknown setting [node.rack] please check that any required plugins are installed, or check the breaking changes documentation for removed settings

Node-level attributes used for allocation filtering, forced awareness or other node identification / grouping must be prefixed with node.attr. In previous versions it was possible to specify node attributes with the node. prefix. All node attributes except node.master, node.data and node.ingest must be moved to the new node.attr. namespace.
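For example, in elasticsearch.yml (a sketch):

# old style, rejected by newer versions
# node.rack: rack1
# new style
node.attr.rack: rack1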

  • Unknown setting mlockall

Replace bootstrap.mlockall with bootstrap.memory_lock.
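In elasticsearch.yml this looks like the following (a sketch):

bootstrap.memory_lock: true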

  • Unable to lock JVM Memory: error=12, reason=Cannot allocate memory

Edit /etc/security/limits.conf and add the following lines:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Edit /usr/lib/systemd/system/elasticsearch.service and uncomment the line:

LimitMEMLOCK=infinity

Execute the following commands:

systemctl daemon-reload

systemctl start elasticsearch

  • Elasticsearch cluster health “red”: “unassigned_shards”

Execute the following command:

Elasticsearch’s cat API will tell you which shards are unassigned, and why:

curl -XGET 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

Each row lists the name of the index, the shard number, whether it is a primary (p) or replica (r) shard, and the reason it is unassigned:

constant-updates        0 p UNASSIGNED NODE_LEFT node_left[NODE_NAME]

If the unassigned shards belong to an index you thought you deleted already, or an outdated index that you don’t need anymore, then you can delete the index to restore your cluster status to green:

curl -XDELETE 'localhost:9200/index_name/'
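If the unassigned shards belong to an index you still need, newer Elasticsearch versions (5.x and later) can also explain why allocation is failing (a sketch):

curl -XGET localhost:9200/_cluster/allocation/explain?pretty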
  • ElasticSearch nodes not showing hardware metrics

Execute the following command:

curl localhost:9200/_nodes/stats?pretty

It will show you the error root cause. If the error is:

"failures" : [
{
"type" : "failed_node_exception",
"reason" : "Failed node [3kOQUA2IQ-mnD74ER3O6SQ]",
"caused_by" : {
"type" : "illegal_state_exception",
"reason" : "environment is not locked",
"caused_by" : {
"type" : "no_such_file_exception",
"reason" : "/opt/apps/elasticsearch/nodes/0/node.lock"
}
}

Then just restart the elasticsearch service. This is caused when the data directory is deleted while elasticsearch is still running.

  • Elasticsearch service does not start and no logs are captured in elasticsearch.log
    The issue can be found in /var/log/messages; mainly this issue is because Java is not installed or JAVA_HOME is not set.
    The issue might also be because of improper JVM settings.