Change python for pip

You can check which Python version pip is configured with using the command:

pip --version

pip packages follow the naming scheme python$VERSION-pip. Once the version-specific pip is installed, you can change the default pip with update-alternatives:

update-alternatives --install /usr/bin/pip pip /usr/bin/pip2 1

update-alternatives --config pip

Then select the pip version you want.


Kubernetes From Scratch on Ubuntu 16.04

The kubelet is the first and most important component in Kubernetes. Its responsibility is to spawn and kill pods and containers on its node. It communicates directly with the Docker daemon, so we need to install Docker first. On Ubuntu 16.04 the default version of Docker is 1.12.6.


root@node:~$ apt-get update && apt-get install -y docker.io

root@node:~$ docker version


Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

So let’s download Kubernetes binaries and run kubelet.

root@node:~$ wget -q --show-progress https://dl.k8s.io/v1.7.6/kubernetes-server-linux-amd64.tar.gz

kubernetes-server-linux-amd64.tar.gz 100%[==================================================================================================================================>] 417.16M 83.0MB/s in 5.1s

root@node:~$ tar xzf kubernetes-server-linux-amd64.tar.gz

root@node:~$ mv kubernetes/server/bin/* /usr/local/bin/

root@node:~$ rm -rf *

We run kubelet with the --pod-manifest-path option. This is the directory that kubelet will watch for pod manifest YAML files.

root@node:~$ kubelet --pod-manifest-path /tmp/manifests &> /tmp/kubelet.log &

Let’s put a simple nginx pod manifest file into that directory and see what happens.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Now we can check docker ps to see that our container has been added and try to curl it:

root@node:~$ docker ps


c3369c72ebb2 nginx@sha256:aa1c5b5f864508ef5ad472c45c8d3b6ba34e5c0fb34aaea24acf4b0cee33187e "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0

b603d65d8bfd "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0

root@node:~$ docker inspect b603d65d8bfd | jq .[0].NetworkSettings.IPAddress


root@node:~$ curl

<!DOCTYPE html>



<title>Welcome to nginx!</title>

The b603d65d8bfd is the ID of a pause container. This is an infrastructure container that Kubernetes creates first when creating a pod. Using the pause container Kubernetes acquires the IP and sets up the network namespace. All other containers in a pod share the same IP address and network interface. When all your containers die, this is the last container that holds the whole network namespace.

This is what our node looks like now:

Kubernetes uses etcd, a distributed database with a strong consistency model, to store the state of the whole cluster. The API Server is the only component that talks to etcd directly; all other components (including the kubelet) have to communicate through the API Server. Let’s try to run the API Server with the kubelet.

First we need etcd:

root@node:~$ wget -q --show-progress https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz

etcd-v3.2.6-linux-amd64.tar.gz 100%[==================================================================================================================================>] 9.70M 2.39MB/s in 4.1s

root@node:~$ tar xzf etcd-v3.2.6-linux-amd64.tar.gz

root@node:~$ mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/

root@node:~$ etcd --listen-client-urls http://localhost:2379 --advertise-client-urls http://localhost:2379 &> /tmp/etcd.log &

root@node:~$ etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379

cluster is healthy

And the API Server:

root@node:~$ kube-apiserver --etcd-servers=http://localhost:2379 --service-cluster-ip-range=10.0.0.0/24 --bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0 &> /tmp/apiserver.log &

root@node:~$ curl http://localhost:8080/api/v1/nodes


{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "45"
  },
  "items": []
}


Now we can connect kubelet to API Server and check if it was discovered by the cluster.

root@node:~$ pkill -f kubelet

root@node:~$ kubelet --api-servers=localhost:8080 &> /tmp/kubelet.log &

root@node:~$ kubectl get nodes


NAME      STATUS    AGE       VERSION
node      Ready     5m        v1.7.6

root@node:~$ kubectl get pods

No resources found.

We don’t have any pods yet, so let’s create one with kubectl create -f nginx.yaml using the previous manifest file.

root@node:~$ kubectl create -f nginx.yaml

pod "nginx" created

root@node:~$ kubectl get pods


NAME      READY     STATUS    RESTARTS   AGE
nginx     0/1       Pending   0          6m

Notice here that the pod hangs in Pending status. But why? This is because we don’t yet have the Kubernetes component responsible for choosing a node for the pod: the Scheduler. We will talk about it later, but for now we can just create nginx2 with an updated manifest that specifies which node should be used.

root@node:~# git diff nginx.yaml nginx2.yaml

diff --git a/nginx.yaml b/nginx2.yaml

index 7053af0..36885ae 100644

--- a/nginx.yaml

+++ b/nginx2.yaml

@@ -1,10 +1,11 @@

 apiVersion: v1
 kind: Pod
 metadata:
-  name: nginx
+  name: nginx2
   labels:
     app: nginx
 spec:
+  nodeName: node
   containers:
   - name: nginx
     image: nginx

root@node:~$ kubectl create -f nginx2.yaml

root@node:~$ kubectl get pod


NAME      READY     STATUS    RESTARTS   AGE
nginx     0/1       Pending   0          10m
nginx2    1/1       Running   0          8s

Great, so now we can see that the API Server and kubelet work. This is what our node looks like now:

The Scheduler is responsible for assigning pods to nodes. It watches pods and assigns an available node to those without one.

We still have the nginx pod in Pending state from the previous example. Let’s run the scheduler and see what happens.

root@node:~$ kube-scheduler --master=http://localhost:8080 &> /tmp/scheduler.log &

root@node:~$ kubectl get pods


NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          17m
nginx2    1/1       Running   0          17m
As you can see, the scheduler kicks in, finds the pending pod, and assigns it to the node. You can see its placement on our node schema:

The Controller Manager is responsible for managing (among others) Replication Controllers and Replica Sets, so without it we can’t use Kubernetes Deployments.
Here we are going to run it and create a deployment.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

root@node:~$ kube-controller-manager --master=http://localhost:8080 &> /tmp/controller-manager.log &

root@node:~$ kubectl create -f nginx-deploy.yaml

deployment "nginx" created

root@node:~$ kubectl get deploy


NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     3         3         3            2           7s

root@node:~$ kubectl get po


NAME                   READY     STATUS    RESTARTS   AGE
nginx                  1/1       Running   0          32m
nginx-31893996-3dnx7   1/1       Running   0          18s
nginx-31893996-5d1ts   1/1       Running   0          18s
nginx-31893996-9k93w   1/1       Running   0          18s
nginx2                 1/1       Running   0          32m

An updated version of our node schema:

The Kubernetes (network) proxy is responsible for managing Kubernetes Services, and thus for internal load balancing and for exposing pods both to other pods and to external clients.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30073
  selector:
    run: nginx

root@node:~$ kube-proxy --master=http://localhost:8080 &> /tmp/proxy.log &

root@node:~$ kubectl create -f nginx-svc.yaml

service "nginx" created

root@node:~$ kubectl get svc


NAME         EXTERNAL-IP   PORT(S)        AGE
kubernetes   <none>        443/TCP        2h
nginx        <nodes>       80:30073/TCP   7s
The nginx deployment is now exposed externally via port 30073; we can check that with curl.

$ doctl compute droplet list (env: st)

ID Name Public IPv4 Private IPv4 Public IPv6 Memory VCPUs Disk Region Image Status Tags

63370004 node1 2048 2 40 fra1 Ubuntu 16.04.3 x64 active

$ curl

<!DOCTYPE html>



<title>Welcome to nginx!</title>

Using environment variables in Kubernetes deployment spec

I am concerned about pushing information such as passwords or IP addresses to remote Git repositories. Can I avoid this, e.g. by making use of environment variables, with a deployment spec and actual deployment roughly as follows:

   type: LoadBalancer
   loadBalancerIP: ${SERVICE_ADDRESS}


export SERVICE_ADDRESS=<static-ip-address>
kubectl create -f Deployment.yaml

Obviously this specific syntax does not work yet. But is something like this possible and if so how?


In deploy.yml:

loadBalancerIP: $LBIP

Then just create your env var and run kubectl like this:

export LBIP=""
envsubst < deploy.yml | kubectl apply -f -

envsubst is available in e.g. Ubuntu/Debian gettext package.


Python: clear console prompt

os.system('clear') works on Linux. If you are running Windows, try os.system('cls') instead.

You need to import os first like this:

import os

Using “${a:-b}” for variable assignment in scripts

This technique allows a variable to be assigned a value if another variable is either empty or undefined. NOTE: this “other variable” can be the same variable or a different one.


    If parameter is unset or null, the expansion of word is substituted. 
    Otherwise, the value of parameter is substituted.

NOTE: The form ${parameter-word} also works. If you’d like to see a full list of all forms of parameter expansion available within Bash, I highly suggest you take a look at this topic in the Bash Hackers wiki, titled “Parameter expansion”.


variable doesn’t exist

$ echo "$VAR1"

$ VAR1="${VAR1:-default value}"
$ echo "$VAR1"
default value

variable exists

$ VAR1="has value"
$ echo "$VAR1"
has value

$ VAR1="${VAR1:-default value}"
$ echo "$VAR1"
has value

The same thing can be done by evaluating other variables, or running commands within the default value portion of the notation.

$ VAR2="has another value"
$ echo "$VAR2"
has another value
$ echo "$VAR1"


$ VAR1="${VAR1:-$VAR2}"
$ echo "$VAR1"
has another value
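A command in the default portion is only executed when the fallback is actually taken, which makes it a cheap guard. A small sketch (the variable names here are invented):

```shell
# BUILD_DATE is unset, so date actually runs and supplies the fallback.
unset BUILD_DATE
BUILD_DATE="${BUILD_DATE:-$(date +%Y-%m-%d)}"
echo "$BUILD_DATE"

# RELEASE already has a value, so the date command in the default never runs.
RELEASE="v1.0"
RELEASE="${RELEASE:-$(date +%Y-%m-%d)}"
echo "$RELEASE"   # v1.0
```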

More Examples

You can also use a slightly different notation where it’s just VARX=${VARX-<def. value>}.

$ echo "${VAR1-0}"
has another value
$ echo "${VAR2-0}"
has another value
$ echo "${VAR3-0}"

In the above, $VAR1 and $VAR2 were already defined with the string “has another value”, but $VAR3 was undefined, so the default value was used instead: 0.
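The difference between ${VAR-word} and ${VAR:-word} is exactly the set-but-null case, which a quick sketch makes visible:

```shell
VAR=""                                 # set, but null
echo "colon: ${VAR:-fallback}"         # prints "colon: fallback" (null triggers fallback)
echo "no colon: ${VAR-fallback}"       # prints "no colon: " (set is enough, empty wins)

unset VAR                              # now truly unset
echo "unset: ${VAR-fallback}"          # prints "unset: fallback" (both forms fall back)
```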

Another Example

$ VARX="${VAR3-0}"
$ echo "$VARX"
0
Checking and assigning using := notation

Lastly I’ll mention the handy operator :=. This will do a check and assign a value if the variable under test is empty or undefined.


$ unset VAR1
$ echo "$VAR1"

$ echo "${VAR1:=default}"
default
$ echo "$VAR1"
default

Notice that $VAR1 is now set. The operator := did the test and the assignment in a single operation.

However if the value is set prior, then it’s left alone.

$ VAR1="some value"
$ echo "${VAR1:=default}"
some value
$ echo "$VAR1"
some value
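Two sibling forms round out the family: ${VAR:+word} substitutes word only when the variable is set and non-null, and ${VAR:?word} aborts with a message when it isn't. A sketch (variable names invented):

```shell
# :+ is the inverse of :- — it substitutes only when the variable HAS a value.
USER_NAME="alice"
echo "greeting:${USER_NAME:+ hello $USER_NAME}"   # prints "greeting: hello alice"

# :? fails the (sub)shell when the variable is missing; handy for required settings.
unset REQUIRED
if ( : "${REQUIRED:?must be set}" ) 2>/dev/null; then
  echo "never reached"
else
  echo "caught the error"
fi
```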

Handy Dandy Reference Table

Expression      VAR set, not null   VAR set, but null   VAR unset
${VAR:-word}    $VAR                word                word
${VAR-word}     $VAR                "" (empty)          word
${VAR:=word}    $VAR                assign word         assign word
${VAR:?word}    $VAR                error, exit         error, exit
${VAR:+word}    word                "" (empty)          "" (empty)


Install ruby on centos with rvm


Whether you are preparing your VPS to try out a new application, or find yourself in need of a solid and isolated Ruby installation, getting your system ready-for-work (inline with CentOS design ideologies of stability, along with its incentives of minimalism) can get you feeling a little bit lost.

In this DigitalOcean article, we are focusing on the simplest and quickest rock-solid way to get the latest Ruby interpreter (version 2.1.0) installed on a VPS running CentOS 6.5 using the Ruby Version Manager – RVM.


1. Ruby Version Manager (RVM)

2. Understanding CentOS

3. Getting Started With Installation

  1. Preparing The System
  2. Downloading And Installing RVM
  3. Installing Ruby 2.1.0 On CentOS 6.5 Using RVM
  4. Setting Up Any Ruby Version As The Default Interpreter
  5. Working With Different Ruby Installations
  6. Working With RVM gemsets

Ruby Version Manager (RVM)

Ruby Version Manager, or RVM (and rvm as a command) for short, lets developers and system administrators quickly get started using Ruby and/or developing applications with a Ruby interpreter.

Not only does RVM support multiple versions of Ruby simultaneously, but also it comes with built-in tools to create and work with virtual environments called gemsets. With the help of RVM, it is possible to create any number of perfectly isolated – and self-contained – gemsets where dependencies, packages, and the default Ruby installation are crafted to match your needs and kept accordingly between different stages of deployment — guaranteed to work the same way regardless of where.

RVM gemsets

The power of RVM is its ability to create fully isolated Ruby containers which act like a completely different (and a new) environment. Any application running inside the environment can access (and function) only within its reach.

Understanding CentOS

The CentOS operating system is derived from RHEL (Red Hat Enterprise Linux). The target users of these distributions are usually businesses that require their systems to run in the most stable way for a long time.

The main incentive of CentOS, therefore, is stability, which is achieved by supplying tested, stable versions of applications.

All the default applications shipped with CentOS are meant to be used by the system (and its supporting applications, such as the package manager YUM) alone. It is neither recommended nor easy to work with them directly.

That is why we are going to prepare our droplet running CentOS 6.5 with the necessary tools and continue with installing a Ruby interpreter targeted to run your applications.

Getting Started With Installation

Preparing The System

CentOS distributions are very lean. They do not come with many of the popular applications and tools that you are likely to need – and this is an intentional design choice as we have seen.

For our installations, however, we are going to need some libraries and tools (i.e. development [related] tools) that are not shipped by default. Therefore, we need to get them downloaded and installed before we continue.

For this purpose we will download various development tools using YUM software groups, which consist of a bunch of commonly used tools (applications) bundled together, ready to download.

As the first step, in order to get necessary development tools, run the following:

yum groupinstall -y development

or:


yum groupinstall -y 'development tools'

Note: The former (shorter) version might not work on older distributions of CentOS.

Downloading And Installing RVM

After arming our system with the tools needed for development (and deployment) of applications, such as a generic compiler, we are ready to get RVM downloaded and installed.

RVM is designed from the ground up to make the whole process of getting Ruby and managing environments easy. It is no surprise that getting RVM itself is simplified as well.

In order to download and install RVM, run the following:

curl -L get.rvm.io | bash -s stable

And to create a system environment using RVM shell script:

source /etc/profile.d/rvm.sh

Installing Ruby 2.1.0 On CentOS 6.5 Using RVM

All that is needed from now on to work with Ruby 2.1.0 (or any other version), after downloading RVM and configuring a system environment, is the actual installation of Ruby from source, which is handled by RVM.

In order to install Ruby 2.1.0 from source using RVM, run the following:

rvm reload

To find the available Ruby versions, execute rvm list known. It will display the Ruby versions. Then:

rvm install 2.1.0

Setting Up Any Ruby Version As The Default Interpreter

If you are working with multiple applications which are already in production, it is a highly likely scenario that at some point you will need to use a different version of Ruby for a certain application.

However, for most situations, you will probably be using the latest version as the interpreter to run all others.

One of RVM’s excellent features is its ability to help you set a default Ruby version to be used generally and switch between them when necessary.

To check your current default interpreter, run the following:

ruby --version
# ruby command is linked to the selected version of Ruby Interpreter (i.e. 2.1.0)

To see all the installed Ruby versions, use the following command:

rvm list rubies

To set a Ruby version as the default, run the following:

# Usage: rvm use [version] --default
rvm use 2.1.0 --default

Useful ansible stuff


inventory_hostname contains the name of the current node being worked on (as in, what it is defined as in your hosts file), so if you want to skip a task for a single node:

- name: Restart amavis
  service: name=amavis state=restarted
  when: inventory_hostname != "boris"

(Don’t restart Amavis for boris; do for all the others.)

You could also use:

  when: inventory_hostname not in groups['group_name']

if your aim is to skip (or run) a task for some nodes in the specified group.


Need to check whether you need to reboot for a kernel update?

  1. If /vmlinuz doesn’t resolve to the same kernel as we’re running
  2. Reboot
  3. Wait 45 seconds before carrying on…
- name: Check for reboot hint.
  shell: if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi
  ignore_errors: true
  register: reboot_hint

- name: Rebooting ...
  command: shutdown -r now "Ansible kernel update applied"
  async: 0
  poll: 0
  ignore_errors: true
  when: kernelup|changed or reboot_hint.stdout.find("reboot") != -1
  register: rebooting

- name: Wait for thing to reboot...
  pause: seconds=45
  when: rebooting|changed
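The shell test inside the "Check for reboot hint" task can be tried standalone. /vmlinuz and /boot/vmlinuz-* are the Debian/Ubuntu conventions; on distributions with a different layout the comparison may always report a reboot:

```shell
# Compare what the /vmlinuz symlink points at with the kernel actually running.
running="/boot/vmlinuz-$(uname -r)"
linked="$(readlink -f /vmlinuz)"
if [ "$linked" != "$running" ]; then
  echo "reboot"
else
  echo "no"
fi
```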

Fixing ~/.ssh/known_hosts

Often an ansible script may create a remote node, and often it’ll have the same IP/name as a previous entity. This confuses SSH, so after creating:

- name: Fix .ssh/known_hosts. (1)
  local_action: command  ssh-keygen -f "~/.ssh/known_hosts" -R hostname

If you’re using ec2, for instance, you could do something like :

- name: Fix .ssh/known_hosts.
  local_action: command  ssh-keygen -f "~/.ssh/known_hosts" -R {{ item.public_ip }} 
  with_items: ec2_info.instances

Where ec2_info is your registered variable from calling the ‘ec2’ module.

Debug/Dump a variable?

- name: What's in reboot_hint?
  debug: var=reboot_hint

which might output something like:

"reboot_hint": {
        "changed": true, 
        "cmd": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi", 
        "delta": "0:00:00.024759", 
        "end": "2014-07-29 09:05:06.564505", 
        "invocation": {
            "module_args": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi", 
            "module_name": "shell"
        }, 
        "rc": 0, 
        "start": "2014-07-29 09:05:06.539746", 
        "stderr": "", 
        "stdout": "reboot", 
        "stdout_lines": [
            "reboot"
        ]
    }
Which leads on to —

Want to run a shell command and do something with the output?

Registered variables have useful attributes like:

  • changed – set to boolean true if something happened (useful to tell when a task has done something on a remote machine).
  • stderr – contains stringy output from stderr
  • stdout – contains stringy output from stdout
  • stdout_lines – contains a list of lines (i.e. stdout split on \n).

(see above)

- name: Do something
  shell: /usr/bin/something | grep -c foo || true
  register: shell_output

So we could:

- name: Catch some fish (there are at least 5)
  shell: /usr/bin/somethingelse 
  when: shell_output.stdout|int > 5
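A note on the "Do something" task above: grep -c exits non-zero when it counts zero matches, which Ansible would treat as a task failure, and the trailing || true keeps the task green while still capturing the count. The same pipeline locally:

```shell
# grep -c counts matching LINES (not occurrences) and exits 1 on zero matches.
count=$(printf 'foo\nbar\nfoofoo\n' | grep -c foo || true)
echo "$count"    # prints 2

# With no matches the count is 0 and, thanks to || true, the step still succeeds.
none=$(printf 'bar\n' | grep -c foo || true)
echo "$none"     # prints 0
```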

Default values for a Variable, and host specific values.

Perhaps you’ll override a variable, or perhaps not … so you can do something like the following in a template :

max_allowed_packet = {{ mysql_max_allowed_packet|default('128M') }}

And for the annoying hosts that need a larger mysql_max_allowed_packet, just define it within the inventory hosts file like :

busy-web-server mysql_max_allowed_packet=256M
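Jinja's default filter behaves much like the shell's ${var:-word} seen earlier: the inventory value wins when present, otherwise the template falls back. The same logic expressed in shell terms:

```shell
# No inventory override: the 128M fallback applies.
unset mysql_max_allowed_packet
echo "max_allowed_packet = ${mysql_max_allowed_packet:-128M}"   # 128M

# busy-web-server defines the variable, so its value wins.
mysql_max_allowed_packet=256M
echo "max_allowed_packet = ${mysql_max_allowed_packet:-128M}"   # 256M
```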