Execute a Python function from the terminal

If you have a Python file myfunction.py:

def minutes_to_hours(minutes):
    hours = minutes / 60.0
    return hours

Then you can execute it with:

python -c 'from myfunction import *; print(minutes_to_hours(20))'
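A common alternative is to add a __main__ guard so the file can be run directly as a script instead of through -c. A minimal sketch (the function divides by 60, so it converts minutes to hours and is named accordingly here):

```python
import sys

def minutes_to_hours(minutes):
    """Convert a duration in minutes to hours."""
    return minutes / 60.0

if __name__ == "__main__":
    # Only runs when the file is executed directly, not when imported.
    # The argument is optional so the module stays importable.
    minutes = float(sys.argv[1]) if len(sys.argv) > 1 else 20.0
    print(minutes_to_hours(minutes))
```

Now `python myfunction.py 20` works directly, and `from myfunction import *` still imports cleanly without printing anything.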


Kubernetes From Scratch on Ubuntu 16.04

KUBELET
This is the first and most important component in Kubernetes. The kubelet's responsibility is to spawn and kill pods and containers on its node. It communicates directly with the Docker daemon, so we need to install Docker first. On Ubuntu 16.04 the default version of Docker is 1.12.6.

 

root@node:~$ apt-get update && apt-get install -y docker.io

root@node:~$ docker version

Client:

Version: 1.12.6

API version: 1.24

Go version: go1.6.2

Git commit: 78d1802

Built: Tue Jan 31 23:35:14 2017

OS/Arch: linux/amd64



Server:

Version: 1.12.6

API version: 1.24

Go version: go1.6.2

Git commit: 78d1802

Built: Tue Jan 31 23:35:14 2017

OS/Arch: linux/amd64

So let’s download Kubernetes binaries and run kubelet.

 
root@node:~$ wget -q --show-progress https://dl.k8s.io/v1.7.6/kubernetes-server-linux-amd64.tar.gz

kubernetes-server-linux-amd64.tar.gz 100%[==================================================================================================================================>] 417.16M 83.0MB/s in 5.1s

root@node:~$ tar xzf kubernetes-server-linux-amd64.tar.gz

root@node:~$ mv kubernetes/server/bin/* /usr/local/bin/

root@node:~$ rm -rf *

We run the kubelet with the --pod-manifest-path option. This is the directory that the kubelet will watch for pod manifest YAML files.

root@node:~$ kubelet --pod-manifest-path /tmp/manifests &> /tmp/kubelet.log &

Let’s put a simple nginx pod manifest file in that directory and see what happens.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
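Since YAML is a superset of JSON, the kubelet also accepts JSON manifests, so the same pod definition can be generated programmatically. A sketch:

```python
import json

# The same nginx pod manifest, built as a plain Python dict.
manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx", "labels": {"app": "nginx"}},
    "spec": {
        "containers": [
            {"name": "nginx", "image": "nginx",
             "ports": [{"containerPort": 80}]}
        ]
    },
}

# Writing it into the watched directory would make the kubelet start the pod:
# with open("/tmp/manifests/nginx.json", "w") as f:
#     json.dump(manifest, f, indent=2)
print(json.dumps(manifest, indent=2))
```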

Now we can check docker ps to see that our container has been added and try to curl it:

root@node:~$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c3369c72ebb2 nginx@sha256:aa1c5b5f864508ef5ad472c45c8d3b6ba34e5c0fb34aaea24acf4b0cee33187e "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0

b603d65d8bfd gcr.io/google_containers/pause-amd64:3.0 "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0



root@node:~$ docker inspect b603d65d8bfd | jq .[0].NetworkSettings.IPAddress

"172.17.0.2"

root@node:~$ curl 172.17.0.2

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

b603d65d8bfd is the ID of the pause container. This is an infrastructure container that Kubernetes creates first when creating a pod. Using the pause container, Kubernetes acquires the pod's IP and sets up the network namespace. All other containers in the pod share the same IP address and network interface. When all your other containers die, this is the last container that holds the whole network namespace.

This is how our node looks now:

KUBE API SERVER
Kubernetes uses etcd, a distributed key-value store with a strongly consistent data model, to store the state of the whole cluster. The API Server is the only component that can talk to etcd directly; all other components (including the kubelet) have to communicate through the API Server. Let’s try to run the API Server with the kubelet.

First we need etcd:

 
root@node:~$ wget -q --show-progress https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz

etcd-v3.2.6-linux-amd64.tar.gz 100%[==================================================================================================================================>] 9.70M 2.39MB/s in 4.1s

root@node:~$ tar xzf etcd-v3.2.6-linux-amd64.tar.gz

root@node:~$ mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/

root@node:~$ etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://localhost:2379 &> /tmp/etcd.log &

root@node:~$ etcdctl cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://46.101.177.76:2379

cluster is healthy

And the API Server:

root@node:~$ kube-apiserver --etcd-servers=http://localhost:2379 --service-cluster-ip-range=10.0.0.0/16 --bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0 &> /tmp/apiserver.log &

root@node:~$ curl http://localhost:8080/api/v1/nodes

{

"kind": "NodeList",

"apiVersion": "v1",

"metadata": {

"selfLink": "/api/v1/nodes",

"resourceVersion": "45"

},

"items": []

}
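The API Server speaks plain HTTP and JSON, so any client can consume responses like the one above. A quick sketch parsing that NodeList:

```python
import json

# The NodeList payload returned by GET /api/v1/nodes above.
payload = """
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {"selfLink": "/api/v1/nodes", "resourceVersion": "45"},
  "items": []
}
"""

nodes = json.loads(payload)
# No kubelet has registered with the API Server yet, so items is empty.
print(nodes["kind"], len(nodes["items"]))  # NodeList 0
```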

Now we can connect kubelet to API Server and check if it was discovered by the cluster.

root@node:~$ pkill -f kubelet

root@node:~$ kubelet --api-servers=localhost:8080 &> /tmp/kubelet.log &

root@node:~$ kubectl get nodes

NAME STATUS AGE VERSION

node Ready 5m v1.7.6

root@node:~$ kubectl get pods

No resources found.

We don’t have any pods yet, so let’s create one with kubectl create -f nginx.yaml using previous manifest file.

root@node:~$ kubectl create -f nginx.yaml

pod "nginx" created

root@node:~$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx 0/1 Pending 0 6m

Notice here that the pod hangs in Pending status – but why? This is because we don’t yet have the Kubernetes component responsible for choosing a node for a pod – the Scheduler. We will talk about it later, but for now we can just create nginx2 with an updated manifest that specifies which node should be used.

root@node:~# git diff nginx.yaml nginx2.yaml

diff --git a/nginx.yaml b/nginx2.yaml
index 7053af0..36885ae 100644
--- a/nginx.yaml
+++ b/nginx2.yaml
@@ -1,10 +1,11 @@
 apiVersion: v1
 kind: Pod
 metadata:
-  name: nginx
+  name: nginx2
   labels:
     app: nginx
 spec:
+  nodeName: node
   containers:
   - name: nginx
     image: nginx


root@node:~$ kubectl create -f nginx2.yaml

root@node:~$ kubectl get pod

NAME READY STATUS RESTARTS AGE

nginx 0/1 Pending 0 10m

nginx2 1/1 Running 0 8s

Great, so now we can see that the API Server and the kubelet work. This is how our node looks now:

KUBE SCHEDULER
The Scheduler is responsible for assigning pods to nodes. It watches for pods and assigns an available node to each pod that doesn’t have one.

We still have the nginx pod in Pending state from the previous example. Let’s run the scheduler and see what happens.

 
root@node:~$ kube-scheduler --master=http://localhost:8080 &> /tmp/scheduler.log &

root@node:~$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx 1/1 Running 0 17m

nginx2 1/1 Running 0 17m
As you can see, the scheduler kicks in, finds the pod, and assigns it to the node. You can see its placement on our node schema:
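Conceptually, the scheduler's core loop is simple: find pods without a nodeName and bind each one to a node. A toy sketch of that loop (the real kube-scheduler filters and scores nodes before binding; this just round-robins):

```python
from itertools import cycle

def schedule(pods, nodes):
    """Toy scheduler: bind every pod without a nodeName to a node, round-robin.

    Illustrates only the watch-and-assign idea, not filtering/scoring.
    """
    assignments = {}
    ring = cycle(nodes)
    for pod in pods:
        if pod.get("nodeName") is None:
            assignments[pod["name"]] = next(ring)
    return assignments

# nginx is Pending (no nodeName); nginx2 was bound manually in its manifest.
pods = [{"name": "nginx", "nodeName": None},
        {"name": "nginx2", "nodeName": "node"}]
print(schedule(pods, ["node"]))  # {'nginx': 'node'}
```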

KUBE CONTROLLER MANAGER
The Controller Manager is responsible for managing (among other things) Replication Controllers and Replica Sets, so without it we can’t use Kubernetes Deployments.
Here we are going to run it and create a deployment.

 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

root@node:~$ kube-controller-manager --master=http://localhost:8080 &> /tmp/controller-manager.log &

root@node:~$ kubectl create -f nginx-deploy.yaml

deployment "nginx" created

root@node:~$ kubectl get deploy

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

nginx 3 3 3 2 7s

root@node:~$ kubectl get po

NAME READY STATUS RESTARTS AGE

nginx 1/1 Running 0 32m

nginx-31893996-3dnx7 1/1 Running 0 18s

nginx-31893996-5d1ts 1/1 Running 0 18s

nginx-31893996-9k93w 1/1 Running 0 18s

nginx2 1/1 Running 0 32m
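What the Controller Manager does for this deployment boils down to a reconciliation loop: compare the desired replica count with the pods actually observed and act on the difference. A toy sketch:

```python
def reconcile(desired_replicas, running_pods):
    """Toy reconciliation step: return the action a replica controller
    would take to move the observed state toward the desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        return ("create", diff)
    if diff < 0:
        return ("delete", -diff)
    return ("noop", 0)

# With replicas: 3 and one pod running, two more must be created.
print(reconcile(3, ["nginx-31893996-3dnx7"]))  # ('create', 2)
```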

Updated version of our node scheme:

KUBE PROXY
The Kubernetes (network) proxy is responsible for managing Kubernetes Services, and thus for internal load balancing and for exposing pods to other pods and to external clients.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30073
  selector:
    run: nginx

root@node:~$ kube-proxy --master=http://localhost:8080 &> /tmp/proxy.log &

root@node:~$ kubectl create -f nginx-svc.yaml

service "nginx" created

root@node:~$ kubectl get svc

NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes 10.0.0.1 <none> 443/TCP 2h

nginx 10.0.167.201 <nodes> 80:30073/TCP 7s
The nginx deployment is now exposed externally via port 30073; we can check that with curl.

 
$ doctl compute droplet list (env: st)

ID Name Public IPv4 Private IPv4 Public IPv6 Memory VCPUs Disk Region Image Status Tags

63370004 node1 46.101.177.76 10.135.53.41 2048 2 40 fra1 Ubuntu 16.04.3 x64 active

$ curl http://46.101.177.76:30073

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>
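Conceptually, what kube-proxy does for this service is spread each incoming connection on the node port across the pod endpoints that match the selector. A toy sketch (the real implementation uses iptables rules, and the extra endpoint addresses here are illustrative):

```python
import itertools

# Pod endpoints backing the service, as kube-proxy would learn them
# from the API Server (addresses beyond 172.17.0.2 are made up).
endpoints = ["172.17.0.2:80", "172.17.0.3:80", "172.17.0.4:80"]
backend = itertools.cycle(endpoints)

def pick_backend():
    """Choose the next endpoint for a connection arriving on the node port."""
    return next(backend)

print([pick_backend() for _ in range(4)])
# Cycles through the endpoints and wraps around.
```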

How to add a directory to the PATH in Ubuntu?

Edit .bashrc in your home directory and add the following line:

export PATH="/path/to/dir:$PATH"

You will need to source your .bashrc or log out and back in (or restart the terminal) for the change to take effect. To source your .bashrc, simply type:

$ source ~/.bashrc
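Note that the order matters: directories earlier in PATH win the lookup. A quick sketch demonstrating this with Python's shutil.which (the directory and the program name "myprog" are made up for the demo):

```python
import os
import shutil
import stat
import tempfile

# Create a throwaway directory containing a fake executable.
tmpdir = tempfile.mkdtemp()
prog = os.path.join(tmpdir, "myprog")
with open(prog, "w") as f:
    f.write("#!/bin/sh\necho hello\n")
os.chmod(prog, os.stat(prog).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Prepending the directory (like export PATH="/path/to/dir:$PATH")
# makes the lookup find our program first.
search_path = tmpdir + os.pathsep + os.environ.get("PATH", "")
print(shutil.which("myprog", path=search_path))  # the file we just created
```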

Using environment variables in Kubernetes deployment spec

I am concerned about pushing information such as passwords or IP addresses into remote Git repositories. Can I avoid this e.g. by making use of environment variables, e.g. with a deployment spec and actual deployment roughly as follows:

spec:
   type: LoadBalancer
   loadBalancerIP: ${SERVICE_ADDRESS}

and

export SERVICE_ADDRESS=<static-ip-address>
kubectl create -f Deployment.yaml

Obviously this specific syntax does not work yet. But is something like this possible and if so how?

Solution:

In deploy.yml:

loadBalancerIP: $LBIP

Then just create your env var and run kubectl like this:

export LBIP="1.2.3.4"
envsubst < deploy.yml | kubectl apply -f -


envsubst is available in e.g. the Ubuntu/Debian gettext package.
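If envsubst is not available, the same substitution is easy to reproduce with Python's string.Template, which uses the same ${VAR} placeholder syntax (a sketch; the manifest fragment is illustrative):

```python
import os
from string import Template

# Same idea as `envsubst < deploy.yml`: fill ${VAR} placeholders from the
# environment before handing the manifest to kubectl.
os.environ["LBIP"] = "1.2.3.4"

manifest = "spec:\n  type: LoadBalancer\n  loadBalancerIP: ${LBIP}\n"
rendered = Template(manifest).substitute(os.environ)
print(rendered)
```

`substitute` raises KeyError on an unset variable, which is arguably safer than envsubst's silent empty-string replacement.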

 

Jenkins parsing POMs StackOverflowError

This issue is usually caused by the Java or Maven version.
You can add a Java version in the Manage Jenkins -> Tool configuration section as:
JDK -> JDK installations
java1

Then you can choose the required java in the job as follows:
java2

To change Maven:
1. First download maven from: https://maven.apache.org/download.cgi

2. Unzip it in a directory accessible by Jenkins suppose: /home/jenkins/apache-maven-3.2.3

3. Then add maven as

maven1
You can also add mvn to PATH and manually execute mvn clean for that project.

Jenkins initialization has not reached the COMPLETED

Well, 'Loaded all jobs' is not the last init stage; 'Completed' is. So there appears to be a problem.

Do you have the extremenotification plugin installed? It’s the one referenced in JENKINS-37759.

jenkins: bash script to backup all jobs

#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for i in $(java -jar jenkins-cli.jar -s http://localhost:8080 list-jobs --username admin --password admin123); do
    echo "$i"
    java -jar jenkins-cli.jar -s http://localhost:8080 get-job --username admin --password admin123 "$i" > "backup/${i}.xml"
    echo "done"
done
IFS=$SAVEIFS