Automating Slapd Install

You could execute the following commands:

export DEBIAN_FRONTEND=noninteractive
debconf-set-selections <<< 'slapd slapd/root_password password 123123'
debconf-set-selections <<< 'slapd slapd/root_password_again password 123123'
apt-get install -y slapd ldap-utils

Or for a more complex installation you can use:
cat > /root/debconf-slapd.conf << 'EOF'
slapd slapd/password1 password admin
slapd slapd/internal/adminpw password admin
slapd slapd/internal/generated_adminpw password admin
slapd slapd/password2 password admin
slapd slapd/unsafe_selfwrite_acl note
slapd slapd/purge_database boolean false
slapd slapd/domain string phys.ethz.ch
slapd slapd/ppolicy_schema_needs_update select abort installation
slapd slapd/invalid_config boolean true
slapd slapd/move_old_database boolean false
slapd slapd/backend select MDB
slapd shared/organization string ETH Zurich
slapd slapd/dump_database_destdir string /var/backups/slapd-VERSION
slapd slapd/no_configuration boolean false
slapd slapd/dump_database select when needed
slapd slapd/password_mismatch note
EOF
export DEBIAN_FRONTEND=noninteractive
cat /root/debconf-slapd.conf | debconf-set-selections
apt install ldap-utils slapd -y
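
Once the install finishes, you can verify that the preseeded values took effect. A quick sanity check, assuming the usual Debian behaviour of deriving the base DN dc=phys,dc=ethz,dc=ch and the admin DN cn=admin,dc=phys,dc=ethz,dc=ch from the slapd/domain value above:

# bind as the preseeded admin and dump the base entry
ldapsearch -x -H ldap://localhost -D cn=admin,dc=phys,dc=ethz,dc=ch -w admin -b dc=phys,dc=ethz,dc=ch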

The possible keys for debconf-set-selections are defined in the slapd.templates file in the Debian package, together with a description of what each configuration key is about.
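
If you want to dump the answers an already-configured machine would give, the debconf-utils package ships debconf-get-selections, which prints the debconf database in exactly the format debconf-set-selections consumes:

apt-get install -y debconf-utils
# show only the slapd answers
debconf-get-selections | grep ^slapd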

For slapd on Debian Jessie, you can find the file here: https://anonscm.debian.org/cgit/pkg-openldap/openldap.git/tree/debian/slapd.templates?h=jessie

 


python pip broken on ubuntu: forcing reinstallation of alternative /usr/bin/pip2 because link group pip is broken

I got the same error. I did this and it worked!

sudo apt-get install --reinstall python2.7

This reinstalls Python. Don't ever try to uninstall Python; it will break your OS, since parts of Ubuntu depend on it. Then:

sudo apt-get purge python-pip

This is to remove pip.

 wget https://bootstrap.pypa.io/get-pip.py

This downloads the pip installer; run it to install pip:

sudo python get-pip.py

Then you can install packages using pip, like:

sudo pip install package-name

change python for pip

You can check which python version pip is configured with:

pip --version

pip is packaged per Python version under the name python$VERSION-pip; installing that package gives you the version-specific pip. You can then change the default pip with update-alternatives:

update-alternatives --install /usr/bin/pip pip /usr/bin/pip2 1

update-alternatives --config pip

Then select the pip version you want.
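
A minimal sketch of registering both pips as alternatives (assuming pip2 and pip3 exist in /usr/bin; in auto mode the highest priority wins):

# register pip2 with priority 1 and pip3 with priority 2
update-alternatives --install /usr/bin/pip pip /usr/bin/pip2 1
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 2
# pick one interactively
update-alternatives --config pip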

Kubernetes From Scratch on Ubuntu 16.04

KUBELET
This is the first and most important component in Kubernetes. The kubelet's responsibility is to spawn and kill pods and containers on its node. It communicates directly with the Docker daemon, so we need to install Docker first. For Ubuntu 16.04 the default version of Docker is 1.12.6.

 

root@node:~$ apt-get update && apt-get install -y docker.io
root@node:~$ docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

So let’s download Kubernetes binaries and run kubelet.

 
root@node:~$ wget -q --show-progress https://dl.k8s.io/v1.7.6/kubernetes-server-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz 100%[==========>] 417.16M 83.0MB/s in 5.1s
root@node:~$ tar xzf kubernetes-server-linux-amd64.tar.gz
root@node:~$ mv kubernetes/server/bin/* /usr/local/bin/
root@node:~$ rm -rf *

We run kubelet with the --pod-manifest-path option. This is the directory that kubelet will watch for pod manifest YAML files.

root@node:~$ mkdir -p /tmp/manifests
root@node:~$ kubelet --pod-manifest-path /tmp/manifests &> /tmp/kubelet.log &

Let's put a simple nginx pod manifest into that directory and see what happens.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Save this as /tmp/manifests/nginx.yaml. Now we can check docker ps to see that our container has been started, and try to curl it:

root@node:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c3369c72ebb2 nginx@sha256:aa1c5b5f864508ef5ad472c45c8d3b6ba34e5c0fb34aaea24acf4b0cee33187e "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0
b603d65d8bfd gcr.io/google_containers/pause-amd64:3.0 "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-node_default_594710e736bc86ef2c87ea5615da08b1_0

root@node:~$ docker inspect b603d65d8bfd | jq .[0].NetworkSettings.IPAddress
"172.17.0.2"

root@node:~$ curl 172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

b603d65d8bfd is the id of a pause container. This is an infrastructure container that Kubernetes creates first when creating a pod. Through the pause container Kubernetes acquires the pod's IP and sets up the network namespace; all other containers in the pod share the same IP address and network interface. Even when all your application containers die, the pause container still holds the whole network namespace.
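
You can see this sharing directly in Docker: the nginx container joins the pause container's network namespace. A quick check, using the container ids from the docker ps output above (the real output shows the full 64-character id, abbreviated here):

# the app container's network mode points at the pause container
root@node:~$ docker inspect -f '{{.HostConfig.NetworkMode}}' c3369c72ebb2
container:b603d65d8bfd...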

This is what our node looks like now:

KUBE API SERVER
Kubernetes uses etcd, a distributed database with a strongly consistent data model, to store the state of the whole cluster. The API Server is the only component that talks to etcd directly; all other components (including the kubelet) have to communicate through the API Server. Let's try to run the API Server alongside the kubelet.

First we need etcd:

 
root@node:~$ wget -q --show-progress https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz
etcd-v3.2.6-linux-amd64.tar.gz 100%[==========>] 9.70M 2.39MB/s in 4.1s
root@node:~$ tar xzf etcd-v3.2.6-linux-amd64.tar.gz
root@node:~$ mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/
root@node:~$ etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://localhost:2379 &> /tmp/etcd.log &
root@node:~$ etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://46.101.177.76:2379
cluster is healthy

And the API Server:

root@node:~$ kube-apiserver --etcd-servers=http://localhost:2379 --service-cluster-ip-range=10.0.0.0/16 --bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0 &> /tmp/apiserver.log &
root@node:~$ curl http://localhost:8080/api/v1/nodes
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "45"
  },
  "items": []
}
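
Since the API Server persists everything into etcd, you can also peek at the raw keys it writes (assuming the default etcd3 storage backend; the key layout under /registry is an internal detail and may differ between versions):

# list the first few keys the API Server has stored
root@node:~$ ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only | head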

Now we can connect the kubelet to the API Server and check that it gets discovered by the cluster.

root@node:~$ pkill -f kubelet
root@node:~$ kubelet --api-servers=localhost:8080 &> /tmp/kubelet.log &
root@node:~$ kubectl get nodes
NAME STATUS AGE VERSION
node Ready 5m v1.7.6
root@node:~$ kubectl get pods
No resources found.

We don't have any pods yet, so let's create one with kubectl create -f nginx.yaml, using the previous manifest file.

root@node:~$ kubectl create -f nginx.yaml
pod "nginx" created
root@node:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 6m

Notice that the pod hangs in Pending status. Why? Because we don't yet have the Kubernetes component responsible for choosing a node for the pod: the Scheduler. We will talk about it later, but for now we can just create nginx2 with an updated manifest that specifies which node should be used.

root@node:~# git diff nginx.yaml nginx2.yaml
diff --git a/nginx.yaml b/nginx2.yaml
index 7053af0..36885ae 100644
--- a/nginx.yaml
+++ b/nginx2.yaml
@@ -1,10 +1,11 @@
 apiVersion: v1
 kind: Pod
 metadata:
-  name: nginx
+  name: nginx2
   labels:
     app: nginx
 spec:
+  nodeName: node
   containers:
   - name: nginx
     image: nginx


root@node:~$ kubectl create -f nginx2.yaml
root@node:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 Pending 0 10m
nginx2 1/1 Running 0 8s

Great, so now we can see that the API Server and the kubelet work together. This is what our node looks like now:

KUBE SCHEDULER
The Scheduler is responsible for assigning pods to nodes. It watches for pods and assigns an available node to each pod that doesn't have one.

We still have the nginx pod in Pending state from the previous example. Let's run the scheduler and see what happens.

 
root@node:~$ kube-scheduler --master=http://localhost:8080 &> /tmp/scheduler.log &
root@node:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 17m
nginx2 1/1 Running 0 17m
As you can see, the scheduler kicks in, finds the pending pod, and assigns it to the node. You can see its placement on our node schema:
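
You can also confirm that it was the scheduler that did the binding by looking at the pod's events (output trimmed; the exact message format varies by version):

root@node:~$ kubectl describe pod nginx | tail
...
Events:
  ... Scheduled  Successfully assigned nginx to node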

KUBE CONTROLLER MANAGER
The Controller Manager is responsible for managing (among others) Replication Controllers and Replica Sets, so without it we can't use Kubernetes Deployments.
Here we are going to run it and create a deployment.

 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

root@node:~$ kube-controller-manager --master=http://localhost:8080 &> /tmp/controller-manager.log &
root@node:~$ kubectl create -f nginx-deploy.yaml
deployment "nginx" created
root@node:~$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 2 7s
root@node:~$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 32m
nginx-31893996-3dnx7 1/1 Running 0 18s
nginx-31893996-5d1ts 1/1 Running 0 18s
nginx-31893996-9k93w 1/1 Running 0 18s
nginx2 1/1 Running 0 32m
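
A quick way to see the controller manager at work is to delete one of the deployment's pods and watch the Replica Set spawn a replacement (pod names taken from the listing above; yours will differ):

root@node:~$ kubectl delete pod nginx-31893996-3dnx7
root@node:~$ kubectl get po
# a new nginx-31893996-xxxxx pod appears, keeping replicas at 3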

Updated version of our node scheme:

KUBE PROXY
The Kubernetes (network) proxy is responsible for implementing Kubernetes Services, and thus internal load balancing and exposing pods to other pods and to external clients. Below is a NodePort Service for our nginx deployment:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30073
  selector:
    run: nginx

root@node:~$ kube-proxy --master=http://localhost:8080 &> /tmp/proxy.log &
root@node:~$ kubectl create -f nginx-svc.yaml
service "nginx" created
root@node:~$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 2h
nginx 10.0.167.201 <nodes> 80:30073/TCP 7s
The nginx deployment is now exposed externally via port 30073; we can check that with curl.

 
$ doctl compute droplet list (env: st)
ID Name Public IPv4 Private IPv4 Public IPv6 Memory VCPUs Disk Region Image Status Tags
63370004 node1 46.101.177.76 10.135.53.41 2048 2 40 fra1 Ubuntu 16.04.3 x64 active

$ curl http://46.101.177.76:30073
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
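
Under the hood, kube-proxy (in its default iptables mode) programs NAT rules for the service. You can peek at them on the node (output abbreviated; chain names contain generated hashes):

# show the NodePort rule kube-proxy installed for port 30073
root@node:~$ iptables -t nat -L KUBE-NODEPORTS -n | grep 30073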

Using environment variables in Kubernetes deployment spec

I am concerned about pushing information such as passwords or IP addresses to remote Git repositories. Can I avoid this, e.g. by making use of environment variables, with a deployment spec and actual deployment roughly as follows:

spec:
   type: LoadBalancer
   loadBalancerIP: ${SERVICE_ADDRESS}

and

export SERVICE_ADDRESS=<static-ip-address>
kubectl create -f Deployment.yaml

Obviously this specific syntax does not work yet. But is something like this possible and if so how?

Solution:

In deploy.yml:

loadBalancerIP: $LBIP

Then just create your env var and run kubectl like this:

export LBIP="1.2.3.4"
envsubst < deploy.yml | kubectl apply -f -


envsubst is available in the gettext package on e.g. Ubuntu/Debian.
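
One caveat: by default envsubst replaces every variable it can resolve, which can mangle manifests that contain legitimate dollar signs. You can restrict it to an explicit list:

export LBIP="1.2.3.4"
# only substitute $LBIP, leave any other $... untouched
envsubst '$LBIP' < deploy.yml | kubectl apply -f -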

 

Docker – Ubuntu – bash: ping: command not found

Docker images are pretty minimal, but you can install ping in the official ubuntu Docker image via:

apt-get update
apt-get install iputils-ping

Chances are you don't need ping in your image and just want it for testing purposes; the example above will help you out.

But if you need ping to exist in your image, you can create a Dockerfile, or commit the container in which you ran the above commands to a new image.

Commit:

docker commit -m "Installed iputils-ping" --author "Your Name <name@domain.com>" ContainerNameOrId yourrepository/imagename:tag

Dockerfile:

FROM ubuntu
RUN apt-get update && apt-get install -y iputils-ping
CMD bash
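
Build and try the image (the tag name is just an example):

docker build -t ubuntu-ping .
# override CMD to ping once and exit
docker run --rm -it ubuntu-ping ping -c 1 8.8.8.8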