SSH to the minions (worker nodes) in a PKS cluster

a. Open the Ops Manager interface by navigating to the Ops Manager fully qualified domain name (FQDN) in a web browser.

b. Click the Ops Manager Director tile and select the Status tab.

c. Record the IP address for the Director job. This is the IP address of the VM where the BOSH Director runs.
d. Select the Credentials tab.

e. Click Link to Credential to view the Director Credentials. Record these credentials.

f. SSH to the Ops Manager VM.
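For example (a sketch; on most IaaSes the Ops Manager VM uses the ubuntu user, but the exact user, key file and FQDN depend on how Ops Manager was deployed):

ssh -i ops_mgr.pem ubuntu@opsman.example.com   # key path and FQDN are placeholders for your environment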
a. bosh alias-env gcp -e 192.168.101.10 --ca-cert /var/tempest/workspaces/default/root_ca_certificate
where 192.168.101.10 is the DIRECTOR-IP-ADDRESS recorded from the Status tab above.
b. bosh -e gcp log-in
Enter the Director identity and password recorded from the Credentials tab above.
c. bosh -e gcp vms
This lists the VMs in all deployments.
d. bosh -e gcp -d service-instance_00579a16-7d5e-4ab5-9dfb-e70873d24ed2 ssh worker/55764b66-7eb9-4834-a9f4-24fd421558cc
where service-instance_00579a16-7d5e-4ab5-9dfb-e70873d24ed2 is the deployment name and worker/55764b66-7eb9-4834-a9f4-24fd421558cc is the instance name.
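If you don't already know the deployment or instance name, list them first (the names above are just examples from this environment):

$ bosh -e gcp deployments                                                    # deployment names, e.g. service-instance_<GUID>
$ bosh -e gcp -d service-instance_00579a16-7d5e-4ab5-9dfb-e70873d24ed2 vms   # instances in that deployment, e.g. worker/<GUID>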


How to access the Kubernetes dashboard from outside the cluster

Edit the kubernetes-dashboard service.

$ kubectl -n kube-system edit service kubernetes-dashboard

You should see the YAML representation of the service. Change type: ClusterIP to type: NodePort and save the file (a one-line alternative to the interactive edit is shown after the YAML below). If it has already been changed, go to the next step.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
  uid: 8e48f478-993d-11e7-87e0-901b0e532516
spec:
  clusterIP: 10.100.124.90
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
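If you prefer not to edit the service interactively, the same change can be made with a one-line patch (a sketch, assuming the service name and namespace shown above):

$ kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'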

Next we need to check the port on which the Dashboard was exposed.

$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.100.124.90   <nodes>       443:31707/TCP   21h

Dashboard has been exposed on port 31707 (HTTPS). Now you can access it from your browser at:
https://master-ip:31707

The master-ip can be found by executing kubectl cluster-info. Usually it is either 127.0.0.1 or the IP of your machine, assuming that the cluster is running directly on the machine on which these commands are executed.

In case you are trying to expose the Dashboard using NodePort on a multi-node cluster, you have to find out the IP of the node on which the Dashboard is running. Instead of accessing https://master-ip:nodePort, you should access https://node-ip:nodePort.
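One way to find that node IP (a sketch; it assumes the standard k8s-app: kubernetes-dashboard label used in the service selector above):

$ kubectl -n kube-system get pod -l k8s-app=kubernetes-dashboard -o wide   # NODE column shows where the Dashboard pod runs
$ kubectl get nodes -o wide                                                # shows the internal/external IP of that node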

If the dashboard is still not accessible, execute the command below:

sudo iptables -P FORWARD ACCEPT

Prometheus pod consuming a lot of memory

With Prometheus 1.x I can use --storage.local.memory-chunks to limit the memory usage.

Prometheus 2.0 uses the OS page cache for data. It will only use as much memory as it needs to operate. The good news is that memory use is far more efficient than in 1.x. The amount needed to collect more data is minimal.

There is some extra memory needed if you run a large number of queries, or queries that require a large amount of data.
You will want to monitor the memory use of the Prometheus process (process_resident_memory_bytes) and how much page cache the node has left (node_exporter, node_memory_Cached).
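For example, both can be checked with ad-hoc queries against the Prometheus HTTP API (a sketch; it assumes Prometheus is reachable on localhost:9090, that its own scrape job is named prometheus, and that node_exporter is being scraped):

$ curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=process_resident_memory_bytes{job="prometheus"}'
$ curl -sG 'http://localhost:9090/api/v1/query' --data-urlencode 'query=node_memory_Cached'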


Prometheus giving an error for the alert rules ConfigMap in Kubernetes

When creating a ConfigMap for Prometheus alert rules, it gives an error as follows:
“rule manager” msg=”loading groups failed” err=”yaml: unmarshal errors:\n line 3: field rules not found in type rulefmt.RuleGroups”

The correct format for the rules to be added is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alert-rules
data:
  alert.rules: |-
    groups:
    - name: example
      rules:
      - alert: Lots_Of_Billing_Jobs_In_Queue
        expr: sum (container_memory_working_set_bytes{id="/",kubernetes_io_hostname=~"(.*)"}) / sum (machine_memory_bytes{kubernetes_io_hostname=~"(.*)"}) * 100 > 40
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: container memory high
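With the ConfigMap in this shape, the rule group can also be validated before it ever reaches Prometheus (a sketch; it assumes the groups: section is saved locally as alert.rules and the ConfigMap manifest as prometheus-alert-rules.yaml):

$ promtool check rules alert.rules             # Prometheus 2.x syntax check for the groups/rules format above
$ kubectl apply -f prometheus-alert-rules.yaml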

Can’t chown /usr/local in High Sierra

/usr/local can no longer be chown’d in High Sierra. Instead use

sudo chown -R $(whoami) $(brew --prefix)/*

Kubernetes pod does not get deleted

So I have a pod that died; the ReplicaSet and Deployment are gone, and even when I delete the pod, it is still there.

The solution is to forcefully delete it using:

kubectl delete pod <pod> --force --grace-period=0
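If the pod is still stuck (for example in Terminating), it is usually being held by finalizers; as a last resort they can be cleared (use with care, this is only a sketch):

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'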

Assign a static IP to a Docker container using a user-defined bridge network

version: "2"
services:
  host1:
    image: nginx    # placeholder image; use whatever image your container actually needs
    networks:
      mynet:
        ipv4_address: 172.25.0.101
networks:
  mynet:
    driver: bridge
    ipam:
      config:
      - subnet: 172.25.0.0/24
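Note that static IPs only work on user-defined networks such as mynet above, not on Docker's built-in default bridge. To verify the assignment (assuming the file above is saved as docker-compose.yml):

$ docker-compose up -d
$ docker inspect $(docker-compose ps -q host1) | grep -i ipaddress   # should show 172.25.0.101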