Disable RPCbind

If systemd shows up as the process listening on port 111 (it holds the socket on behalf of rpcbind via socket activation) but you don’t want that, follow these steps:

  • verify it’s listening on port 111 with netstat or ss:
# ss -tpna|grep 111
LISTEN     0      128          *:111                      *:*                   users:(("systemd",pid=1,fd=39))
LISTEN     0      128         :::111                     :::*                   users:(("systemd",pid=1,fd=38))
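The owning PID can be pulled out of that output mechanically. The snippet below is a small sketch that uses the sample line from above as a stand-in; on a live system you would pipe `ss -tlpn | grep ':111 '` instead of the here-string:

```shell
# Extract the PID that owns port 111 from ss output.
# The sample line is copied from the transcript above.
sample='LISTEN 0 128 *:111 *:* users:(("systemd",pid=1,fd=39))'
pid=$(printf '%s\n' "$sample" | grep -o 'pid=[0-9]*' | cut -d= -f2 | head -n1)
echo "$pid"
```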
  • disable rpcbind:
# systemctl stop rpcbind

# systemctl disable rpcbind

# systemctl mask rpcbind

# systemctl stop rpcbind.socket

# systemctl disable rpcbind.socket

# systemctl status rpcbind
● rpcbind.service
   Loaded: masked (/dev/null; bad)
   Active: inactive (dead) since Sun 2017-12-17 15:31:11 CET; 3min 51s ago
 Main PID: 10920 (code=exited, status=0/SUCCESS)

Dec 13 13:28:35 sys.example.com systemd[1]: Starting RPC bind service...
Dec 13 13:28:35 sys.example.com systemd[1]: Started RPC bind service.
Dec 17 15:31:11 sys.example.com systemd[1]: Stopping RPC bind service...
Dec 17 15:31:11 sys.example.com systemd[1]: Stopped RPC bind service.
  • verify it’s no longer listening:
# ss -tpna|grep 111
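The steps above can be collected into one sketch. Note that rpcbind.socket is masked here as well, since socket activation would otherwise let systemd re-open port 111; the DRY_RUN wrapper is an assumption of this sketch (not a systemctl feature) so the script can be previewed safely:

```shell
# Stop, disable, and mask both the rpcbind service and its
# activation socket. With DRY_RUN=1 the commands are only printed.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

for unit in rpcbind.service rpcbind.socket; do
  run systemctl stop "$unit"
  run systemctl disable "$unit"
  run systemctl mask "$unit"
done
```

Set DRY_RUN=0 (as root) to actually apply the changes, then re-run the ss check above to confirm the port is closed.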

Configure Prometheus with Kubernetes

Quick start

To quickly start all components, run:

kubectl apply \
  --filename https://raw.githubusercontent.com/giantswarm/kubernetes-prometheus/master/manifests-all.yaml

This will create the monitoring namespace and bring up all components in it.

To shut down all components again you can just delete that namespace:

kubectl delete namespace monitoring

Default Dashboards

If you want to re-import the default dashboards from this setup, run this job:

kubectl apply --filename ./manifests/grafana/grafana-import-dashboards-job.yaml

If the job already exists from an earlier run, delete it first:

kubectl --namespace monitoring delete job grafana-import-dashboards
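The delete-then-apply sequence can be wrapped in one helper. The function name is this sketch’s invention; `--ignore-not-found` makes the delete a no-op when the job is absent, so the helper is safe to run either way:

```shell
# Re-import the default Grafana dashboards, removing a stale job first.
reimport_dashboards() {
  kubectl --namespace monitoring delete job grafana-import-dashboards --ignore-not-found
  kubectl apply --filename ./manifests/grafana/grafana-import-dashboards-job.yaml
}
```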

More Dashboards

See grafana.net for some example dashboards and plugins.

To add a new graph, go to the Grafana UI -> Dashboards -> New.

Select Graph -> click on “Panel title” -> Edit. In the Metrics section, select Prometheus as the data source and add the following query in the query box: sum by (status) (irate(kubelet_docker_operations[5m]))

Graph lines will appear; further graphs can be added to Grafana the same way.

kernel:NMI watchdog: BUG: soft lockup – CPU#0 stuck for 21s!

  • Consult /etc/grub.conf and /boot/grub/grub.conf on RHEL 6 and below, or /etc/sysconfig/grub on RHEL 7, and verify whether console output is redirected to a serial console, e.g. using console=ttyS1 or console=ttyS1,9600. In both of these cases the output is restricted to 9600 baud, which limits throughput and can cause issues.
  • A fix might be to stop logging to the serial console, or to explicitly configure a higher baud rate, e.g. console=ttyS1,115200. Note that in some situations even 115200 baud can be a limiting factor.
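On RHEL 7 that change lands in the GRUB_CMDLINE_LINUX line of /etc/sysconfig/grub; a fragment might look like the following, where ttyS1 and crashkernel=auto are illustrative values, not taken from any particular system:

```shell
# /etc/sysconfig/grub (RHEL 7): raise the serial console speed.
# Regenerate the config afterwards with:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
GRUB_CMDLINE_LINUX="crashkernel=auto console=ttyS1,115200"
```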

Otherwise, investigate further root-cause conditions:

  • Determine whether the system was under extremely high load at the time the soft lockups were logged. If the sysstat package was installed, it will have recorded the load average every 10 minutes using a cron job.
  • The load average can then be found by searching for ldavg in /var/log/sa/sar<day>, where <day> is the day of the month on which the soft lockups were seen. If the load average is significantly higher than the number of logical CPU cores on the system, the soft lockups probably occurred because of extremely high workloads.
    In that case it is best to determine which processes caused the high load and make changes so that they don’t cause the issue again.
  • Since defects in the kernel could also have caused the soft lockups, the full logs around the time of the lockups need to be investigated to see whether the issue is a bug or is already fixed by an erratum. It can help to check the changelog of the latest kernel available on Red Hat Network for soft lockup fixes made since the installed kernel version.
  • Another way to rule out a known, already-fixed issue is to run the system with the latest kernel and see whether the soft lockups recur. Red Hat support may be required to conclusively determine whether the issue is a bug.
  • Also verify with a hardware vendor that the issue is not hardware related. One way to verify that the issue is not a known and solved hardware problem is to update the firmware or BIOS to the latest available from the hardware vendor.
  • On virtual systems, soft lockups can indicate that the underlying hypervisor is overcommitted. Please see this article addressing this issue: VMware virtual machine guest suffers multiple soft lockups at the same time
  • If all of the above have been verified to not be the cause it could be a case where soft lockups do not indicate a problem; for example on systems with very large numbers of CPU cores.
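The load-versus-cores comparison above can be sketched as a quick one-off check. This reads the current 1-minute value from /proc/loadavg; for historical data you would grep for ldavg in the sar files instead:

```shell
# Compare the 1-minute load average with the number of logical CPUs;
# a sustained load far above the core count points at workload-induced
# soft lockups rather than a kernel or hardware defect.
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
awk -v l="$load" -v c="$cores" \
  'BEGIN { if (l + 0 > c + 0) print "load exceeds core count"; else print "load within core count" }'
```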

If this is encountered in RHEL 5, then increase the threshold at which the messages appear using the following procedures:

  • Run the following command and check whether “soft lockup” errors are still encountered on the system:
    # sysctl -w kernel.softlockup_thresh=30
  • To make this parameter persistent across reboots, add the following line to the /etc/sysctl.conf file:
    kernel.softlockup_thresh = 30

In RHEL 6 and above, the threshold is named “watchdog_thresh” and can be set no higher than 60:

  • To make this change in RHEL 6 and above, set the tunable kernel.watchdog_thresh in /etc/sysctl.conf.
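That persistent setting would look like the following in /etc/sysctl.conf, using the upper bound of 60 as an example; apply it without a reboot via `sysctl -p`:

```
# /etc/sysctl.conf (RHEL 6 and later)
kernel.watchdog_thresh = 60
```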

Additional Notes:

  • The softlockup_thresh kernel parameter was introduced in Red Hat Enterprise Linux 5.2 with kernel-2.6.18-92.el5; thus it is not possible to modify it on older versions.

Root Cause

  • Soft lockups are situations in which the kernel’s scheduler subsystem has not been given a chance to perform its job for more than the limit set by the watchdog threshold, in seconds; they can be caused by defects in the kernel, by hardware issues or by extremely high workloads.
  • If lockups are encountered on a virtual system, it is important to ensure that the hypervisor is not overcommitted.
  • Hardware issues, such as newly installed memory, might cause soft lockups.
  • Misconfiguration might also cause the issue, e.g. redirecting console output to a serial device limited to 9600 baud.
  • On systems with a very large number of CPU cores, soft lockups might not indicate a problem.

WordPress: 301 moved permanently via IP but loads with localhost

If requests via the IP return this error but the site loads when you curl localhost, you can put nginx in front of Apache with proxy_pass.

Install nginx:
yum install epel-release

yum install nginx

Edit /etc/nginx/nginx.conf and replace the default listen port 80 with 8080, so nginx’s default server does not clash with Apache.

Create a file /etc/nginx/conf.d/wp.conf with the following contents:

server {
        listen       8081;
        server_name  _;

        location / {
            try_files $uri @wp;
        }

        location @wp {
            proxy_pass http://localhost:80;
        }
}

Restart nginx.

Your WordPress site will be available on <IP>:8081.
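A quick smoke test for the proxy chain can be run on the server itself; the helper name below is this sketch’s invention:

```shell
# Print the HTTP status returned on a given local port; 200 (or a
# WordPress redirect such as 301/302) means that listener is up.
check_wp() {
  curl -s -o /dev/null -w '%{http_code}\n' "http://127.0.0.1:${1}/"
}
# check_wp 8081   # via nginx
# check_wp 80     # directly against Apache
```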

WordPress configure dynamic url

Edit wp-config.php and add the following lines:
define('WP_HOME', '/');
define('WP_SITEURL', '/');

Change the values in the database:

update wp_options set option_value='/' where option_name='siteurl';

update wp_options set option_value='/' where option_name='home';

Now your site URL is no longer hardcoded to an IP or DNS name.
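The two UPDATE statements can also be applied from the shell in one go; the database name, credentials, and the wp_ table prefix below are assumptions about your install:

```shell
# Set both siteurl and home to "/" in a single statement.
set_relative_urls() {
  mysql -u root -p wordpress -e \
    "update wp_options set option_value='/' where option_name in ('siteurl','home');"
}
```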

InnoDB: Cannot open '/var/lib/mysql/ib_buffer_pool.incomplete' for writing: Permission denied

Change the owner of the /var/lib/mysql directory:
chown -R mysql:mysql /var/lib/mysql

/bin/bash: Permission denied docker container ssh

This error commonly occurs in Docker containers when trying to SSH into them, because of SELinux restrictions; set SELinux to permissive with setenforce 0.
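Note that setenforce 0 only lasts until the next reboot; to make permissive mode persistent, set it in /etc/selinux/config. Relaxing enforcement system-wide is a blunt workaround, and a targeted SELinux policy for the container would be the cleaner fix:

```shell
# /etc/selinux/config -- takes effect on the next boot.
SELINUX=permissive
```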