- Consulting the files /boot/grub/grub.conf (RHEL 6 and below) or /etc/sysconfig/grub (RHEL 7), verify whether console output is redirected to a serial console, e.g. using console=ttyS1,9600. In both of these cases the output is restricted to 9600 baud, limiting throughput and possibly causing issues.
- A fix might be to stop logging to the serial console, or to explicitly configure a higher baud rate, e.g. console=ttyS1,115200. Note that in some situations even 115200 baud can still be a limiting factor.
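The check can be sketched in shell. The kernel command line below is a fabricated sample; on a real system you would grep the actual configuration file (/boot/grub/grub.conf on RHEL 6 and below, /etc/sysconfig/grub on RHEL 7):

```shell
# Extract the serial console baud rate from a kernel command line.
# The line below is a made-up sample; substitute the output of
#   grep console= /boot/grub/grub.conf
# on a real RHEL 6 system.
kernel_line='kernel /vmlinuz ro root=/dev/sda1 console=ttyS1,9600'

baud=$(printf '%s\n' "$kernel_line" | grep -o 'console=ttyS[0-9]*,[0-9]*' | cut -d, -f2)
echo "serial console baud rate: $baud"
```

If the extracted rate is 9600, raising the console= entry in the same file (and regenerating the GRUB 2 configuration on RHEL 7) is the usual route.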
Otherwise, investigate further root cause conditions
- Determine whether the system was under extremely high load at the time the soft lockups were seen in the logs. If the sysstat package was installed, it will have recorded the load average every 10 minutes using a cron job.
- The load average can then be found in /var/log/sa/sar<day>, where <day> is the two-digit day of the month on which the soft lockups were seen. If the load average is significantly higher than the number of logical CPU cores on the system, the soft lockups probably occurred because of an extremely high workload. In that case it is best to determine which processes drove the load so high and make changes so that they do not cause the issue again.
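The comparison of load average against core count can be sketched as follows. The sar output line is fabricated for illustration; real data would come from sar -q -f on the file above, and the core count from nproc:

```shell
# Fabricated `sar -q` sample line: time AM/PM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15
sample='12:10:01 AM        0       420     64.12     58.03     41.77'

cores=4                              # on a live system: cores=$(nproc)
ldavg1=$(printf '%s\n' "$sample" | awk '{ print $(NF-2) }')

# A load average far above the logical core count points at overload.
if awk -v l="$ldavg1" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
    verdict="load average $ldavg1 exceeds $cores logical cores: likely overload"
else
    verdict="load average $ldavg1 is within $cores logical cores"
fi
echo "$verdict"
```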
- Since defects in the kernel can also cause soft lockups, the full logs around the time of the soft lockups need to be investigated to determine whether the issue is a bug or has already been fixed by an erratum. It can help to look in the changelog of the latest kernel available on Red Hat Network and see whether any soft lockup issues were fixed since the installed kernel version.
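Checking the changelog can be sketched like this; the changelog excerpt is fabricated, and on a real system it would come from rpm -q --changelog kernel:

```shell
# Count changelog entries mentioning soft lockups. The excerpt is
# fabricated; on a live system, feed in the output of:
#   rpm -q --changelog kernel
changelog='- [fs] fix soft lockup in writeback path
- [net] unrelated networking fix
- [x86] fix soft lockup during TLB flush'

matches=$(printf '%s\n' "$changelog" | grep -ic 'soft lockup')
echo "changelog entries mentioning soft lockup: $matches"
```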
- Another way to eliminate the possibility of a known, already-fixed issue is to run the system with the latest kernel and see whether the soft lockups happen again. Red Hat support may be required to conclusively determine whether the issue is a bug.
- Also verify with the hardware vendor that the issue is not hardware related. One way to rule out a known and already-solved hardware problem is to update the firmware or BIOS to the latest version available from the vendor.
- On virtual systems, soft lockups can indicate that the underlying hypervisor is overcommitted. Please see this article addressing this issue: VMware virtual machine guest suffers multiple soft lockups at the same time
- If all of the above have been ruled out as the cause, it may be a case where soft lockups do not indicate a problem at all; for example on systems with a very large number of CPU cores.
If this is encountered in RHEL 5, increase the threshold at which the messages appear using the following procedure:
- Run the following command and check whether “soft lockup” errors are still encountered on the system:
# sysctl -w kernel.softlockup_thresh=30
- To make this parameter persistent across reboots, add the following line to /etc/sysctl.conf:
kernel.softlockup_thresh = 30
In RHEL 6 and above, the threshold is named “watchdog_thresh” and can be set to no higher than 60:
- To make this change in RHEL 6 and above, set the tunable
kernel.watchdog_thresh in sysctl.conf
The softlockup_thresh kernel parameter was introduced in Red Hat Enterprise Linux 5.2 in
kernel-2.6.18-92.el5, thus it is not possible to modify it on older versions.
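A sketch of the RHEL 6 and above procedure, staged in a temporary file standing in for the real /etc/sysctl.conf (applying the setting with sysctl and editing /etc/sysctl.conf require root on a live system):

```shell
# Stage the persistent setting in a temp file standing in for
# /etc/sysctl.conf. On a live system, also apply it immediately with:
#   sysctl -w kernel.watchdog_thresh=60
conf=$(mktemp)
echo 'kernel.watchdog_thresh = 60' >> "$conf"

# `sysctl -p` would load the real file at this point.
setting=$(grep '^kernel.watchdog_thresh' "$conf")
echo "$setting"
rm -f "$conf"
```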
- Soft lockups are situations in which the kernel’s scheduler subsystem has not been given a chance to perform its job for longer than the limit set by the watchdog threshold, in seconds; they can be caused by defects in the kernel, by hardware issues, or by extremely high workloads.
- If lockups are encountered on a virtual system, it is important to ensure that the hypervisor is not overcommitted.
- Hardware issues, such as newly installed memory, might cause the issue.
- Misconfigurations might also cause the issue, such as redirecting console output to a serial device limited to e.g. 9600 baud.
- On systems with a very large number of CPU cores, soft lockups might not indicate a problem.