How to boot with an old kernel version in RHEL 7?

  • By default, the key for the GRUB_DEFAULT directive in the /etc/default/grub file is the word saved. This instructs GRUB 2 to load the kernel specified by the saved_entry directive in the GRUB 2 environment file, located at /boot/grub2/grubenv. You can set another GRUB menu entry as the default using the grub2-set-default command, which updates the GRUB 2 environment file.
  • By default, the saved_entry value is set to the name of the latest installed kernel of package type kernel. This is defined in /etc/sysconfig/kernel by the UPDATEDEFAULT and DEFAULTKERNEL directives. The file can be viewed by the root user as follows:
    $ cat /etc/sysconfig/kernel
    # UPDATEDEFAULT specifies if new-kernel-pkg should make
    # new kernels the default
    UPDATEDEFAULT=yes
    
    # DEFAULTKERNEL specifies the default kernel package type
    DEFAULTKERNEL=kernel
    
  • To force a system to always use a particular menu entry, use the menu entry name as the key to the GRUB_DEFAULT directive in the /etc/default/grub file. To list the available menu entries, run the following command as root:
    ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
    
    Eg: 
    ~]#  awk -F\' '$1=="menuentry " {print $2}' /etc/grub2-efi.cfg 
    Red Hat Enterprise Linux Server (3.10.0-693.el7.x86_64) 7.3 (Maipo)           <<==== Entry 0
    Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)           <<==== Entry 1
    Red Hat Enterprise Linux Server (0-rescue-d3c598b9d2204138bd2e1001316a5cc6) 7.3 (Maipo)
    
  • GRUB 2 supports using a numeric value as the key for the saved_entry directive to change the default order in which the kernels or operating systems are loaded. To specify which kernel or operating system should be loaded first, pass its number to the grub2-set-default command. For example:
    ~]# grub2-set-default 1
    
  • Check the file below to see which kernel will be loaded at the next boot, and cross-check the numeric value against the list of menu entries printed by the awk command above:
    ~]# grep saved /boot/grub2/grubenv
    
    Eg:
    ~]# grep saved /boot/grub2/grubenv
    saved_entry=1
    
  • Changes to /etc/default/grub require rebuilding the grub.cfg file by running the grub2-mkconfig -o command as follows:
    • On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
    • On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
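For reference, the awk filter used above splits each line of grub.cfg on single quotes and prints the quoted title whenever the first field is exactly `menuentry ` (with the trailing space). The printed titles are numbered from 0, and that index is what grub2-set-default stores in grubenv. The filter can be tried on a sample fragment (the heredoc below is illustrative, not a real grub.cfg):

```shell
# Run the same awk filter against a sample grub.cfg fragment
awk -F\' '$1=="menuentry " {print $2}' <<'EOF'
menuentry 'Red Hat Enterprise Linux Server (3.10.0-693.el7.x86_64) 7.3 (Maipo)' --class rhel {
menuentry 'Red Hat Enterprise Linux Server (3.10.0-514.el7.x86_64) 7.3 (Maipo)' --class rhel {
EOF
```

This prints the two titles, in order: index 0 is the 693 kernel, index 1 is the 514 kernel.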

Running unit tests after starting elasticsearch

I’ve downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I’d like to use Jenkins to run some unit tests that use the local elasticsearch.

My problem is that I haven’t found a way to start elasticsearch locally and then run the tests afterwards: the build script never gets past starting ES, because the command keeps running and is never killed.

I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I’d like to automate the ES launching.

Any suggestions on how I could achieve this? I’ve tried using a single “Execute shell” block and two separate “Execute shell” blocks.

Solution:

This happens because you are starting elasticsearch in a blocking way: the command waits until the elasticsearch server shuts down, so Jenkins just keeps waiting.

You can use the following command

./elasticsearch >/dev/null 2>&1 &

or

nohup ./elasticsearch >/dev/null 2>&1 &

It will run the command in a non-blocking way. (Note the redirection order: >/dev/null 2>&1 discards both stdout and stderr; the reverse order would leave stderr on the console.)

You can also add a small delay to allow the elasticsearch server to start:

nohup ./elasticsearch >/dev/null 2>&1 & sleep 5
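Putting it together, a single Jenkins “Execute shell” step can start the server in the background, poll until it answers instead of sleeping a fixed time, run the tests, and clean up. The sketch below uses a stand-in HTTP server on an arbitrary port (8099) so it can be run anywhere; for the real setup, replace the first command with nohup ./elasticsearch >/dev/null 2>&1 & and poll port 9200:

```shell
# Start the server detached so the step does not block
nohup python3 -m http.server 8099 >/dev/null 2>&1 &
SERVER_PID=$!

# Poll until the port answers (up to 30 seconds) instead of a fixed sleep
for i in $(seq 1 30); do
    curl -s http://localhost:8099/ >/dev/null && break
    sleep 1
done

# The server is now reachable; this is where the unit tests would run
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8099/

# Stop the server so the next build starts clean
kill "$SERVER_PID"
```

The curl probe prints 200 once the server is up; a Jenkins job would run its test command at that point.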

aws ec2 – running python uwsgi with bash command keeps returning “no python application found”

I’m trying to run my flask app following the tutorial at this link – https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04#configure-uwsgi

using amazon server’s ec2 to run this…

Amazon Linux AMI 2017.09.1 (HVM), free tiers on all options.

my file structure is as follows:

/home/ec2-user/login_test/login_test/app.py
/home/ec2-user/login_test/login_test/wsgi.py
/home/ec2-user/login_test/venv/

so I ran the uwsgi --socket 0.0.0.0:8000 --protocol=http -w wsgi command, as stated in the “Testing uWSGI Serving” part of the tutorial. This returns:

--- no python application found, check your startup logs for errors ---
[pid: 24218|app: -1|req: -1/1] 127.0.0.1 () {24 vars in 257 bytes} [Wed Apr 11 07:01:38 2018] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (0 switches on core 0)
with the browser returning Internal Server Error.

so… what should I try and check? The app works fine when I run it without uwsgi (just the python app.py command), both on the server and via cmd on my home computer (Windows 10).

EDIT: my wsgi.py contents:

from app import app as application

if __name__ == "__main__":
    application.run()

Solution:

Here is a minimal working example:

wsgi.py:

from flask_app import app

flask_app.py:

from flask import Flask

app = Flask('my test app')

@app.route("/ping")
def ping():
    return 'pong'

command: uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app

or without the wsgi.py file:
uwsgi --socket 0.0.0.0:5000 --protocol=http -w flask_app:app

things to watch out for:

  • wsgi (before the colon in the -w parameter) means you have a file called wsgi.py
  • app (after the colon) is the initialized Flask object (Flask()) imported in that file

DynamoDB BatchGetItem dynamic TableName in Lambda function

I’m building a serverless backend for my current application using dynamoDb as my database. I use aws sam to upload my lambda functions to aws. In addition, I pass all my table names as global variables to lambda (nodejs8.10 runtime) to access them on the process.env object within my lambda function. The problem that I’m facing with this is the following: Whenever I run the batchGetItem method on dynamoDB I have to pass a string as my table name, I cannot dynamically change the table name depending on the global variable:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({region: 'ap-south-1', apiVersion: '2012-08-10'}); // the constructor takes a single options object

const params = {
    RequestItems: {
        //needs to be a string, cannot be a variable containing a string
        'tableName': {
            Keys: [] //array of keys
         }
    }
}
dynamodb.batchGetItem(params, (err, result) => {
// some logic
})

I need to pass the table name as a string, essentially hardcoding the table name into my function. Other DynamoDB operations, such as the getItem method, accept a key-value pair for the table name in the parameter object:

const tableName = process.env.TableName;
const getItemParams = {
   Key: {
        "Key": {
             S: 'some key'
         }
   },
   // the table name can be changed according to the value passed to the lambda's environment variable
   TableName: tableName
}
dynamodb.getItem(getItemParams, (err, result) => {
// some logic
})

Hence my question, is there any way to avoid hardcoding the table name in the batchGetItem method and, instead, allocate it dynamically like in the getItem method?

Solution:

You can use the tableName from environment variables. Build your params in two steps:

const { tableName } = process.env;

const params = {
  RequestItems: {},
};

// `tableName` is your environment variable, it may have any value
params.RequestItems[tableName] = {
  Keys: [], //array of keys
};

dynamodb.batchGetItem(params, (err, result) => {
  // some logic
})

Getting specific Nth words from a variable

I have this script

#!/bin/bash

tmpvar="$*"
doit () {
    echo " ${tmpvar[1]} will be installed "
    apt-get install ${tmpvar[2*]}
    echo " ${tmpvar[1]} was installed "
}
doit

which I run as ./file.sh word1 word2 word3 word4.
The point is to get the first word for the ‘echo’ statements and the rest for the installation command.

Example: ./file.sh App app app-gtk
It should therefore display the first word in both ‘echo’ statements and pass the rest to the apt command.
But this is not working.

Solution:

You may use shift here:

doit () {
   arg1="$1"  # take the first word into a variable
   shift      # remove the first word from $@

   echo "$arg1 will be installed..."
   # attempt to call apt-get with the remaining words
   if apt-get install "$@"; then
      echo "$arg1 was installed"
   else
      echo "$arg1 couldn't be installed" >&2
   fi
}

and call this function as:

doit "$@"
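To see what shift actually does to the argument list without installing anything, here is the same pattern with apt-get swapped for echo:

```shell
doit () {
    arg1="$1"   # first word
    shift       # drop it from "$@"
    echo "$arg1 will be installed"
    echo "packages passed to the install command: $*"
}

doit App app app-gtk
```

This prints "App will be installed" followed by "packages passed to the install command: app app-gtk".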

How to kill a range of consecutive processes in Linux?

I am working on a multi-user Ubuntu server and need to run multiprocessing python scripts. Sometimes I need to kill some of those processes. For example,

$ ps -eo pid,comm,cmd,start,etime | grep .py
3457 python          python process_to_kill.py - 20:57:28    01:44:09
3458 python          python process_to_kill.py - 20:57:28    01:44:09
3459 python          python process_to_kill.py - 20:57:28    01:44:09
3460 python          python process_to_kill.py - 20:57:28    01:44:09
3461 python          python process_to_kill.py - 20:57:28    01:44:09
3462 python          python process_to_kill.py - 20:57:28    01:44:09
3463 python          python process_to_kill.py - 20:57:28    01:44:09
3464 python          python process_to_kill.py - 20:57:28    01:44:09
13465 python         python process_not_to_kill.py - 08:57:28    13:44:09
13466 python         python process_not_to_kill.py - 08:57:28    13:44:09

Processes 3457-3464 are to be killed. So far I can only do

$ kill 3457 3458 3459 3460 3461 3462 3463 3464

Is there a command like $ kill 3457-3464 so I can specify the starting and ending processes and kill all of those within the range?

Solution:

Use the shell’s brace expansion syntax (supported by bash and zsh, but not plain sh):

$ kill {3457..3464}

which expands to:

$ kill 3457 3458 3459 3460 3461 3462 3463 3464

Or you can kill processes by name with pkill. For example:

$ pkill -f process_to_kill.py
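Since brace expansion happens in the shell before kill ever runs, you can preview exactly what kill will receive by expanding with echo first:

```shell
# Preview the expansion before handing it to kill (bash/zsh)
echo {3457..3464}
```

This prints 3457 3458 3459 3460 3461 3462 3463 3464, i.e. the same argument list as the long-form kill command above.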

Running Python code in Vim without saving

Is there a way to run my current python code in vim without making any changes to the file? Normally, when I want to test my code from within vim, I would execute this:

:w !python

However, this overrides the current file I am editing. Often, I add print statements or comment stuff out to see why my code isn’t working. I do not want such changes to overwrite a previous version of whatever .py file I’m currently working on. Is there a way to do so? Perhaps a combination of saving to a temporary file and deleting it afterwards?

Solution:

You have already answered your own question:

:w !python

will run the file through python without saving it. Seriously, test it out yourself! Make some changes, run :w !python, and then after it runs, run :e!. It will revert all of your changes, which proves the file was never saved.

The reason this works is that :w does not mean save. It means write, and by default it writes the buffer to the currently selected file, which is equivalent to saving. In bash speak, it’s like

cat myfile > myfile

But if you give it an argument, :w writes the buffer to that destination rather than saving. In this case, you’re writing it to python’s standard input, so the file is not saved.


I wrote a much longer answer on this topic here.
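For intuition, :w !python sends the buffer text to the command's standard input. Outside Vim, the equivalent of running it on a one-line buffer is piping the text to the interpreter yourself (python3 is assumed here):

```shell
# Outside-Vim equivalent of :w !python on a buffer containing print(2 + 2)
printf 'print(2 + 2)\n' | python3
```

This prints 4, and of course no file on disk is touched.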