Run script commands in parallel

I have a bash script in which I need to run two commands in parallel.

For example, I'm executing npm install, which takes some time (20-50 seconds),

and I run it in two different folders in sequence: first npm install in the books folder, then in the orders folder. Is there a way to run both in parallel in a shell script?

For example assume the script is like following:

#!/usr/bin/env bash

cd $tmpDir/books/
npm install
npm prune production

cd $tmpDir/orders/
npm install
npm prune production


You could use & to run the process in the background, for example:


cd $HOME/project/books/
npm install &

cd $HOME/project/orders/
npm install &

# if you want to wait for the processes to finish
wait

To run and wait for nested/multiple commands, you could use a subshell (), for example:


(sleep 10 && echo 10 && sleep 1 && echo 1) &

cd $HOME/project/books/
(npm install && grunt && npm prune production) &

cd $HOME/project/orders/
(npm install && grunt && npm prune production) &

# wait for all background jobs to finish
wait

In this case, notice that the commands are within () and joined with &&, which means the right side is evaluated only if the left side succeeds (exit code 0). So for the example:

(sleep 10 && echo 10 && sleep 1 && echo 1) &
  • It creates a subshell by putting the commands between ()
  • runs sleep 10; if that succeeds (&&), runs echo 10; if that succeeds, runs sleep 1; and if that succeeds, runs echo 1
  • runs all of this in the background, because the command ends with &
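To also detect whether each parallel job succeeded, a common pattern is to record each background PID and wait on them individually, since wait PID returns that job's exit status. A sketch, with sleep/echo standing in for the real npm commands:

```shell
#!/usr/bin/env bash

# two stand-in jobs running in parallel (replace with the real subshells)
(sleep 1 && echo "books done") &
pid_books=$!

(sleep 1 && echo "orders done") &
pid_orders=$!

# wait on each PID; wait returns that job's exit status
wait "$pid_books" && wait "$pid_orders" && echo "both succeeded"
```

If either job fails, the chained && stops and "both succeeded" is not printed.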

Resolve variable from config-file based on output

I have a shell script that consists of two files: one bash file and one file holding all my config variables (vars.config).


domains=("" "")

else_something_com_key="key-to-something else"

In my code I want to loop through the domains array and get the key for each domain.

#!/usr/bin/env sh
source ./vars.config
for i in ${domains[@]}; do
    base="$(echo $i | tr . _)" # this swaps out . to _ to match the vars
    let farmid=$base$key
    echo $farmid
done

When I run it I get this error message:

./ line 13: let: key-to-something: syntax error: operand
expected (error token is "key-to-something")

So it actually swaps out the dots, but I can't save the result to a variable.


You can expand a variable indirectly (get the value of the variable whose name is stored in another variable) using ${!var_name}. For example, in your code you can do:

for i in "${domains[@]}"; do
    base="$(echo "$i" | tr . _)" # this swaps out . to _ to match the vars
    farmvar="${base}_key"        # build the *name* of the config variable
    echo "${!farmvar}"           # expand it indirectly to get its value
done

Note that arrays and ${!var} are bash features, so the shebang should be #!/usr/bin/env bash rather than sh.
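As a self-contained sketch (assuming a domains entry of else.something.com, derived here to match the else_something_com_key variable above):

```shell
#!/usr/bin/env bash

# values mirroring vars.config (the real file has more entries)
else_something_com_key="key-to-something else"
domains=("else.something.com")

for i in "${domains[@]}"; do
    base="$(echo "$i" | tr . _)"  # else.something.com -> else_something_com
    var="${base}_key"             # build the variable *name*
    echo "${!var}"                # indirect expansion prints its *value*
done
```

This prints "key-to-something else".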

Safely remembering ssh credentials in bash script

Imagine I have a bash script that executes commands on a remote machine via ssh:

# Do something here
ssh otheruser@host command1
# Do something else
ssh otheruser@host command2
# Do most local tasks

This script prompts me to enter credentials for otheruser@host multiple times. Is there a safe, easy, and accepted way to cache these credentials for the lifetime of the script but guarantee that they are lost after the script ends (either normally or when an error occurs)? Maybe a solution will use ssh-agent?

I am looking for something like this:

special_credential_saving_command_here # This will prompt for credentials
ssh otheruser@host command1 # This will not prompt now
ssh otheruser@host command2 # This will not prompt either

My motivation here is to avoid entering the credentials multiple times in the same script while not running the risk of those credentials persisting after the script has terminated. Not only is entering the credentials cumbersome, it also requires I wait around for the script to finish so that I can enter the credentials rather than leave it to run on its own (it’s a long running script).


Use a control socket to share an authenticated connection among multiple processes:

ssh -fNM -S ~/.ssh/sock otheruser@host  # Will prompt for password, then exit
ssh -S ~/.ssh/sock otheruser@host command1
ssh -S ~/.ssh/sock otheruser@host command2
ssh -S ~/.ssh/sock -O exit otheruser@host  # Close the master connection

See man ssh_config, under the ControlPath option, for information on how to create a unique path for the control socket.
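Alternatively, you can make connection sharing automatic in ~/.ssh/config so every ssh to that host reuses one master connection (a sketch; the Host pattern, socket path, and 10-minute ControlPersist value are placeholders to adapt):

```
Host host
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

Note that ControlPersist keeps the master connection open after the script exits, so for the "credentials die with the script" requirement, the explicit -S ... and -O exit commands above are the safer pattern.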

Running unit tests after starting elasticsearch

I’ve downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I’d like to use Jenkins to run some unit tests that use the local elasticsearch.

My problem is that I haven't found a way to start elasticsearch locally and run the tests afterwards, since the script doesn't proceed past starting ES because the ES process never exits.

I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I’d like to automate the ES launching.

Any suggestions on how I could achieve this? I’ve tried now using single “Execute shell” block and two “Execute shell” blocks.


This happens because you are starting elasticsearch in a blocking way: the command waits until the elasticsearch server shuts down, so Jenkins just keeps waiting.

You can use the following command:

./elasticsearch >/dev/null 2>&1 &

or

nohup ./elasticsearch >/dev/null 2>&1 &

This runs the command in a non-blocking way. (Note the redirection order: >/dev/null 2>&1 discards both stdout and stderr, whereas 2>&1 >/dev/null would still send stderr to the terminal.)

You can also add a small delay to allow the elasticsearch server to start:

nohup ./elasticsearch >/dev/null 2>&1 &
sleep 5
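A fixed sleep is fragile. A sketch of a small retry helper that polls until the server actually answers (the port 9200 and the 30-try limit are assumptions to adapt):

```shell
#!/usr/bin/env bash

# Retry a probe command once per second until it succeeds or we run out of tries.
wait_for() {
    local tries=$1; shift
    local i
    for ((i = 1; i <= tries; i++)); do
        "$@" && return 0
        sleep 1
    done
    return 1
}

# Usage in the Jenkins shell step:
# nohup ./elasticsearch >/dev/null 2>&1 &
# wait_for 30 curl -s http://localhost:9200
```

Run the unit tests only if wait_for returns success, so the build fails fast when ES never comes up.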

Getting specific Nth words from variable

I have this script


doit () {
    echo " ${tmpvar[1]} will be installed "
    apt-get install ${tmpvar[2*]}
    echo " ${tmpvar[1]} was installed "
}

which is run with the command ./ word1 word2 word3 word4.
The point is to get the first word for the echos and the rest for the installation command.

Example: ./ App app app-gtk
This should display the first word in both echos and pass the rest to the apt command.
But this is not working.


You may use shift here:

doit () {
   arg1="$1"  # take first word into a var arg1
   shift      # remove first word from $@

   echo "$arg1 will be installed..."
   # attempt to call apt-get
   if apt-get install "$@"; then
      echo "$arg1 was installed"
   else
      echo "$arg1 couldn't be installed" >&2
   fi
}

and call this function as:

doit "$@"
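To see what shift does without actually calling apt-get, here is a sketch with echo standing in for the install step (demo is a placeholder name):

```shell
#!/usr/bin/env bash

demo () {
    first="$1"   # first word
    shift        # drop it from "$@"
    echo "first=$first rest=$*"
}

demo App app app-gtk   # → first=App rest=app app-gtk
```

After shift, "$@" holds only the remaining words, so passing it on to apt-get install hands over everything except the display name.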

Are all Linux users present in /etc/passwd?

There is one user, "user1", which I can't find in /etc/passwd, but I can execute commands like:

$touch abc
$chown user1 abc
$su user1

These commands run fine, but if I try to chown to a really nonexistent user, the chown and su commands fail.

I was wondering where this user1 is coming from.


While logged in with user1 (after su user1) execute:

getent passwd $USER

This fetches the user's passwd entry across the different configured databases. Not all users are necessarily local system users – they can come from LDAP etc.
Check the docs on getent.

Also check your nsswitch.conf to see all sources used to obtain name-service information.
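For example, on most Linux systems you can look up any single account this way (root here) and get the same passwd-style line regardless of which database the account lives in:

```shell
getent passwd root                # full passwd entry for root
getent passwd root | cut -d: -f1  # just the username field
```

If getent passwd user1 prints an entry while grep user1 /etc/passwd does not, the user is coming from a non-local source listed in nsswitch.conf.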

./path of file to execute not executing

I am trying to execute MATLAB from my desktop. The path of the file is /usr/local/MATLAB/R2017b/bin/matlab.

I am executing ./usr/local/MATLAB/R2017b/bin/matlab

I also tried .//usr/local/MATLAB/R2017b/bin/matlab

and ./ /usr/local/MATLAB/R2017b/bin/matlab

How does this work?


Just run /usr/local/MATLAB/R2017b/bin/matlab to access the binary via its full path. If you put a . in front, you will try to run it via the relative path <CURRENT DIR>/usr/local/MATLAB/R2017b/bin/matlab instead.

You can also add /usr/local/MATLAB/R2017b/bin/ to your PATH variable in order to execute the command matlab without specifying its whole path each time.

Also edit your ~/.bashrc file and add PATH=$PATH:/usr/local/MATLAB/R2017b/bin to keep this change after a reboot, then just run matlab.
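To see the difference between ./relative and absolute paths, here is a sketch using a throwaway hello script (the /tmp/pathdemo directory and script name are placeholders):

```shell
#!/usr/bin/env bash

# create a tiny executable to experiment with
mkdir -p /tmp/pathdemo/bin
printf '#!/bin/sh\necho hello\n' > /tmp/pathdemo/bin/hello
chmod +x /tmp/pathdemo/bin/hello

cd /tmp/pathdemo
./bin/hello               # relative to the current directory
/tmp/pathdemo/bin/hello   # absolute path works from anywhere

PATH="$PATH:/tmp/pathdemo/bin"
hello                     # found via PATH lookup, no path needed
```

All three invocations print hello; ./bin/hello would fail from any directory other than /tmp/pathdemo, which is exactly why prefixing an absolute path with . breaks it.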