RabbitMQ standalone and cluster installation

  • Install RabbitMQ in the VM. The installation steps are as follows.
  • Verify whether the Erlang package is already installed:
    rpm -q erlang
  • If it is not, download the Erlang repository package from the website and install it:
    sudo wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
    sudo rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
    sudo yum install -y erlang-18.2-1.el6
  • sudo yum update. NOTE: use "yum --releasever=6.7 update" if you want a specific version.
  • Import the RabbitMQ signing key:
    sudo rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  • For RabbitMQ 3.6.*, the socat dependency is required:
    sudo yum install epel-release
    sudo yum install socat
  • Install the RabbitMQ server. Either check what the repository offers with su -c 'yum list rabbitmq' and install it with yum install rabbitmq-server, or install the RPM directly:
    sudo rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.0/rabbitmq-server-3.6.0-1.noarch.rpm
  • Enable the service at boot and start it:
    sudo chkconfig rabbitmq-server on
    sudo /etc/init.d/rabbitmq-server start
  • Copy /usr/share/doc/rabbitmq-server/rabbitmq.config.example into the /etc/rabbitmq folder and rename it to rabbitmq.config. Change the file permissions to 666.
  • Uncomment the loopback line in the security section of rabbitmq.config: {loopback_users, []}
  • Install the management console of RabbitMQ using the following command:
    rabbitmq-plugins enable rabbitmq_management
  • Avoid using the RabbitMQ default ports; configure ports of your own choice instead. In rabbitmq.config, uncomment and edit the following lines: {tcp_listeners, [<rabbitMQ port>]} and {listener, [{port, <rabbitMQ management port>}]}. (A config sketch is shown after this list.)
  • A firewall rule should be in place to accept TCP connections on those ports:
    lokkit -p <rabbitMQ port>:tcp -p <rabbitMQ management port>:tcp
  • The default guest/guest account should be disabled. Create a new user and set its permissions using the following commands:
    rabbitmqctl add_user <username> <password>
    rabbitmqctl set_user_tags <username> administrator
    Note: the password should be 16 characters, with no special characters, and should be generated by KeePass.
  • Once the new user is tested, disable the guest user by changing its password:
    rabbitmqctl change_password guest guest123
  • Restart the RabbitMQ server using the command: sudo service rabbitmq-server restart
  • Make the following changes in the RabbitMQ management console: go to Admin > click on the user > click on "Set permission". Check the permissions of the user; they should be the same as those of the user guest.
  • Try to create a new queue to check that everything is working fine (see the sketch after this list).
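For reference, here is a minimal sketch of what the edited stanzas in /etc/rabbitmq/rabbitmq.config might look like. The ports 5675 and 15675 are example values of my own, not defaults from the steps above; use whichever ports you chose:

[
  {rabbit, [
    {loopback_users, []},
    {tcp_listeners, [5675]}
  ]},
  {rabbitmq_management, [
    {listener, [{port, 15675}]}
  ]}
].

And to test queue creation from code rather than from the console, a small Python sketch using the pika client (pika, the hostname, the example port, and the testuser account are all assumptions; install pika with pip install pika):

import pika

# Connect with the newly created non-guest account; host, port and
# credentials are example values - adjust them to your setup.
credentials = pika.PlainCredentials('testuser', '<password>')
params = pika.ConnectionParameters(host='localhost', port=5675,
                                   credentials=credentials)
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Declaring a durable queue succeeds only if the user has configure
# permission on the vhost, so this doubles as a permission check.
channel.queue_declare(queue='test-queue', durable=True)
print("test-queue declared successfully")
connection.close()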

 

Create RabbitMQ High Availability Cluster:

1) Stop RabbitMQ on the master and slave nodes. Ensure the service is stopped properly.

/etc/init.d/rabbitmq-server stop

2) Copy the file below to all nodes from the master. This cookie file needs to be the same across all nodes.

$ sudo cat /var/lib/rabbitmq/.erlang.cookie
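One way to copy it, as a sketch (assuming root SSH access; slave1 is a placeholder hostname):

$ sudo scp /var/lib/rabbitmq/.erlang.cookie root@slave1:/var/lib/rabbitmq/.erlang.cookie
$ ssh root@slave1 'chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie; chmod 400 /var/lib/rabbitmq/.erlang.cookie'

The cookie must be readable only by the rabbitmq user, hence the chown/chmod.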

3) Make sure you start all nodes after copying the cookie file from the master.

Start RabbitMQ in master and all nodes.

$ /etc/init.d/rabbitmq-server start

4) Then run the following commands in all the nodes, except the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset
$ rabbitmqctl start_app

5) Now, run the following commands in the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset

6) Do not start the app yet.

Open ports 4369 and 25672: lokkit -p 4369:tcp -p 25672:tcp

Stop the iptables on both master and slaves.

The following command, run on the master in this procedure, joins it to the cluster:

$ rabbitmqctl join_cluster rabbit@slave1

join_cluster takes the name of one node that is already running in the cluster, so replace slave1 with the hostname/IP address of one of the slave nodes. You can add as many slave nodes as needed by repeating the stop_app, join_cluster, start_app sequence on each new node.

7) Start the app on the master machine:

$ rabbitmqctl start_app

8) Check the cluster status from any node in the cluster:

$ rabbitmqctl cluster_status
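On a healthy cluster the output should list every node under nodes as well as running_nodes, roughly like the following (node names here are illustrative):

Cluster status of node rabbit@master ...
[{nodes,[{disc,[rabbit@master,rabbit@slave1,rabbit@slave2]}]},
 {running_nodes,[rabbit@slave2,rabbit@slave1,rabbit@master]},
 {partitions,[]}]
...done.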

9) In the RabbitMQ management console, check whether you can log in with the previously created user and whether all the previous settings are in place.

If not, create users with the following command:

rabbitmqctl add_user <username> <password>

Give admin rights:

rabbitmqctl set_user_tags <username> administrator

rabbitmqctl add_vhost /

Give vhost rights by:

rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"

10) Create HA mirroring with:

rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

This will mirror all queues.
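You can verify that the policy is in place with:

rabbitmqctl list_policies

Mirrored queues will also show the policy in the Admin > Policies page of the management console.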

11) Now start iptables again. You will then have created a RabbitMQ HA cluster.

Pyinotify – Monitor Filesystem Changes in Real-Time in Linux

Pyinotify is a simple yet useful Python module for monitoring filesystem changes in real-time in Linux.

As a system administrator, you can use it to monitor changes happening to a directory of interest, such as a web directory or an application data storage directory, and beyond.

It depends on inotify (a Linux kernel feature incorporated in kernel 2.6.13), an event-driven notifier whose notifications are exported from kernel space to user space via three system calls.

The purpose of pyinotify is to bind these three system calls and provide an implementation on top of them, offering a common and abstract means to manipulate those functionalities.

In this article, we will show you how to install and use pyinotify in Linux to monitor filesystem changes or modifications in real-time.

Dependencies

In order to use pyinotify, your system must be running:

  1. Linux kernel 2.6.13 or higher
  2. Python 2.4 or higher

How to Install Pyinotify in Linux

First start by checking the kernel and Python versions installed on your system as follows:

# uname -r 
# python -V

Once the dependencies are met, we will use pip to install pyinotify. In most Linux distributions, pip is already installed if you're using Python 2 >= 2.7.9 or Python 3 >= 3.4 binaries downloaded from python.org; otherwise, install it as follows:

# yum install python-pip      [On CentOS based Distros]
# apt-get install python-pip  [On Debian based Distros]
# dnf install python-pip      [On Fedora 22+]

Now, install pyinotify like so:

# pip install pyinotify

This will install the version available in the default repository. If you want the latest stable version of pyinotify, consider cloning its git repository as shown:

# git clone https://github.com/seb-m/pyinotify.git
# cd pyinotify/
# ls
# python setup.py install

How to Use pyinotify in Linux

In the example below, I am monitoring any changes to the user tecmint's home directory (/home/tecmint) as the root user (logged in via SSH), as shown in the screenshot:

# python -m pyinotify -v /home/tecmint

Monitor Directory Changes

Next, we will keep a watch for any changes to the web directory (/var/www/html/tecmint.com):

# python -m pyinotify -v /var/www/html/tecmint.com

To exit the program, simply hit [Ctrl+C].

Note: When you run pyinotify without specifying any directory to monitor, the /tmp directory is considered by default.
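The command-line mode above is handy for quick checks, but you can also use the module's API from your own scripts. Below is a minimal sketch built on pyinotify's WatchManager and Notifier classes; the watched path /tmp and the chosen events are example values:

import pyinotify

wm = pyinotify.WatchManager()
# Subscribe to a few inotify event types, exposed by pyinotify as constants.
mask = pyinotify.IN_CREATE | pyinotify.IN_DELETE | pyinotify.IN_MODIFY

class EventHandler(pyinotify.ProcessEvent):
    def process_IN_CREATE(self, event):
        print("Created : " + event.pathname)
    def process_IN_DELETE(self, event):
        print("Removed : " + event.pathname)
    def process_IN_MODIFY(self, event):
        print("Modified: " + event.pathname)

wm.add_watch('/tmp', mask, rec=True)  # rec=True also watches subdirectories
notifier = pyinotify.Notifier(wm, EventHandler())
notifier.loop()  # blocks until you hit [Ctrl+C]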

How to Trace Execution of Commands in Shell Script with Shell Tracing

In this article of the shell script debugging series, we will explain the third shell script debugging mode, that is, shell tracing, and look at some examples to demonstrate how it works and how it can be used.

The previous part of this series shed light on the two other shell script debugging modes: verbose mode and syntax checking mode, with easy-to-understand examples of how to enable shell script debugging in these modes.

Shell tracing simply means tracing the execution of the commands in a shell script. To switch on shell tracing, use the -x debugging option.

This directs the shell to display all commands and their arguments on the terminal as they are executed.

We will use the sys_info.sh shell script below, which briefly prints your system date and time, number of users logged in and the system uptime. However, it contains syntax errors that we need to find and correct.

#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;    
}
print_sys_info(){
echo "System Time    : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime  : $UPTIME
}
check_root
print_sys_info
exit 0

Save the file and make the script executable. The script can only be run by root, therefore employ the sudo command to run it as below:

$ chmod +x sys_info.sh
$ sudo bash -x sys_info.sh

Shell Tracing - Show Error in Script

From the output above, we can observe that a command is first executed before its output is substituted as the value of a variable.

For example, date was executed first, and its output was substituted as the value of the variable DATE.

We can perform syntax checking to only display the syntax errors as follows:

$ sudo bash -n sys_info.sh 

Syntax Checking in Script

If we look at the shell script critically, we will realize that the if statement is missing a closing fi keyword. Therefore, let us add it; the new script should now look like this:

#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi    
}
print_sys_info(){
echo "System Time    : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime  : $UPTIME
}
check_root
print_sys_info
exit 0

Save the file again and invoke it as root and do some syntax checking:

$ sudo bash -n sys_info.sh

Perform Syntax Check in Shell Scripts

The result of our syntax checking operation above still shows that there is one more bug in our script on line 21. So, we still have some syntax correction to do.

If we look through the script analytically one more time, the error on line 21 is due to a missing closing double quote (”) in the last echo command inside the print_sys_info function.

We will add the closing double quote in the echo command and save the file. The changed script is below:

#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
print_sys_info(){
echo "System Time    : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime  : $UPTIME"
}
check_root
print_sys_info
exit 0

Now syntactically check the script one more time.

$ sudo bash -n sys_info.sh

The command above will not produce any output because our script is now syntactically correct. We can also trace the execution of the script a second time, and it should work well:

$ sudo bash -x sys_info.sh

Trace Shell Script Execution

Now run the script.

$ sudo ./sys_info.sh

Shell Script to Show Date, Time and Uptime

Importance of Shell Script Execution Tracing

Shell script tracing helps us identify syntax errors and more importantly, logical errors. Take for instance the check_root function in the sys_info.sh shell script, which is intended to determine if a user is root or not, since the script is only allowed to be executed by the superuser.

check_root(){
if [ "$UID" -ne "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}

The magic here is controlled by the if statement expression [ "$UID" -ne "$ROOT_ID" ]; if we do not use the suitable numerical operator (-ne in this case, which means "not equal"), we end up with a possible logical error.

Had we used -eq (which means "equal to"), the condition would block the root user and permit any other system user to run the script, hence a logical error.

check_root(){
if [ "$UID" -eq "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}

Note: As we saw at the start of this series, the set shell built-in command can activate debugging in a particular section of a shell script.

Therefore, the line below will help us find this logical error in the function by tracing its execution:

The script with a logical error:

#!/bin/bash
#script to print brief system info
ROOT_ID="0"
DATE=`date`
NO_USERS=`who | wc -l`
UPTIME=`uptime`
check_root(){
if [ "$UID" -eq "$ROOT_ID" ]; then
echo "You are not allowed to execute this program!"
exit 1;
fi
}
print_sys_info(){
echo "System Time    : $DATE"
echo "Number of users: $NO_USERS"
echo "System Uptime  : $UPTIME"
}
#turning on and off debugging of check_root function
set -x ; check_root;  set +x ;
print_sys_info
exit 0

Save the file and invoke the script; we can see that a regular system user can run the script without sudo, as in the output below. This is because the user's $UID (100 in this case) is not equal to ROOT_ID, which is 0.

$ ./sys_info.sh

Run Shell Script Without Sudo

Adventures in Elm


You can write your first elm code online here: http://elm-lang.org/examples/hello-html

Your first hello world code will look like this:

import Html 

main =
 Html.text "Hello, World!!"

Your rendered index.html will look like this:

Hello, World!!

You can also use a div as follows:

import Html 

main =
 Html.div [] [
 Html.text "Hello, World!!"]

The output will remain the same, but if you inspect the webpage, you will now see the div element in the page source.

You can convert your html code to elm here: https://mbylstra.github.io/html-to-elm/


You can also add a view function:

import Html 

main = view

view : Html.Html Never
view = 
 Html.div [] [
 Html.text "Hello, World!!"]

Your output will remain the same.

If you want to output your mouse position, your code will look like the following (Programmator here is a helper module used in the talk):

import Html 
import Mouse
import Programmator

main : Program {}
main =
  { init = { x = 0, y = 0 }
  , input = Mouse.moves
  , view = view
  } |> Programmator.viewFromOneInput

view : Mouse.Position -> Html.Html Mouse.Position
view { x, y } =
  Html.div [] [
    Html.text ("x= " ++ toString x ++ ", y= " ++ toString y) ]

 


Microservices at Netflix scale


Netflix took 7 years to completely transform to microservices. In the traditional approach, developers contributed to their individual jars/wars, which would go through the regular sprint iteration.

This monolithic approach created issues with delivery velocity and reliability.

Any change in one service would ripple into the other services, which was very difficult to handle. This caused too many bugs, and the single shared database was a single point of failure. A few years back, the production database of Netflix got corrupted, and customers were shown an error message instead of the service.


At Netflix, they want their services to isolate single points of failure, and this is where Hystrix comes in. Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd-party libraries, stop cascading failure, and enable resilience in complex distributed systems where failure is inevitable.

They test their system with fault injection test framework (FIT).


Higher order infrastructure


Developers need not worry about the underlying infrastructure; all they have to look at is the services running on it and the stack they write.

You do not have to worry about where your code is running, which leads to faster rollouts, faster releases, and faster deployments. Even rollbacks become a piece of cake with Docker on your infrastructure.


If there is any change in your service, all you have to do is change the YAML file and you will have a completely new service in minutes. Docker was built for scalability and high availability.

It is very easy to load balance your services in docker, scale up and scale down as per your requirements.
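As a small illustration of how little configuration a service needs, here is a hypothetical docker-compose.yml sketch (the web service and nginx image are my own example, not from the talk):

# docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"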

The most basic application demoed by Docker is the following cats-and-dogs voting polyglot application.


Each part of this application is written and maintained by a different team, and it all comes together through Docker.

The talk then walks through the components required to get the Docker application up and running.

Docker Swarm is a Docker cluster manager: we can run our Docker commands against it, and they will be executed on the whole cluster instead of just one machine.

Docker Swarm architecture (diagram from the talk).

Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.

One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.

The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.

While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard http methods.

The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.

When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.

This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.
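As a sketch of this pattern, here is how a service and a consumer might talk to an etcd v2-style HTTP key-value API from Python (the endpoint, key, and address below are assumptions for illustration; the requests library must be installed):

import requests

ETCD = 'http://127.0.0.1:2379/v2/keys'

# A MySQL instance registers its connection details as it comes online.
requests.put(ETCD + '/services/mysql', data={'value': '10.0.0.5:3306'})

# A consumer later queries the predefined endpoint and connects accordingly.
resp = requests.get(ETCD + '/services/mysql')
print(resp.json()['node']['value'])  # -> 10.0.0.5:3306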

What Are Some Common Service Discovery Tools?

Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.

Some of the most common service discovery tools are:

  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an http API and has a command line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.

Some other projects that expand basic service discovery are:

  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.

In the demo, a Consul machine droplet is first created in DigitalOcean.

Next, the Docker Swarm master is created and attached to the Consul instance.

In Docker Swarm you can define your scheduling strategies in a very fine-grained style.


To scale up, all you have to type is docker-compose scale <your-service-name>=<number-of-instances> and you are done, as in the example below.
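For example, with the hypothetical web service from the compose sketch earlier:

docker-compose scale web=5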

Auto-scaling will need a monitoring service to be plugged in.

Java 8 coding challenge: Count Divisors

Problem:

You have been given 3 integers l, r and k. Find how many numbers between l and r (both inclusive) are divisible by k. You do not need to print these numbers, you just have to find their count.

Input Format
The first and only line of input contains 3 space separated integers l, r and k.

Output Format
Print the required answer on a single line.

Constraints
1 ≤ l ≤ r ≤ 1000
1 ≤ k ≤ 1000

SAMPLE INPUT
1 10 1
SAMPLE OUTPUT
10

Code:

/* IMPORTANT: Multiple classes and nested static classes are supported */

import java.util.Scanner;

class TestClass {
    public static void main(String[] args) throws Exception {
        Scanner sc = new Scanner(System.in);

        int l = sc.nextInt();
        int r = sc.nextInt();
        int k = sc.nextInt();
        sc.close();

        // Advance l to the first multiple of k in [l, r], if any.
        while (l <= r && l % k != 0) {
            l++;
        }

        // If no multiple of k lies in the range, the count is 0; otherwise
        // the multiples are l, l + k, ..., up to r.
        int count = (l > r) ? 0 : (r - l) / k + 1;
        System.out.println(count);
    }
}
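As a sanity check on the logic: once l has been advanced to the first multiple of k in the range, the multiples are l, l + k, ..., so their count is (r - l) / k + 1. Equivalently, the whole answer has the closed form r/k - (l-1)/k under integer division; for the sample input (l=1, r=10, k=1) this gives 10 - 0 = 10, matching the expected output.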