Managing Users in a Graphical Environment RHEL 7

Linux the great

The Users utility allows you to view, modify, add, and delete local users in the graphical user interface.

3.2.1. Using the Users Settings Tool

Press the Super key to enter the Activities Overview, type Users and then press Enter. The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar. Alternatively, you can open the Users utility from the Settings menu after clicking your user name in the top right corner of the screen.
To make changes to the user accounts, first select the Unlock button and authenticate yourself as indicated by the dialog box that appears. Note that unless you have superuser privileges, the application will prompt you to authenticate as root. To add and remove users, select the + and

View original post 114 more words


Introduction to users and groups RHEL 7

Linux the great

While users can be either people (meaning accounts tied to physical users) or accounts that exist for specific applications to use, groups are logical expressions of organization, tying users together for a common purpose. Users within a group share the same permissions to read, write, or execute files owned by that group.
Each user is associated with a unique numerical identification number called a user ID (UID). Likewise, each group is associated with a group ID (GID). A user who creates a file is also the owner and group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and everyone else. The file owner can be changed only by root, and access permissions can be changed by both the root user and file owner.
Additionally, Red Hat Enterprise Linux supports access control lists (ACLs) for…

View original post 559 more words

Overview of File System Hierarchy Standard (FHS)

Linux the great

Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure, which defines the names, locations, and permissions for many file types and directories.
The FHS document is the authoritative reference to any FHS-compliant file system, but the standard leaves many areas undefined or extensible. This section is an overview of the standard and a description of the parts of the file system not covered by the standard.
Compliance with the standard means many things, but the two most important are compatibility with other compliant systems and the ability to mount a /usr/ partition as read-only. This second point is important because the directory contains common executables and should not be changed by users. Also, since the /usr/ directory is mounted as read-only, it can be mounted from the CD-ROM or from another machine via a read-only NFS mount.

1.2.1. FHS Organization

The directories and files noted…

View original post 1,545 more words

Microservices at Netflix scale


Netflix took seven years to completely transform to microservices. In the traditional approach, developers contributed to their individual JARs/WARs, which went through the regular sprint iteration.

[Slide from the GOTO 2016 talk]

As the slide above shows, this approach had problems with both delivery velocity and reliability.

Any change in one service would ripple into the others, which was very difficult to handle. This caused too many bugs, and the database was a single point of failure. A few years back, Netflix's production database became corrupted, and customers saw the following message.

[Screenshot of the outage message Netflix customers saw]


Netflix wants its services to isolate single points of failure, and this is where Hystrix comes in. Hystrix is a latency and fault-tolerance library designed to isolate points of access to remote systems, services, and third-party libraries, stop cascading failure, and enable resilience in complex distributed systems where failure is inevitable.
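Hystrix itself is a Java library, but the circuit-breaker pattern at its heart is easy to sketch. The Python class below is a minimal illustration, not Hystrix's real API: after a configurable number of consecutive failures the breaker opens, and further calls return a fallback immediately instead of waiting on a dead dependency.

```python
import time

class CircuitBreaker:
    """Minimal sketch of the circuit-breaker pattern that Hystrix implements.
    After max_failures consecutive failures the circuit opens and calls fail
    fast; after reset_timeout seconds one trial call is allowed through."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, fallback=None, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                return fallback  # fail fast: do not touch the remote system
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # trip the breaker
            return fallback
        self.failures = 0  # a success closes the circuit again
        return result
```

Real Hystrix wraps the same state machine in commands with thread-pool isolation and metrics; the sketch keeps only the open/closed/half-open logic.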

Netflix tests the system with its Fault Injection Testing (FIT) framework.


Higher order infrastructure


Developers need not worry about the underlying infrastructure; all they have to look after is the services running on it and the stack they write.

You do not have to worry about where your code is running, which leads to faster rollouts, releases, and deployments. Even rollbacks become a piece of cake with Docker in your infrastructure.


If there is any change in your service, all you have to do is change the YAML (YAML Ain't Markup Language) file, and you will have a completely new service in minutes. Docker was built for scalability and high availability.

It is very easy to load-balance your services in Docker and to scale up and down as your requirements change.

The most basic application demoed by Docker is the following cat-and-dog polling app, a polyglot application.

[Slides showing the polling application's architecture]

Each part of this application can be written and maintained by a different team, and Docker ties the pieces together.

[Slide listing the components needed to run the application]

The above are the components required to get the Docker application up and running.


Docker Swarm is a Docker cluster manager: you run your usual docker commands against it, and they are executed across the whole cluster instead of on just one machine.

The following is the Docker Swarm architecture:

[Diagram of the Docker Swarm architecture]

Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.

One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.

The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.

While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard HTTP methods.

The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.

When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.

This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.
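The register-then-query flow described above can be sketched with a toy in-process registry. Real discovery tools expose the same operations through a distributed key-value store over HTTP; the class and method names below are illustrative, not any specific tool's API.

```python
import random

class ServiceRegistry:
    """Toy in-process model of a service discovery registry. Real tools
    (etcd, consul, zookeeper) distribute this state across hosts and
    expose it over HTTP; the operations are the same."""

    def __init__(self):
        self._services = {}  # service name -> list of endpoint dicts

    def register(self, name, host, port, **metadata):
        # A service announces itself when it comes online.
        entry = {"host": host, "port": port, **metadata}
        self._services.setdefault(name, []).append(entry)

    def deregister(self, name, host, port):
        # Remove an instance that has gone away.
        self._services[name] = [
            e for e in self._services.get(name, [])
            if not (e["host"] == host and e["port"] == port)
        ]

    def lookup(self, name):
        # A consumer queries a predefined endpoint for current instances.
        return list(self._services.get(name, []))

# A MySQL instance comes online and registers its connection details:
registry = ServiceRegistry()
registry.register("mysql", "10.0.0.5", 3306, user="app")

# A load balancer discovers its web backends by querying the registry:
registry.register("web", "10.0.0.10", 8080)
registry.register("web", "10.0.0.11", 8080)
backends = registry.lookup("web")
target = random.choice(backends)  # naive stand-in for real balancing
```

When instances register and deregister dynamically, the load balancer simply re-runs the lookup to pick up the new set of backends — the dynamic reconfiguration described above.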

What Are Some Common Service Discovery Tools?

Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.

Some of the most common service discovery tools are:

  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an HTTP API and has a command-line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.

Some other projects that expand basic service discovery are:

  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.
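The idea behind confd and vulcand — regenerating load-balancer configuration from whatever is currently registered — can be sketched as a small render function. The HAProxy backend/server directive layout below is a simplified illustration; real confd uses its own template files and watches the store for changes.

```python
def render_haproxy_backends(service_name, endpoints):
    """Render an HAProxy-style backend section from the endpoints currently
    present in the service discovery store (confd's core idea). The directive
    layout is simplified for illustration."""
    lines = [f"backend {service_name}"]
    for i, (host, port) in enumerate(endpoints):
        # "check" asks HAProxy to health-check this backend server.
        lines.append(f"    server {service_name}-{i} {host}:{port} check")
    return "\n".join(lines)

# Re-rendering after the store changes yields an updated config fragment:
fragment = render_haproxy_backends(
    "web", [("10.0.0.10", 8080), ("10.0.0.11", 8080)]
)
```

A watcher process would diff the rendered fragment against the file on disk and reload HAProxy whenever it changes.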

[Slides showing the demo commands]

The command shown in the talk creates a Consul machine droplet on DigitalOcean.

[Slide showing the Swarm master creation command]

The next command creates the Docker Swarm master, which attaches to Consul.


Docker Swarm lets you define scheduling strategies in a very fine-grained way.


To scale up, all you have to type is docker-compose scale <your-service-name>=<count> and you are done.

Auto-scaling will need a monitoring service to be plugged in.