Install Ruby on CentOS with RVM

Introduction

Whether you are preparing your VPS to try out a new application or find yourself in need of a solid, isolated Ruby installation, getting your system ready for work (in line with the CentOS design goals of stability and minimalism) can leave you feeling a little lost.

In this article, we focus on the simplest and quickest rock-solid way to get the latest Ruby interpreter (version 2.1.0) installed on a VPS running CentOS 6.5, using the Ruby Version Manager (RVM).

Glossary

1. Ruby Version Manager (RVM)

2. Understanding CentOS

3. Getting Started With Installation

  1. Preparing The System
  2. Downloading And Installing RVM
  3. Installing Ruby 2.1.0 On CentOS 6.5 Using RVM
  4. Setting Up Any Ruby Version As The Default Interpreter
  5. Working With Different Ruby Installations
  6. Working With RVM gemsets

Ruby Version Manager (RVM)

Ruby Version Manager, or RVM (and rvm as a command) for short, lets developers and system administrators quickly get started using Ruby and/or developing applications with a Ruby interpreter.

Not only does RVM support multiple versions of Ruby simultaneously, but it also comes with built-in tools to create and work with virtual environments called gemsets. With the help of RVM, it is possible to create any number of perfectly isolated (and self-contained) gemsets in which dependencies, packages, and the default Ruby installation are tailored to your needs and kept consistent across the different stages of deployment, so an application behaves the same way wherever it runs.

RVM gemsets

The power of RVM lies in its ability to create fully isolated Ruby environments, each of which acts like a completely separate installation. An application running inside such an environment can only see, and depend on, what that environment contains.
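
As a quick illustration of gemsets, here is a minimal sketch (the gemset name myapp is just an example, and Ruby 2.1.0 is installed later in this article):

# create an isolated gemset for a project
rvm gemset create myapp

# use Ruby 2.1.0 together with that gemset
rvm use 2.1.0@myapp

# list the gemsets available for the current Ruby
rvm gemset list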

Understanding CentOS

The CentOS operating system is derived from RHEL (Red Hat Enterprise Linux). The target users of these distributions are usually businesses that need their systems to run in the most stable way possible for long periods of time.

The main focus of CentOS, therefore, is stability, which is achieved by supplying tested, stable versions of applications.

The default applications shipped with CentOS are meant to be used by the system itself (and by supporting tools such as the package manager YUM) alone. It is neither recommended nor easy to work with them directly.

That is why we are going to prepare our CentOS 6.5 droplet with the necessary tools and then install a Ruby interpreter dedicated to running your applications.

Getting Started With Installation

Preparing The System

CentOS distributions are very lean. They do not come with many of the popular applications and tools that you are likely to need – and this is an intentional design choice as we have seen.

For our installation, however, we are going to need some libraries and tools (i.e. development-related tools) that are not shipped by default. Therefore, we need to download and install them before we continue.

For this purpose we will install the development tools using a YUM software group, which bundles a set of commonly used tools (applications) together, ready to download in one step.

As the first step, in order to get necessary development tools, run the following:

yum groupinstall -y development

or:

yum groupinstall -y 'development tools'

Note: The former (shorter) version might not work on older distributions of CentOS.
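
If you are not sure which group name your CentOS release uses, you can list the available software groups first:

# list the software groups yum knows about, filtering for development
yum grouplist | grep -i development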

Downloading And Installing RVM

After arming our system with the tools needed for developing (and deploying) applications, such as a generic compiler, we are ready to download and install RVM.

RVM is designed from the ground up to make the whole process of getting Ruby and managing environments easy. It is no surprise that getting RVM itself is simplified as well.

In order to download and install RVM, run the following:

curl -L get.rvm.io | bash -s stable

And to load RVM into the current shell environment using its setup script:

source /etc/profile.d/rvm.sh
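
To confirm that RVM is available in the current shell, check its version (the exact output depends on the release installed):

rvm --version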

Installing Ruby 2.1.0 On CentOS 6.5 Using RVM

With RVM downloaded and the shell environment configured, all that remains in order to work with Ruby 2.1.0 (or any other version) is the actual installation of Ruby from source, which RVM handles for us.

In order to install Ruby 2.1.0 from source using RVM, run the following:

rvm reload

To find the available Ruby versions, execute rvm list known; it will display the versions RVM can install. Then install 2.1.0:

rvm install 2.1.0

Setting Up Any Ruby Version As The Default Interpreter

If you are working with multiple applications which are already in production, it is a highly likely scenario that at some point you will need to use a different version of Ruby for a certain application.

However, in most situations you will probably want the latest version to be the default interpreter used to run everything else.

One of RVM’s excellent features is its ability to help you set a default Ruby version to be used generally and switch between them when necessary.

To check your current default interpreter, run the following:

ruby --version
# ruby command is linked to the selected version of Ruby Interpreter (i.e. 2.1.0)

To see all the installed Ruby versions, use the following command:

rvm list rubies

To set a Ruby version as the default, run the following:

# Usage: rvm use [version] --default
rvm use 2.1.0 --default
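
If you install additional Ruby versions later on, you can switch between them for the current shell session without changing the default; the version numbers below are only examples:

# use another installed Ruby for this shell session only
rvm use 2.0.0

# return to the default interpreter
rvm use default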

Installing Kubernetes 1.8.1 on CentOS 7 with flannel

Prerequisites:-

You should have at least two VMs (1 master and 1 slave) available before creating the cluster, in order to test the full functionality of k8s.

1] Master :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

2] Slave :-

Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD (suggested)

3] Also, make sure of the following things.

  • Network interconnectivity between VMs.
  • hostnames
  • Prefer to give Static IP.
  • DNS entries
  • Disable SELinux (a non-interactive sketch is shown after this list)

$ vi /etc/selinux/config

  • Disable and stop the firewall (if you are not comfortable configuring the required firewall rules)

$ systemctl stop firewalld

$ systemctl disable firewalld
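
As a sketch of the SELinux step above, the same change can be made non-interactively, assuming the stock SELINUX=enforcing line is present in the file; setenforce 0 (also used in Step 1) applies a relaxed mode to the running system without a reboot:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

$ setenforce 0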

The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.

Step 1] Installing kubelet and kubeadm on all your hosts

$ ARCH=x86_64

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-${ARCH}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ setenforce 0

$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni

$ systemctl enable docker && systemctl start docker

$ systemctl enable kubelet && systemctl start kubelet
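
An optional status check at this point; note that on a fresh install the kubelet may keep restarting until kubeadm init has been run, as described just below:

$ systemctl status docker

$ systemctl status kubelet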

You might run into an issue where the kubelet service does not start. You can see the error in /var/log/messages. If you see an error like the following:
Oct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Oct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE

Then you will have to run kubeadm init first, as in the next step, and then start the kubelet service.

Step 2.1] Initializing your master

$ kubeadm init

Note:-

  1. Execute the above command on the master node. This command will select one of the node's interfaces to be used for the API server. If you want to use another interface, pass "--apiserver-advertise-address=<ip-address>" as an argument, so the whole command will be like this:

$ kubeadm init --apiserver-advertise-address=<ip-address>

 

  2. K8s gives you the flexibility to use a pod network of your choice, such as flannel or Calico. I am using the flannel network here. For flannel we need to pass the pod network CIDR explicitly, so now the whole command will be:

$ kubeadm init --apiserver-advertise-address=<ip-address> --pod-network-cidr=10.244.0.0/16

Example:- $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0/16

Step 2.2] Start using cluster

$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
-> Use the same pod network CIDR as above, since it is also configured in the flannel yaml file that we are going to apply in step 3.

-> At the end of kubeadm init you will get a token along with a join command; make a note of it, as it will be used to join the slaves.
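
Once the kubeconfig is in place, you can confirm that kubectl can reach the new master:

$ kubectl get nodes

$ kubectl cluster-info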

 

Step 3] Installing a pod network

Several pod networks are supported by k8s, and the choice is up to you. For this demo I am using the flannel network. As of k8s-1.6 the cluster is more secure by default: it uses RBAC (Role Based Access Control), so make sure that the network you are going to use supports RBAC and k8s-1.6.

  1. Create RBAC Pods :

$ kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check whether the pods are being created or not:

$ kubectl get pods --all-namespaces

  2. Create Flannel pods :

$ kubectl apply -f   https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check whether the pods are being created or not:

$ kubectl get pods --all-namespaces -o wide

-> At this stage all your pods should be in the Running state.

-> The "-o wide" option gives more details, such as the pod IP and the slave on which it is deployed.

 

Step 4] Joining your nodes

 

SSH to the slave and execute the following command to join the existing cluster.

$ kubeadm join --token <token> <master-ip>:<master-port>

You might also have a ca-cert-hash in the join command; make sure you copy the entire join command from the init output to join the nodes.

Go to the master node and check whether the new slave has joined or not:

$ kubectl get nodes

-> If the slave is not Ready, wait for a few seconds; the new slave will join soon.

 

Step 5] Verify your cluster by running a sample nginx application

$ vi  sample_nginx.yaml

———————————————

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

——————————————————

$ kubectl create -f sample_nginx.yaml

 

Verify whether the pods are being created:

$ kubectl get pods

$ kubectl get deployments

 

Now, let's expose the deployment so that the service will be accessible to other pods in the cluster.

$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort

 

The above command will create a service with the name "nginx-service". The service listens on the port given by the "--port" option and forwards traffic to the "--target-port" of the pod. By default a service is accessible within the cluster only; the "NodePort" type is used so that it can also be reached via the host IP.

 

--type=NodePort :- when this option is given, k8s picks a free port in the range 30000-32767 and exposes the underlying service on that port on all the VMs of the cluster. If no such port is found, it returns an error.

 

Check whether the service has been created:

$ kubectl get svc
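
To find the exact NodePort that was allocated, you can query the service directly; the jsonpath expression below is just one way to do it:

$ kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'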

 

Try to curl the service IP from all the VMs, including the master; the nginx welcome page should be accessible:

$ curl <service-IP>:80

The service is also reachable on the NodePort via any node's IP:

$ curl <master-ip>:<nodePort>

$ curl <slave-IP>:<nodePort>

Execute these from all the VMs; the nginx welcome page should be accessible. You can also open the nginx home page in a browser.

Install KVM Hypervisor on CentOS 7.x and RHEL 7.x

KVM is open source hardware virtualization software with which we can create and run multiple Linux-based and Windows-based virtual machines simultaneously. KVM stands for Kernel-based Virtual Machine: when we install the KVM package, its kernel module is loaded into the running kernel and turns our Linux machine into a hypervisor.

In this post we will first demonstrate how to install the KVM hypervisor on CentOS 7.x and RHEL 7.x, and then we will install virtual machines.

Before proceeding with the KVM installation, let's check whether your system's CPU supports hardware virtualization.

Run the command below from the console.

[root@linuxtechi ~]# grep -E '(vmx|svm)' /proc/cpuinfo

We should see either vmx or svm in the output; otherwise the CPU doesn't support virtualization.
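
If you prefer a simple count instead of the full flag listing, the following returns a non-zero number when the virtualization extensions are present:

[root@linuxtechi ~]# grep -c -E '(vmx|svm)' /proc/cpuinfo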

Step:1 Install KVM and its associated packages

Run the following yum command to install KVM and its associated packages.

[root@linuxtechi ~]# yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils

Start and enable the libvirtd service

[root@linuxtechi ~]# systemctl start libvirtd
[root@linuxtechi ~]# systemctl enable libvirtd

Run the command below to check whether the KVM module is loaded or not

[root@linuxtechi ~]# lsmod | grep kvm
kvm_intel             162153  0
kvm                   525409  1 kvm_intel
[root@linuxtechi ~]#
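
With KVM loaded, you can create a first virtual machine using virt-install. The command below is only a sketch: the VM name, resource sizes, and the ISO path /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso are assumptions you should adapt to your environment.

# create a small test VM from a local installation ISO (adjust name, sizes, and ISO path)
[root@linuxtechi ~]# virt-install \
  --name centos7-vm1 \
  --ram 1024 \
  --vcpus 1 \
  --disk size=10 \
  --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso \
  --os-variant centos7.0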

GoCD installation on CentOS 7

Installation of the GoCD server using the package manager will require root access on the machine. You are also required to have Java version 8 installed for the server to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD server.

 

RPM based distributions (ie RedHat/CentOS/Fedora)

The GoCD server RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell —

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk #at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute

sudo yum install -y go-server

Alternatively, if you have the server RPM downloaded:

sudo yum install -y java-1.8.0-openjdk #at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-server-${version}.noarch.rpm

Managing the go-server service on linux

To manage the go-server service, you may use the following commands –

sudo /etc/init.d/go-server [start|stop|status|restart]

Once the installation is complete, the GoCD server will be started and it will print out the URL for the dashboard page. This will be http://localhost:8153/go
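
A quick way to verify that the server came up, using the service script and default port mentioned above:

sudo /etc/init.d/go-server status
curl -I http://localhost:8153/go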

Location of GoCD server files

The GoCD server installs its files in the following locations on your filesystem:

/var/lib/go-server       #contains the binaries and database
/etc/go                  #contains the pipeline configuration files
/var/log/go-server       #contains the server logs
/usr/share/go-server     #contains the start script
/etc/default/go-server   #contains all the environment variables with default values. These variable values can be changed as per requirement.

Installing GoCD agent on Linux

Installation of the GoCD agent using the package manager will require root access on the machine. You are also required to have Java version 8 (the same version as the GoCD server) for the agent to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD agent.

RPM based distributions (ie RedHat/CentOS/Fedora)

The GoCD agent RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell —

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk #at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute

sudo yum install -y go-agent

Alternatively, if you have the agent RPM downloaded:

sudo yum install -y java-1.8.0-openjdk #at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-agent-${version}.noarch.rpm

Managing the go-agent service on linux

To manage the go-agent service, you may use the following commands –

sudo /etc/init.d/go-agent [start|stop|status|restart]

Configuring the go-agent

After installing the go-agent service, you must first configure it with the hostname (or IP address) of your GoCD server. To do this:

  1. Open /etc/default/go-agent in your favourite text editor.
  2. Change the IP address (127.0.0.1) in the line GO_SERVER_URL=https://127.0.0.1:8154/go to the hostname (or IP address) of your GoCD server (a non-interactive sketch follows this list).
  3. Save the file and exit your editor.
  4. Run /etc/init.d/go-agent [start|restart] to (re)start the agent.

Note: You can override the default environment for the GoCD agent by editing the file /etc/default/go-agent
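
As a sketch of step 2 above, the same edit can be made non-interactively; replace 10.0.0.5 with the actual hostname or IP address of your GoCD server:

sudo sed -i 's|GO_SERVER_URL=https://127.0.0.1:8154/go|GO_SERVER_URL=https://10.0.0.5:8154/go|' /etc/default/go-agent
sudo /etc/init.d/go-agent restart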

GoCD has now been installed; you can open port 8153 and access the following URL in a browser:

http://<ip>:8153/go

Yum : Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds

  • First thing to try is the usual
    yum clean all
  • You might be running 3rd party repositories without having yum-plugin-priorities installed.
    This could compromise your system, so please install and configure yum-plugin-priorities.
  • You could also try the following:

yum --disableplugin=fastestmirror update

  • minrate: This sets the low speed threshold in bytes per second. If the server is sending data slower than this for at least 'timeout' seconds, yum aborts the connection. The default is '1000'.

  • timeout: Number of seconds to wait for a connection before timing out. Defaults to 30 seconds. This may be too short a time for extremely overloaded sites.


You can reduce minrate and/or increase timeout. Just add/edit these parameters in the [main] section of /etc/yum.conf. For example:

[main]
...
minrate=1
timeout=300