Yum: Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds

  • First thing to try is the usual
    yum clean all
  • You might be running third-party repositories without yum-plugin-priorities installed.
    This can compromise your system, so please install and configure yum-plugin-priorities.
  • You could also try the following:

yum --disableplugin=fastestmirror update
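If that helps, the fastestmirror plugin can also be disabled permanently. A minimal sketch, assuming the plugin config lives at its usual CentOS path /etc/yum/pluginconf.d/fastestmirror.conf:

[main]
enabled=0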

  • minrate: sets the low-speed threshold in bytes per second. If the server is sending data slower than this for at least 'timeout' seconds, Yum aborts the connection. The default is 1000.

  • timeout: number of seconds to wait for a connection before timing out. Defaults to 30 seconds. This may be too short for extremely overloaded sites.


You can reduce minrate and/or increase timeout. Just add or edit these parameters in the [main] section of /etc/yum.conf. For example:

[main]
...
minrate=1
timeout=300

Diamond installation on CentOS 7

$ yum install make rpm-build python-configobj python-setuptools
$ git clone https://github.com/python-diamond/Diamond
$ cd Diamond
$ make buildrpm
Then use the package you built like this:

$ yum localinstall --nogpgcheck dist/diamond-4.0.449-0.noarch.rpm
$ cp /etc/diamond/{diamond.conf.example,diamond.conf}
$ $EDITOR /etc/diamond/diamond.conf
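The relevant part of /etc/diamond/diamond.conf is the handler configuration. A minimal sketch, assuming Carbon/Graphite is reachable on localhost:2003 (adjust host and port to your setup):

[server]
handlers = diamond.handler.graphite.GraphiteHandler

[handlers]

[[GraphiteHandler]]
host = 127.0.0.1
port = 2003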
# Start Diamond service via service manager.

$ service diamond start

diamond-setup -C ElasticSearchCollector
diamond-setup -C NetworkCollector

Issues

  1. failed to connect socket to '/var/run/libvirt/libvirt-sock-ro': No such file or directory

Execute: egrep '(vmx|svm)' /proc/cpuinfo

  2. If the above command returns any output showing vmx or svm, your hardware supports VT; otherwise it does not. If it does, install the virtualization packages:

yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer
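If the libvirt-sock error persists after installing the packages, the missing socket usually just means libvirtd is not running. On CentOS 7 it can be enabled and started with systemd:

sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo systemctl status libvirtd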

Configuring Graphite on CentOS 7

Clone the source code:
git clone https://github.com/graphite-project/graphite-web.git
cd graphite-web
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/whisper.git
cd whisper
git checkout 0.9.x
cd ..

Configure whisper:
pushd whisper
sudo python setup.py install
popd

Configure carbon:
pushd carbon
sudo python setup.py install
popd
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf
popd

storage-schemas.conf contains the schema definitions for Whisper files; the data retention policy is defined in this file. By default it retains everything for one day. Once Graphite is configured, changing this file won't change the retention of existing Whisper files; use whisper-resize.py for that.
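As an illustration, a hypothetical retention policy that keeps 10-second points for a day and 1-minute points for 30 days could look like this (the section name, pattern and retentions are assumptions, not defaults):

[servers]
pattern = ^servers\.
retentions = 10s:1d,1m:30d

For Whisper files that already exist, apply a new retention with whisper-resize.py, e.g. whisper-resize.py <metric>.wsp 10s:1d,1m:30d.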

Configure Graphite:
pushd graphite-web
python check-dependencies.py
Install the unmet dependencies:
sudo apt-get install python-cairo python-django python-django-tagging python-memcache python-ldap python-txamqp
popd

Configure the webapp:
pushd graphite-web
sudo python setup.py install
popd

Configure Graphite webapp using Apache:
Install apache and mod_wsgi:
sudo apt-get install apache2 libapache2-mod-wsgi

Configure graphite virtual host:
sudo cp graphite-web/examples/example-graphite-vhost.conf /etc/apache2/sites-available/graphite-vhost.conf
sudo ln -s /etc/apache2/sites-available/graphite-vhost.conf /etc/apache2/sites-enabled/graphite-vhost.conf
sudo unlink /etc/apache2/sites-enabled/000-default.conf
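On Debian/Ubuntu the same enable/disable steps can also be done with the Apache helper scripts instead of managing the symlinks by hand:

sudo a2ensite graphite-vhost
sudo a2dissite 000-default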

Edit /etc/apache2/sites-available/graphite-vhost.conf and add
WSGISocketPrefix /var/run/apache2/wsgi
Edit /etc/apache2/sites-available/graphite-vhost.conf and add
<Directory /opt/graphite/conf/>
Options FollowSymlinks
AllowOverride none
Require all granted
</Directory>
Reload apache configurations:
sudo service apache2 reload

Sync sqlite database for graphite-web:
cp /opt/graphite/webapp/graphite/local_settings.py.example /opt/graphite/webapp/graphite/local_settings.py
cd /opt/graphite/webapp/graphite
You can turn on debugging for graphite-web by adding the following to local_settings.py:
DEBUG=True
sudo python manage.py syncdb
While doing the db sync you will be asked to create a superuser for Graphite. Create a superuser and set a password for it.
Change the owner of the Graphite storage directory to the user Apache runs as:
sudo chown -R www-data:www-data /opt/graphite/storage/

Configure nginx instead of apache:

Install necessary packages:
sudo apt-get install nginx php5-fpm uwsgi-plugin-python uwsgi
Configure nginx and uwsgi:
cd /opt/graphite/conf/
sudo cp graphite.wsgi.example wsgi.py
Create a file /etc/nginx/sites-available/graphite-vhost.conf and add the following to it:
server {
    listen 8080;
    server_name graphite;
    root /opt/graphite/webapp;
    error_log /opt/graphite/storage/log/webapp/error.log error;
    access_log /opt/graphite/storage/log/webapp/access.log;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}
Enable the nginx server
sudo ln -s /etc/nginx/sites-available/graphite-vhost.conf /etc/nginx/sites-enabled/graphite-vhost.conf
Create a file /etc/uwsgi/apps-available/graphite.ini and add the following to it:
[uwsgi]
processes = 2
socket = 127.0.0.1:3031
gid = www-data
uid = www-data
chdir = /opt/graphite/conf
module = wsgi:application
sudo ln -s /etc/uwsgi/apps-available/graphite.ini /etc/uwsgi/apps-enabled/graphite.ini
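Before restarting, it is worth validating the nginx configuration:

sudo nginx -t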

Restart services:
sudo /etc/init.d/uwsgi restart
sudo /etc/init.d/nginx restart

Start Carbon (the data aggregator):
cd /opt/graphite/
./bin/carbon-cache.py start
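To check that Carbon is accepting metrics on the plaintext port (2003 by default), you can push a test metric with netcat; the metric name below is just an example, and depending on your netcat variant you may need -q0 or -N so it exits after sending:

echo "test.carbon.check 42 $(date +%s)" | nc localhost 2003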

 

Access the graphite home page:
The Graphite homepage is available at http://<ip-of-graphite-host>

Connect graphite to DSP-Core:

To connect your DSP-Core with Graphite, you need to install the Carbon agent on your DSP-Core machine so that it can send data to your Graphite host, where it can be displayed in the web UI. Follow these steps to connect your DSP-Core with Graphite.
Install Carbon on the DSP-Core machine:
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x

cd ..
pushd carbon
sudo python setup.py install
popd
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf

popd
Configure DSP-Core to send data to Graphite:
You need to add the following to /data1/deploy/dsp/current/dsp-core/conf/bootstrap.json:
"carbon-uri": ["<IP-of-graphite-host>:2003"]
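For example, if the Graphite host were 10.0.0.5, the entry would look like the sketch below. Keep whatever keys your bootstrap.json already contains; the IP is a placeholder and only the carbon-uri entry is being added:

{
    "carbon-uri": ["10.0.0.5:2003"]
}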

Rabbitmq standalone and cluster installation

  • Install RabbitMQ in the VM. Following are the installation steps.
  • Verify whether the Erlang package is installed:
  • rpm -q erlang-solutions-1.0-1.noarch.rpm
  • sudo wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
  • sudo yum update (NOTE: use the command "yum --releasever=6.7 update" if you want a specific version).
  • su -c 'yum list rabbitmq', or use:
  • yum install rabbitmq-server
  • sudo rpm -Uvh http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.0/rabbitmq-server-3.6.0-1.noarch.rpm
  • sudo /etc/init.d/rabbitmq-server start
  • Uncomment the loopback line in the security section of rabbitmq.config: {loopback_users, []} (see the sketch after this list)
  • rabbitmq-plugins enable rabbitmq_management
  • Configure the port: a firewall rule should be in place to accept the TCP connection.
  • Use the following commands: lokkit -p <rabbitMQ port>:tcp and lokkit -p <rabbitMQ management port>:tcp
  • The default guest/guest account should be disabled. Change the user and user permissions using the following commands:
  • Note: the password should be 16 characters, with no special characters, and should be generated by KeePass.
  • rabbitmqctl add_user <username> <password>
  • rabbitmqctl set_user_tags <username> administrator
  • rabbitmqctl change_password guest guest123
  • Disable the guest user by changing its password once the created user is tested.
  • Avoid using RabbitMQ's default ports; configure ports of your own choice. Edit the ports in the rabbitmq.config file: uncomment and edit {tcp_listeners, [<rabbitMQ port>]} and {listener, [{port, <rabbitMQ management port>}]} (see the sketch after this list).
  • Install the management console of RabbitMQ using the following command:
  • Copy /usr/share/doc/rabbitmq-server/rabbitmq.config.example into the /etc/rabbitmq folder and rename it to rabbitmq.config. Set the file permissions to 666.
  • sudo chkconfig rabbitmq-server on
  • sudo rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
    For RabbitMQ 3.6.*, the socat dependency is required:
    sudo yum install epel-release
    sudo yum install socat
  • sudo yum install -y erlang-18.2-1.el6
  • sudo rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
  • Install the Erlang package:
  • Download the Erlang package from the website:
  • Restart the RabbitMQ server using the command: sudo service rabbitmq-server restart
  • Make the following changes in the RabbitMQ console: go to Admin > click on the user and click on Set permissions. Check the permissions of the user; they should be the same as the guest user's.
  • Try to create a new queue to check that everything is working fine.
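Putting the config pieces from the list above together, the relevant part of /etc/rabbitmq/rabbitmq.config might look like the sketch below. The port numbers are placeholders, and the file is an Erlang term list, so keep the trailing dot:

[
  {rabbit, [
    {loopback_users, []},
    {tcp_listeners, [5674]}
  ]},
  {rabbitmq_management, [
    {listener, [{port, 15674}]}
  ]}
].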

 

Create RabbitMQ High Availability Cluster:

1) Stop RabbitMQ on the master and slave nodes. Ensure the service is stopped properly.

/etc/init.d/rabbitmq-server stop

2) Copy the file below to all nodes from the master. This cookie file needs to be the same across all nodes.

$ sudo cat /var/lib/rabbitmq/.erlang.cookie

3) Make sure you start all nodes after copying the cookie file from the master.

Start RabbitMQ in master and all nodes.

$ /etc/init.d/rabbitmq-server start

4) Then run the following commands in all the nodes, except the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset
$ rabbitmqctl start_app

5) Now, run the following commands in the master node:

$ rabbitmqctl stop_app
$ rabbitmqctl reset

6) Do not start the app yet.

Open port 4369 and 25672: lokkit -p 4369:tcp -p 25672:tcp
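lokkit is only present on older releases; on a CentOS 7 host running firewalld, the equivalent would be:

sudo firewall-cmd --permanent --add-port=4369/tcp --add-port=25672/tcp
sudo firewall-cmd --reload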

Stop the iptables on both master and slaves.

The following command is executed to join the slaves to the cluster:

$ rabbitmqctl join_cluster rabbit@slave1 rabbit@slave2

Replace slave1 and slave2 with the hostnames/IP addresses of the slave nodes. You can add as many slave nodes to the cluster as needed.

7) Start the app on the master machine:

$ rabbitmqctl start_app

8) Check the cluster status from any node in the cluster:

$ rabbitmqctl cluster_status

9) In the RabbitMQ management console, check that you can log in with the previous user and that all the previous settings are in place.

If not, create users with the following command:

rabbitmqctl add_user <username> <password>

give admin rights:

rabbitmqctl set_user_tags <username> administrator

rabbitmqctl add_vhost /

Give vhost rights by:

rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"

10) Create HA mirroring:

rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

This will mirror all queues.
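You can confirm that the policy took effect with:

rabbitmqctl list_policies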

11) Now start iptables. You have created a RabbitMQ HA cluster.

pyDash – A Web Based Linux Performance Monitoring Tool

pydash is a lightweight web-based monitoring tool for Linux written in Python and Django plus Chart.js. It has been tested and can run on the following mainstream Linux distributions: CentOS, Fedora, Ubuntu, Debian, Arch Linux, Raspbian as well as Pidora.

You can use it to keep an eye on your Linux PC/server resources such as CPU, RAM, network stats, processes, online users and more. The dashboard is developed entirely using Python libraries provided in the main Python distribution, so it has few dependencies; you don't need to install many packages or libraries to run it.

In this article, we will show you how to install pydash to monitor Linux server performance.

How to Install pyDash on a Linux System

1. First install required packages: git and Python pip as follows:

-------------- On Debian/Ubuntu -------------- 
$ sudo apt-get install git python-pip
-------------- On CentOS/RHEL -------------- 
# yum install epel-release
# yum install git python-pip
-------------- On Fedora 22+ --------------
# dnf install git python-pip

2. If you have git and Python pip installed, next, install virtualenv which helps to deal with dependency issues for Python projects, as below:

# pip install virtualenv
OR
$ sudo pip install virtualenv

3. Now, using the git command, clone the pydash repository into your home directory like so:

# git clone https://github.com/k3oni/pydash.git
# cd pydash

4. Next, create a virtual environment for your project called pydashtest using the virtualenv command below.

$ virtualenv pydashtest #give a name for your virtual environment like pydashtest

Create Virtual Environment

Important: Take note of the virtual environment's bin directory path highlighted in the screenshot above; yours could be different depending on where you cloned the pydash folder.

5. Once you have created the virtual environment (pydashtest), you must activate it before using it as follows.

$ source /home/aaronkilik/pydash/pydashtest/bin/activate

Active Virtual Environment

From the screenshot above, you’ll note that the PS1 prompt changes indicating that your virtual environment has been activated and is ready for use.

6. Now install the pydash project requirements; if you are curious, view the contents of requirements.txt using the cat command and then install them as shown below.

$ cat requirements.txt
$ pip install -r requirements.txt

7. Now move into the pydash directory containing settings.py, or simply run the command below to open this file and change the SECRET_KEY to a custom value.

$ vi pydash/settings.py

Set Secret Key
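Inside settings.py the change is a single line; the value below is only a placeholder, so use your own long random string:

SECRET_KEY = 'replace-this-with-a-long-random-string'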

Save the file and exit.

8. Afterward, run the Django command below to create the project database, install Django's auth system, and create a project superuser.

$ python manage.py syncdb

Answer the questions below according to your scenario:

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
Email address: aaronkilik@gmail.com
Password: ###########
Password (again): ############

Create Project Database

9. At this point all should be set; now run the following command to start the Django development server.

$ python manage.py runserver
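By default the development server listens only on 127.0.0.1; to reach the dashboard from another machine you can bind it explicitly (port 8000 as used in the next step):

$ python manage.py runserver 0.0.0.0:8000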

10. Next, open your web browser and type the URL http://127.0.0.1:8000/ to get the web dashboard login interface. Enter the superuser name and password you created while creating the database and installing Django's auth system in step 8, and click Sign In.

pyDash Login Interface

11. Once you log in to the pydash main interface, you will see a section for monitoring general system info, CPU, memory and disk usage, together with the system load average.

Simply scroll down to view more sections.

pyDash Server Performance Overview

12. Next is a screenshot of pydash showing a section for keeping track of interfaces, IP addresses, Internet traffic, disk reads/writes, online users and netstats.

pyDash Network Overview

13. Next is a screenshot of the pydash main interface showing a section to keep an eye on active processes on the system.

pyDash Active Linux Processes