Configure Prometheus with Kubernetes

Quick start

To quickly start all components, run:

kubectl apply \

This creates the monitoring namespace and brings up all components in it.
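Assuming the manifests for this setup live under a ./manifests directory (the same location used by the dashboard job below; the exact path is an assumption), the full invocation would look roughly like this:

# apply every manifest in the (assumed) ./manifests directory into the cluster
kubectl apply \
  --filename ./manifests/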

To shut down all components again you can just delete that namespace:

kubectl delete namespace monitoring

Default Dashboards

If you want to re-import the default dashboards from this setup run this job:

kubectl apply --filename ./manifests/grafana/grafana-import-dashboards-job.yaml

In case the job already exists from an earlier run, delete it first:

kubectl --namespace monitoring delete job grafana-import-dashboards

More Dashboards

See the Grafana community dashboards and plugins sites for further examples.

To add a new graph, go to the Grafana UI -> Dashboards -> New.

Select Graph -> click on “Panel title” -> Edit. Then, in the Metrics section, add the following query in the query box:

sum by (status) (irate(kubelet_docker_operations[5m]))

Select Prometheus as the data source.

Graph lines will appear; in this way you can add further graphs in Grafana.


What is Graphite?

What Graphite is and is not

Graphite does two things:

  1. Store numeric time-series data
  2. Render graphs of this data on demand

What Graphite does not do is collect data for you; however, there are many tools out there that know how to send data to Graphite. Even though it often requires a little code, sending data to Graphite is very simple.
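As an illustration of how simple the protocol is, a single datapoint can be pushed to Carbon's plaintext listener (port 2003 by default) from the shell; the host name below is a placeholder:

# send one datapoint as "metric.path value unix-timestamp" to the plaintext port
echo "local.random.diceroll 4 $(date +%s)" | nc -q0 graphite.example.com 2003
# (-q0 closes the connection once stdin is exhausted; omit it if your nc lacks the flag)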

About the project

Graphite is an enterprise-scale monitoring tool that runs well on cheap hardware. It was originally designed and written by Chris Davis at Orbitz in 2006 as a side project that ultimately grew to be a foundational monitoring tool. In 2008, Orbitz allowed Graphite to be released under the open source Apache 2.0 license. Since then Chris has continued to work on Graphite and has deployed it at other companies, including Sears, where it serves as a pillar of the e-commerce monitoring system. Today many large companies use it.

The architecture in a nutshell

Graphite consists of 3 software components:

  1. carbon – a Twisted daemon that listens for time-series data
  2. whisper – a simple database library for storing time-series data (similar in design to RRD)
  3. graphite webapp – A Django webapp that renders graphs on-demand using Cairo

Feeding in your data is pretty easy; typically most of the effort is in collecting the data to begin with. As you send datapoints to Carbon, they become immediately available for graphing in the webapp. The webapp offers several ways to create and display graphs, including a simple URL API for rendering that makes it easy to embed graphs in other webpages.
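For example, a PNG of the last hour of a metric can be requested straight from the render endpoint by passing a target expression, a time range and an output format as query parameters (the host and metric names below are placeholders):

http://graphite.example.com/render?target=some.metric.path&from=-1h&format=png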


What is Graphite?

Graphite is a highly scalable real-time graphing system. As a user, you write an application that collects the numeric time-series data you are interested in graphing and sends it to Graphite’s processing backend, carbon, which stores the data in Graphite’s specialized database. The data can then be visualized through Graphite’s web interfaces.

The Carbon Daemons

When we talk about “Carbon” we mean one or more of various daemons that make up the storage backend of a Graphite installation. In simple installations, there is typically only one daemon, carbon-cache.py. As an installation grows, the carbon-relay.py and carbon-aggregator.py daemons can be introduced to distribute metrics load and perform custom aggregations, respectively.

All of the carbon daemons listen for time-series data and can accept it over a common set of protocols. However, they differ in what they do with the data once they receive it. This document gives a brief overview of what each daemon does and how you can use them to build a more sophisticated storage backend.

carbon-cache.py

carbon-cache.py accepts metrics over various protocols and writes them to disk as efficiently as possible. This requires caching metric values in RAM as they are received, and flushing them to disk on an interval using the underlying whisper library. It also provides a query service for in-memory metric datapoints, used by the Graphite webapp to retrieve “hot data”.

carbon-cache.py requires some basic configuration files to run:

carbon.conf: The [cache] section tells carbon-cache.py what ports (2003/2004/7002), protocols (newline delimited, pickle) and transports (TCP/UDP) to listen on.
storage-schemas.conf: Defines a retention policy for incoming metrics based on regex patterns. This policy is passed to whisper when the .wsp file is pre-allocated, and dictates how long data is stored for.
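A minimal sketch of the relevant [cache] settings, with illustrative values (the defaults shipped with Carbon may differ):

# carbon.conf
[cache]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003      # newline-delimited plaintext protocol
PICKLE_RECEIVER_PORT = 2004    # pickle protocol for batched metrics
CACHE_QUERY_PORT = 7002        # query port used by the Graphite webapp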

As the number of incoming metrics increases, one carbon-cache.py instance may not be enough to handle the I/O load. To scale out, simply run multiple carbon-cache.py instances (on one or more machines) behind a carbon-aggregator.py or carbon-relay.py.

carbon-relay.py

carbon-relay.py serves two distinct purposes: replication and sharding.

When running with RELAY_METHOD = rules, a carbon-relay.py instance can run in place of a carbon-cache.py server and relay all incoming metrics to multiple backend carbon-cache.py's running on different ports or hosts.

In RELAY_METHOD = consistent-hashing mode, a DESTINATIONS setting defines a sharding strategy across multiple carbon-cache.py backends. The same consistent hashing list can be provided to the graphite webapp via CARBONLINK_HOSTS to spread reads across the multiple backends.

carbon-relay.py is configured via:

carbon.conf: The [relay] section defines listener host/ports and a RELAY_METHOD.
relay-rules.conf: With RELAY_METHOD = rules set, pattern/servers tuples in this file define which metrics matching certain regex rules are forwarded to which hosts.

carbon-aggregator.py

carbon-aggregator.py can be run in front of carbon-cache.py to buffer metrics over time before reporting them into whisper. This is useful when granular reporting is not required, and can help reduce I/O load and whisper file sizes due to lower retention policies.

carbon-aggregator.py is configured via:

carbon.conf: The [aggregator] section defines listener and destination host/ports.
aggregation-rules.conf: Defines a time interval (in seconds) and aggregation function (sum or average) for incoming metrics matching a certain pattern. At the end of each interval, the values received are aggregated and published to carbon-cache.py as a single metric.

carbon-aggregator-cache.py

carbon-aggregator-cache.py combines both carbon-aggregator.py and carbon-cache.py. This is useful to reduce the resource and administration overhead of running both daemons.

carbon-aggregator-cache.py is configured via:

carbon.conf: The [aggregator-cache] section defines listener and destination host/ports.
relay-rules.conf: See the carbon-relay.py section.
aggregation-rules.conf: See the carbon-aggregator.py section.
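A minimal sketch of the two rules files, with illustrative patterns and hosts (all names and addresses below are placeholders):

# relay-rules.conf – forward collectd metrics to two caches, everything else to a default
[collectd]
pattern = ^collectd\.
servers = 127.0.0.1:2004, 10.0.0.2:2004

[default]
default = true
servers = 127.0.0.1:2004

# aggregation-rules.conf – every 60s, sum per-host request counts into one metric
<env>.applications.<app>.all.requests (60) = sum <env>.applications.<app>.*.requests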

The Whisper Database

Whisper is a fixed-size database, similar in design and purpose to RRD (round-robin-database). It provides fast, reliable storage of numeric data over time. Whisper allows for higher resolution (seconds per point) of recent data to degrade into lower resolutions for long-term retention of historical data.

Data Points

Data points in Whisper are stored on-disk as big-endian double-precision floats. Each value is paired with a timestamp in seconds since the UNIX Epoch (01-01-1970). The data value is parsed by the Python float() function and as such behaves in the same way for special strings such as 'inf'. Maximum and minimum values are determined by the Python interpreter’s allowable range for float values which can be found by executing:

python -c 'import sys; print(sys.float_info)'

Archives: Retention and Precision

Whisper databases contain one or more archives, each with a specific data resolution and retention (defined in number of points or max timestamp age). Archives are ordered from the highest-resolution and shortest retention archive to the lowest-resolution and longest retention period archive.

To support accurate aggregation from higher to lower resolution archives, the precision of a longer retention archive must be divisible by the precision of the next lower retention archive. For example, an archive with 1 data point every 60 seconds can have a lower-resolution archive following it with a resolution of 1 data point every 300 seconds, because 60 cleanly divides 300. In contrast, a 180 second precision (3 minutes) could not be followed by a 600 second precision (10 minutes) because the ratio of points to be propagated from the first archive to the next would be 3 1/3, and Whisper will not do partial point interpolation.

The total retention time of the database is determined by the archive with the highest retention as the time period covered by each archive is overlapping (see Multi-Archive Storage and Retrieval Behavior). That is, a pair of archives with retentions of 1 month and 1 year will not provide 13 months of data storage as may be guessed. Instead, it will provide 1 year of storage – the length of its longest archive.
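For instance, a storage-schemas.conf entry like the following (an illustrative policy, not taken from any particular setup) defines three archives whose precisions divide evenly into one another, and whose longest retention, two years, is the total retention of the database:

[example]
pattern = ^example\.
# 10-second data for 6 hours, 1-minute data for 30 days, 10-minute data for 2 years
retentions = 10s:6h,1m:30d,10m:2y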

Rollup Aggregation

Whisper databases with more than a single archive need a strategy to collapse multiple data points when the data rolls up into a lower-precision archive. By default, an average function is used. Available aggregation methods are:

  • average
  • sum
  • last
  • max
  • min
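The method used for a given metric (and the fraction of points that must be present for a rollup to occur) is selected per pattern in storage-aggregation.conf; a minimal sketch with illustrative patterns:

[counters]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average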

Multi-Archive Storage and Retrieval Behavior

When Whisper writes to a database with multiple archives, the incoming data point is written to all archives at once. The data point will be written to the highest resolution archive as-is, and will be aggregated by the configured aggregation method (see Rollup Aggregation) and placed into each of the higher-retention archives. If you need aggregation of the highest resolution points, please consider using carbon-aggregator for that purpose.

When data is retrieved (scoped by a time range), the first archive which can satisfy the entire time period is used. If the time period overlaps an archive boundary, the lower-resolution archive will be used. This allows for a simpler behavior while retrieving data as the data’s resolution is consistent through an entire returned series.
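The whisper package ships small command-line tools that make this behaviour easy to observe; a quick sketch (the file name and values are arbitrary):

# create a database with two archives, write one point, then read it back
whisper-create.py test.wsp 60s:1d 300s:30d
whisper-update.py test.wsp $(date +%s):42
whisper-fetch.py test.wsp --from=$(date --date='10 minutes ago' +%s)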

Configuring Graphite on CentOS 7

Clone the source code:
git clone https://github.com/graphite-project/graphite-web.git
cd graphite-web
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/whisper.git
cd whisper
git checkout 0.9.x
cd ..

Configure whisper:
pushd whisper
sudo python setup.py install

Configure carbon:
pushd carbon
sudo python setup.py install
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf

storage-schemas.conf holds the schema definitions for Whisper files; the data retention times are defined in this file. By default it retains everything for one day. Note that once Graphite is running, changing this file does not resize Whisper files that already exist; you can use whisper-resize.py for that.
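For example, to apply a new retention policy to a Whisper file that already exists on disk (the path and retentions below are illustrative):

sudo whisper-resize.py /opt/graphite/storage/whisper/some/metric.wsp 60s:1d 5m:30d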

Configure Graphite
pushd graphite-web
Install the unmet dependencies:
sudo apt-get install python-cairo python-django python-django-tagging python-memcache python-ldap

Configure the webapp:
pushd graphite-web
sudo python setup.py install

Configure Graphite webapp using Apache:
Install apache and mod_wsgi:
sudo apt-get install apache2 libapache2-mod-wsgi

Configure graphite virtual host:
sudo cp graphite-web/examples/example-graphite-vhost.conf /etc/apache2/sites-available/graphite-vhost.conf
sudo ln -s /etc/apache2/sites-available/graphite-vhost.conf /etc/apache2/sites-enabled/graphite-vhost.conf
sudo unlink /etc/apache2/sites-enabled/000-default.conf

Edit /etc/apache2/sites-available/graphite-vhost.conf and add:
WSGISocketPrefix /var/run/apache2/wsgi

Then, in the same file, add:
<Directory /opt/graphite/conf/>
    Options FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
Reload apache configurations:
sudo service apache2 reload

Sync the SQLite database for graphite-web:
cp /opt/graphite/webapp/graphite/local_settings.py.example /opt/graphite/webapp/graphite/local_settings.py
cd /opt/graphite/webapp/graphite
You can turn on debugging for graphite-web by adding DEBUG = True to local_settings.py.
sudo python manage.py syncdb
While doing the db sync you will be asked to create a superuser for Graphite; create one and set a password for it.
Change the owner of the Graphite storage directory to the user that Apache runs as:
sudo chown -R www-data:www-data /opt/graphite/storage/

Configure nginx instead of apache:

Install necessary packages:
sudo apt-get install nginx php5-fpm uwsgi-plugin-python uwsgi
Configure nginx and uwsgi:
cd /opt/graphite/conf/
sudo cp graphite.wsgi.example wsgi.py
Create a file /etc/nginx/sites-available/graphite-vhost.conf and add the following to it:
server {
    listen 8080;
    server_name graphite;
    root /opt/graphite/webapp;
    error_log /opt/graphite/storage/log/webapp/error.log error;
    access_log /opt/graphite/storage/log/webapp/access.log;

    location / {
        include uwsgi_params;
        # point uwsgi_pass at the socket defined in /etc/uwsgi/apps-available/graphite.ini
    }
}
Enable the nginx server
sudo ln -s /etc/nginx/sites-available/graphite-vhost.conf /etc/nginx/sites-enabled/graphite-vhost.conf
Create a file /etc/uwsgi/apps-available/graphite.ini and add the following to it:
[uwsgi]
processes = 2
# set a socket (TCP address or unix socket path) and point the nginx uwsgi_pass at the same value
socket =
gid = www-data
uid = www-data
chdir = /opt/graphite/conf
module = wsgi:application
sudo ln -s /etc/uwsgi/apps-available/graphite.ini /etc/uwsgi/apps-enabled/graphite.ini
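As a sketch of how the two sides fit together, assuming a local TCP socket on port 3031 (an arbitrary choice), the matching lines would be:

# /etc/uwsgi/apps-available/graphite.ini
socket = 127.0.0.1:3031

# /etc/nginx/sites-available/graphite-vhost.conf, inside location /
uwsgi_pass 127.0.0.1:3031;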

Restart services:
sudo /etc/init.d/uwsgi restart
sudo /etc/init.d/nginx restart

Start Carbon (the metric storage daemon):
cd /opt/graphite/
./bin/carbon-cache.py start


Access the graphite home page:
The Graphite homepage is available at http://<ip-of-graphite-host>

Connect graphite to DSP-Core:

To connect your DSP-Core with Graphite you need to install the carbon agent on your
DSP-Core machine, so that the carbon agent can send data to your Graphite host, where it can be
displayed in the web UI. Follow these steps to connect your DSP-Core with Graphite.
Install carbon on the DSP-Core machine:
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x

cd ..
pushd carbon
sudo python setup.py install
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf

Configure DSP-Core to send data to graphite:
You need to add the following to /data1/deploy/dsp/current/dsp-core/conf/bootstrap.json:
"carbon-uri": ["<IP-of-graphite-host>:2003"]

pyDash – A Web Based Linux Performance Monitoring Tool

pydash is a lightweight web-based monitoring tool for Linux written in Python and Django plus Chart.js. It has been tested and can run on the following mainstream Linux distributions: CentOS, Fedora, Ubuntu, Debian, Arch Linux, Raspbian as well as Pidora.

You can use it to keep an eye on your Linux PC/server resources such as CPUs, RAM, network stats, processes (including online users) and more. The dashboard is developed entirely using Python libraries provided in the main Python distribution, so it has few dependencies; you don’t need to install many packages or libraries to run it.

In this article, we will show you how to install pydash to monitor Linux server performance.

How to Install pyDash in Linux System

1. First install required packages: git and Python pip as follows:

-------------- On Debian/Ubuntu -------------- 
$ sudo apt-get install git python-pip
-------------- On CentOS/RHEL -------------- 
# yum install epel-release
# yum install git python-pip
-------------- On Fedora 22+ --------------
# dnf install git python-pip

2. If you have git and Python pip installed, next, install virtualenv which helps to deal with dependency issues for Python projects, as below:

# pip install virtualenv
$ sudo pip install virtualenv

3. Now using the git command, clone the pydash repository into your home directory like so:

# git clone https://github.com/k3oni/pydash.git
# cd pydash

4. Next, create a virtual environment for your project called pydashtest using the virtualenv command below.

$ virtualenv pydashtest #give a name for your virtual environment like pydashtest

Create Virtual Environment

Important: Take note of the virtual environment’s bin directory path highlighted in the screenshot above; yours could be different depending on where you cloned the pydash folder.

5. Once you have created the virtual environment (pydashtest), you must activate it before using it as follows.

$ source /home/aaronkilik/pydash/pydashtest/bin/activate

Active Virtual Environment

From the screenshot above, you’ll note that the PS1 prompt changes indicating that your virtual environment has been activated and is ready for use.

6. Now install the pydash project requirements; if you are curious, view the contents of requirements.txt using the cat command, then install them using pip as shown below.

$ cat requirements.txt
$ pip install -r requirements.txt

7. Now move into the pydash directory containing settings.py, or simply run the command below to open the file, and change the SECRET_KEY to a custom value.

$ vi pydash/settings.py

Set Secret Key

Save the file and exit.

8. Afterward, run the Django command below to create the project database, install Django’s auth system, and create a project superuser.

$ python manage.py syncdb

Answer the questions below according to your scenario:

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
Email address:
Password: ###########
Password (again): ############

Create Project Database

9. At this point, all should be set, now run the following command to start the Django development server.

$ python manage.py runserver

10. Next, open your web browser and go to http://127.0.0.1:8000 to get the web dashboard login interface. Enter the superuser name and password you created while creating the database and installing Django’s auth system in step 8, and click Sign In.
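If pydash is running on a remote server rather than your local machine, the development server can be bound to an external interface and the dashboard reached at that host's address instead; the bind address and port below are an assumption:

$ python manage.py runserver 0.0.0.0:8000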

pyDash Login Interface

11. Once you log in to the pydash main interface, you will get a section for monitoring general system info, CPU, memory and disk usage, together with the system load average.

Simply scroll down to view more sections.

pyDash Server Performance Overview

12. Next is a screenshot of pydash showing a section for keeping track of network interfaces, IP addresses, Internet traffic, disk reads/writes, online users and netstats.

pyDash Network Overview

13. Next is a screenshot of the pydash main interface showing a section to keep an eye on active processes on the system.

pyDash Active Linux Processes