pyDash – A Web Based Linux Performance Monitoring Tool

pydash is a lightweight web-based monitoring tool for Linux written in Python and Django plus Chart.js. It has been tested and can run on the following mainstream Linux distributions: CentOS, Fedora, Ubuntu, Debian, Arch Linux, Raspbian as well as Pidora.

You can use it to keep an eye on your Linux PC/server resources such as CPU, RAM, network stats and processes, including online users and more. The dashboard is developed entirely using Python libraries provided in the main Python distribution, so it has very few dependencies; you don’t need to install many packages or libraries to run it.

In this article, we will show you how to install pydash to monitor Linux server performance.

How to Install pyDash in Linux System

1. First install required packages: git and Python pip as follows:

-------------- On Debian/Ubuntu -------------- 
$ sudo apt-get install git python-pip
-------------- On CentOS/RHEL -------------- 
# yum install epel-release
# yum install git python-pip
-------------- On Fedora 22+ --------------
# dnf install git python-pip

2. Once you have git and Python pip installed, next install virtualenv, which helps to deal with dependency issues for Python projects, as below:

# pip install virtualenv
OR
$ sudo pip install virtualenv

3. Now, using the git command, clone the pydash repository into your home directory like so:

# git clone https://github.com/k3oni/pydash.git
# cd pydash

4. Next, create a virtual environment for your project called pydashtest using the virtualenv command below.

$ virtualenv pydashtest #give a name for your virtual environment like pydashtest

Create Virtual Environment

Important: Take note of the virtual environment’s bin directory path highlighted in the screenshot above; yours could be different depending on where you cloned the pydash folder.

5. Once you have created the virtual environment (pydashtest), you must activate it before using it as follows.

$ source /home/aaronkilik/pydash/pydashtest/bin/activate

Active Virtual Environment

From the screenshot above, you’ll note that the PS1 prompt changes indicating that your virtual environment has been activated and is ready for use.

6. Now install the pydash project requirements; if you are curious, view the contents of requirements.txt using the cat command, then install them as shown below.

$ cat requirements.txt
$ pip install -r requirements.txt

7. Now move into the pydash directory containing settings.py, or simply run the command below to open the file, and change the SECRET_KEY to a custom value.

$ vi pydash/settings.py

Set Secret Key
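The line to change looks roughly like the following; the value shown here is only an illustrative placeholder, so substitute your own long random string:

SECRET_KEY = 'replace-this-with-your-own-long-random-string'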

Save the file and exit.

8. Afterward, run the Django command below to create the project database, install Django’s auth system and create a project superuser.

$ python manage.py syncdb

Answer the questions below according to your scenario:

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
Email address: aaronkilik@gmail.com
Password: ###########
Password (again): ############

Create Project Database

9. At this point, all should be set, now run the following command to start the Django development server.

$ python manage.py runserver
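By default the development server listens only on localhost. If you want to reach the dashboard from another machine, you can instead bind it to all interfaces (and, if needed, add the address you use to ALLOWED_HOSTS in pydash/settings.py):

$ python manage.py runserver 0.0.0.0:8000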

10. Next, open your web browser and go to http://127.0.0.1:8000/ to get the web dashboard login interface. Enter the superuser name and password you created while setting up the database in step 8 and click Sign In.

pyDash Login Interface

11. Once you log in to the pydash main interface, you will see a section for monitoring general system info, CPU, memory and disk usage, together with the system load average.

Simply scroll down to view more sections.

pyDash Server Performance Overview

12. Next is a screenshot of pydash showing a section for keeping track of interfaces, IP addresses, Internet traffic, disk reads/writes, online users and netstats.

pyDash Network Overview

13. Next is a screenshot of the pydash main interface showing a section to keep an eye on active processes on the system.

pyDash Active Linux Processes

Cloud Foundry

Cloud Foundry provides automatic deployment of applications.

It includes BOSH, which communicates with the infrastructure, performs health monitoring and ensures that the systems are up and running.

Docker takes a small memory hit for some accounting, but this can be disabled.

Security hardening for nginx (reverse proxy)

This document can be used when enhancing the security of your nginx server.

Features provided in Security Hardening for nginx server

  • In this security hardening we first update the nginx server. The newer version has SPDY 3.1 support, authentication via subrequests, SSL session ticket support, IPv6 support for DNS, and PROXY protocol support. It also includes the following features: error logging, cache revalidation directives, SMTP pipelining, buffering options for FastCGI, improved support for MP4 streaming, and extended handling of byte-range requests for streaming and caching.

  • We also remove SSL support and enable only TLS. It used to be believed that TLS v1.0 was marginally more secure than SSL v3.0, its predecessor.  However, SSL v3.0 is getting very old and recent developments, such as the POODLE vulnerability, have shown that SSL v3.0 is now completely insecure. Subsequent versions of TLS — v1.1 and v1.2 — are significantly more secure and fix many vulnerabilities present in SSL v3.0 and TLS v1.0.  For example, they mitigate the BEAST attack, which can completely break web sites running on the older SSL v3.0 and TLS v1.0 protocols. The newer TLS versions, if properly configured, prevent the BEAST and other attack vectors and provide many stronger ciphers and encryption methods.

  • We have also added SPDY support. SPDY is a two-layer HTTP-compatible protocol. The “upper” layer provides HTTP’s request and response semantics, while the “lower” layer manages encoding and sending the data. The lower layer of SPDY provides a number of benefits over standard HTTP. Namely, it sends fewer packets, uses fewer TCP connections and uses the TCP connections it makes more effectively. A single SPDY session allows concurrent HTTP requests to run over a single TCP/IP session. SPDY cuts down on the number of TCP handshakes required, and it cuts down on packet loss and bufferbloat.

  • We have also added HTTP Strict Transport Security (HSTS) support. It prevents sslstrip-like attacks and provides zero tolerance for certificate problems.
  • We have also added Diffie-Hellman key support. Diffie-Hellman key exchange, also called exponential key exchange, is a method of digital encryption that uses numbers raised to specific powers to produce decryption keys on the basis of components that are never directly transmitted. That makes it a very secure key exchange and prevents man-in-the-middle attacks.
 

Step-by-step guide

Following are the steps for security hardening of nginx server.

  1. Firstly, you will need to update the existing nginx server.
    • Login to your nginx server as root.
    • Check for the existing nginx version with the command nginx -v. The version should be > 1.5.
    • If your version is > 1.5 then go to step 2. If your version is < 1.5 then execute the following commands.
    • Check if there is a file named nginx.repo in /etc/yum.repos.d/.
    • cd /etc/yum.repos.d
    • vi nginx.repo
    • Enter the following lines into the file then save it.
      [nginx]
      name=nginx repo
      baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
      gpgcheck=0
      enabled=1
    • Then execute the following command: yum update nginx. This will update your nginx server to the latest version.

2. The following changes need to be made in all of the .conf files of nginx. The .conf files are present in the /etc/nginx/conf.d/ folder.

    • In the server block for port 443 disable the SSLv2 and SSLv3 protocols. To achieve this replace the line
      ssl_protocols SSLv2 SSLv3 TLSv1 with ssl_protocols TLSv1 TLSv1.1 TLSv1.2.
      SSLv2 and SSLv3 are considered insecure, so we have to disable them and use TLS in their place.
    • Next we have to add the SPDY protocol configuration. SPDY (pronounced speedy) is an open networking protocol developed primarily at 
      Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security.
      To achieve this add the following lines before the location block in the server block.

      spdy_keepalive_timeout 300;
      spdy_headers_comp 9;
    • Below the SPDY configuration add the following lines for HTTP Strict Transport Security (HSTS), a web security policy mechanism 
      which helps to protect websites against protocol downgrade attacks and cookie hijacking.
      add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
      add_header X-Frame-Options DENY;
      add_header X-Content-Type-Options nosniff;
    • Now we have to add the Diffie-Hellman parameters to our conf files. Diffie-Hellman is an algorithm used to establish a shared secret between two parties. 
      It is primarily used as a method of exchanging cryptography keys for use in symmetric encryption algorithms like AES.
      For that, check if openssl is installed on the nginx server. If not, install it with yum install openssl. 
      1. After that, change into the certificates directory: cd /etc/nginx/certs/
      2. Then execute the following command: openssl dhparam -out dhparams.pem 1024 (2048 gives stronger parameters if you can afford the longer generation time). This will generate a dhparams.pem file in your /etc/nginx/certs/ directory.
      3. Now in your conf file comment out the line which says ssl_ciphers and add the following line:
         ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
      4. After this, ensure you have the configuration ssl_prefer_server_ciphers on; and then add the following line: ssl_dhparam /etc/nginx/certs/dhparams.pem;

After this, save your .conf files and execute the following command: service nginx restart. The security of your nginx server will now be improved.
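For reference, the directives from the steps above fit together roughly as in the sketch below. This is only an illustrative skeleton, not a drop-in config: the server_name, certificate paths and proxied upstream are placeholders, the listen line assumes your nginx build includes the SPDY module, and the long ssl_ciphers list from step 3 is abbreviated here.

server {
    listen 443 ssl spdy;
    server_name example.com;                          # placeholder

    ssl_certificate     /etc/nginx/certs/server.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/server.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:...';    # full list from step 3
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/certs/dhparams.pem;

    spdy_keepalive_timeout 300;
    spdy_headers_comp 9;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        proxy_pass http://backend;                    # placeholder upstream
    }
}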

Creating custom YUM repo

This page will be useful when creating your own/ custom yum repo and using it for your yum installations.

The standard RPM package management tool in Fedora, Red Hat Enterprise Linux, and CentOS is the yum package manager. Yum works quite well, if a little bit slower than other RPM package managers like apt4rpm and urpmi, but it is solid and handles dependencies extremely well. Using it with official and third-party repositories is a breeze to set up, but what if you want to use your own repository? Perhaps you manage a large computer lab or network and need to have — or want to have — certain packages available to these systems that you maintain in-house. Or perhaps you simply want to set up your own repository to share a few RPM packages.

The following are the steps to create your own repo.

Step-by-step guide

Follow these steps on server machine.

  1. Creating your own yum repository is very simple and very straightforward. You need the createrepo tool, which can be found in the createrepo package, so to install it, execute as root: yum install createrepo
  2. Install Apache HTTPD. Apache will act as a server to the clients on which you want to install the packages. You may need to respond to some prompts to confirm you want to complete the installation.

    yum install httpd

  3. Creating a custom repository requires RPM or DEB files. If you already have these files, proceed to the next step. Otherwise, download the appropriate versions of the rpms and their dependencies.

  4. Move your files to the web server directory and modify file permissions. For example, you might use the following commands:

    mv /tmp/repo/* /var/www/html/repos/centos/6/7

  5. After moving the files, visit http://<server-ip>:80/repos/centos/6/7 in a browser to check that they are being served.
  6. Add a user and password to your apache directory. Execute the following command.
    htpasswd -c /etc/repo repouser
    where repouser is your username; you will be prompted for the password. /etc/repo can be any location where you want to store your password file, which will be created by this command.
  7. Add authentication to your apache to add security. Add the following in your /etc/httpd/conf/httpd.conf file, wrapping the directives in a Directory block for your repo path:

    <Directory "/var/www/html/repos">
        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/repo
        Require valid-user
    </Directory>

    /var/www/html/repos is the path of your repo where you have your rpms stored. /etc/repo is the path of your password file.

  8. Execute the following command to create your repo.
    createrepo  /var/www/html/repos/centos/6/7
  9. Add the port 80 in firewall to accept incoming requests to Apache.
    sudo lokkit -p 80:tcp
  10. Start your apache server: service httpd start
    Visit http://<server-ip>:80/repos/centos/6/7 in a browser to verify that you see an index of the files (a quick command-line check is shown right after this list).
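As that quick check from any machine (using the placeholder hostname and the repouser account created above):

curl -I http://<server-ip>/repos/centos/6/7/                                  # should return 401 without credentials
curl -u repouser http://<server-ip>/repos/centos/6/7/repodata/repomd.xml      # prompts for the password, then prints the repo metadata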

Follow these steps on client machine.

  1. Rename (or delete) the existing .repo files in /etc/yum.repos.d/ on the client, for example to xyz.repo.bak
  2. Create files on client systems with the following information and format, where hostname is the name of the web server you created in the previous step.
    vi /etc/yum.repos.d/custom.repo
    The file name can be anything you want as long as it ends in .repo; your repo will be referred to from this file.
  3. Add the following data in the newly created file or you can add it in /etc/yum.conf
    [custom]
    name=CentOS-$releasever - Base
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
    enabled=1

    #released updates
    [custom-updates]
    name=CentOS-$releasever - Updates
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #additional packages that may be useful
    [custom-extras]
    name=CentOS-$releasever - Extras
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #additional packages that extend functionality of existing packages
    [custom-centosplus]
    name=CentOS-$releasever - Plus
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #contrib - packages by CentOS users
    [custom-contrib]
    name=CentOS-$releasever - Contrib
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    Here <username> will be repouser (or whichever user you added), <password> is the password you entered, and <hostname> is the address of the web server, for example repouser:PassW0rd@172.16.120.50.

  4. Execute the following commands to update yum.
    yum clean all
    yum update
  5. To use your custom repo execute the following command:
    yum install packagename
    For ex. yum install tomcat6
    It will look for the packages in the newly created repo and install from there.

In case you want to add custom group to your yum repo, use the following steps:

  1. Change directory to where your repo is (i.e. where your rpms are stored). For example, if the repo is at /var/www/html/repos/centos/6/7 then

    cd  /var/www/html/repos/centos/6/7
  2. Suppose your groupinstall command is sudo yum groupinstall test-package, the mandatory packages required are yum and glibc, and the optional package is rpm:

    yum-groups-manager -n "test-package" --id=test-package --save=test-package.xml --mandatory yum glibc --optional rpm

  3. A file named test-package.xml will be created in your current working directory. Execute the following command.
    sudo createrepo -g /path/to/xml/file /path/to/repo
    Example: sudo createrepo -g /var/www/html/repos/centos/6/7/test-package.xml /var/www/html/repos/centos/6/7

Hashicorp Vault

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

The key features of Vault are:

1) Secure Secret Storage

2) Dynamic Secrets

3) Data Encryption

4) Leasing and Renewal

5) Revocation

 

Terms used in Vault

 

  • Storage Backend – A storage backend is responsible for durable storage of encrypted data. Backends are not trusted by Vault and are only expected to provide durability. The storage backend is configured when starting the Vault server.
  • Barrier – The barrier is cryptographic steel and concrete around the Vault. All data that flows between Vault and the Storage Backend passes through the barrier.
  • Secret Backend – A secret backend is responsible for managing secrets.
  • Audit Backend – An audit backend is responsible for managing audit logs. Every request to Vault and response from Vault goes through the configured audit backends.
  • Credential Backend – A credential backend is used to authenticate users or applications which are connecting to Vault. Once authenticated, the backend returns the list of applicable policies which should be applied. Vault takes an authenticated user and returns a client token that can be used for future requests.
  • Client Token – A client token is conceptually similar to a session cookie on a web site. Once a user authenticates, Vault returns a client token which is used for future requests. The token is used by Vault to verify the identity of the client and to enforce the applicable ACL policies. This token is passed via HTTP headers.
  • Secret – A secret is the term for anything returned by Vault which contains confidential or cryptographic material. Not everything returned by Vault is a secret, for example system configuration, status information, or backend policies are not considered Secrets.
  • Server – Vault depends on a long-running instance which operates as a server. The Vault server provides an API which clients interact with and manages the interaction between all the backends, ACL enforcement, and secret lease revocation. Having a server based architecture decouples clients from the security keys and policies, enables centralized audit logging and simplifies administration for operators.

Vault Architecture

A very high level overview of Vault looks like this:

 

There is a clear separation of components that are inside or outside of the security barrier. Only the storage backend and the HTTP API are outside, all other components are inside the barrier.

 

The storage backend is untrusted and is used to durably store encrypted data. When the Vault server is started, it must be provided with a storage backend so that data is available across restarts. The HTTP API similarly must be started by the Vault server on start so that clients can interact with it.

Once started, the Vault is in a sealed state. Before any operation can be performed on the Vault it must be unsealed. This is done by providing the unseal keys. When the Vault is initialized it generates an encryption key which is used to protect all the data. That key is protected by a master key. By default, Vault uses a technique known as Shamir’s secret sharing algorithm to split the master key into 5 shares, any 3 of which are required to reconstruct the master key.

Keys

The number of shares and the minimum threshold required can both be specified. Shamir’s technique can be disabled, and the master key used directly for unsealing. Once Vault retrieves the encryption key, it is able to decrypt the data in the storage backend, and enters the unsealed state. Once unsealed, Vault loads all of the configured audit, credential and secret backends.

The configuration of those backends must be stored in Vault since they are security sensitive. Only users with the correct permissions should be able to modify them, meaning they cannot be specified outside of the barrier. By storing them in Vault, any changes to them are protected by the ACL system and tracked by audit logs.

After the Vault is unsealed, requests can be processed from the HTTP API to the Core. The core is used to manage the flow of requests through the system, enforce ACLs, and ensure audit logging is done.

When a client first connects to Vault, it needs to authenticate. Vault provides configurable credential backends providing flexibility in the authentication mechanism used. Human friendly mechanisms such as username/password or GitHub might be used for operators, while applications may use public/private keys or tokens to authenticate. An authentication request flows through core and into a credential backend, which determines if the request is valid and returns a list of associated policies.

Policies are just a named ACL rule. For example, the “root” policy is built-in and permits access to all resources. You can create any number of named policies with fine-grained control over paths. Vault operates exclusively in a whitelist mode, meaning that unless access is explicitly granted via a policy, the action is not allowed. Since a user may have multiple policies associated, an action is allowed if any policy permits it. Policies are stored and managed by an internal policy store. This internal store is manipulated through the system backend, which is always mounted at sys/.

Once authentication takes place and a credential backend provides a set of applicable policies, a new client token is generated and managed by the token store. This client token is sent back to the client, and is used to make future requests. This is similar to a cookie sent by a website after a user logs in. The client token may have a lease associated with it depending on the credential backend configuration. This means the client token may need to be periodically renewed to avoid invalidation.

Once authenticated, requests are made providing the client token. The token is used to verify the client is authorized and to load the relevant policies. The policies are used to authorize the client request. The request is then routed to the secret backend, which is processed depending on the type of backend. If the backend returns a secret, the core registers it with the expiration manager and attaches a lease ID. The lease ID is used by clients to renew or revoke their secret. If a client allows the lease to expire, the expiration manager automatically revokes the secret.

The core handles logging of requests and responses to the audit broker, which fans the request out to all the configured audit backends. Outside of the request flow, the core performs certain background activity. Lease management is critical, as it allows expired client tokens or secrets to be revoked automatically. Additionally, Vault handles certain partial failure cases by using write ahead logging with a rollback manager. This is managed transparently within the core and is not user visible.

Steps to Install Vault


1) Installing Vault is simple. There are two approaches to installing Vault: downloading a precompiled binary for your system, or installing from source. We will use the precompiled binary format. To install the precompiled binary, download the appropriate package for your system. 

2) You can use the following command as well: wget https://releases.hashicorp.com/vault/0.6.0/vault_0.6.0_linux_amd64.zip

Unzip by the command unzip vault_0.6.0_linux_amd64.zip

You will have a binary called vault in it. 

3) Once the zip is downloaded, unzip it into any directory. The vault binary inside is all that is necessary to run Vault . Any additional files, if any, aren’t required to run Vault.

Copy the binary to anywhere on your system. If you intend to access it from the command-line, make sure to place it somewhere on your PATH.

4) Add the path of your vault binary to your .bash_profile file in your home directory.

Execute the following to do it: vi ~/.bash_profile

export PATH=$PATH:/home/compose/vault  (If your vault binary is in /home/compose/vault/ directory)

Alternatively you can also add the unzipped vault binary file in /usr/bin so that you will be able to access vault as a command.

Verifying the Installation

To verify Vault is properly installed, execute the vault binary on your system. You should see help output. If you are executing it from the command line, make sure it is on your PATH or you may get an error about vault not being found.

Starting and configuring vault

1) Vault operates as a client/server application. The Vault server is the only piece of the Vault architecture that interacts with the data storage and backends. All operations done via the Vault CLI interact with the server over a TLS connection.

2) Before starting vault you will need to set the environment variable VAULT_ADDR. To set it, execute the following command: export VAULT_ADDR='http://127.0.0.1:8200'. 8200 is the default port for vault. You can set this environment variable permanently across all sessions by adding the following line to /etc/environment:   VAULT_ADDR='http://127.0.0.1:8200'

3) The dev server is a built-in flag to start a pre-configured server that is not very secure but useful for playing with Vault locally. 

 

To start the Vault dev server, run vault server -dev

 

$ vault server -dev
WARNING: Dev mode is enabled!

In this mode, Vault is completely in-memory and unsealed.
Vault is configured to only have a single unseal key. The root
token has already been authenticated with the CLI, so you can
immediately begin using the Vault CLI.

The only step you need to take is to set the following
environment variable since Vault will be talking without TLS:

    export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are reproduced below in case you
want to seal/unseal the Vault or play with authentication.

Unseal Key: 2252546b1a8551e8411502501719c4b3
Root Token: 79bd8011-af5a-f147-557e-c58be4fedf6c

==> Vault server configuration:

         Log Level: info
           Backend: inmem
        Listener 1: tcp (addr: "127.0.0.1:8200", tls: "disabled")

...

 

You should see output similar to that above. Vault does not fork, so it will continue to run in the foreground; to connect to it with later commands, open another shell.

 

As you can see, when you start a dev server, Vault warns you loudly. The dev server stores all its data in-memory (but still encrypted), listens on localhost without TLS, and automatically unseals and shows you the unseal key and root access key. The important thing about the dev server is that it is meant for development only. Do not run the dev server in production. Even if it were run in production, it wouldn’t be very useful since it stores data in-memory and every restart would clear all your secrets. You can practise vault read/write commands here. We won’t be using vault in dev mode as we want our data to be stored permanently.

In the next steps you will see how to start and configure a durable vault server.

4) Now you need to create an HCL file to hold the Vault configuration.

HCL (HashiCorp Configuration Language) is a configuration language built by HashiCorp. The goal of HCL is to build a structured configuration language that is both human and machine friendly for use with command-line tools, but specifically targeted towards DevOps tools, servers, etc. HCL is also fully JSON compatible. That is, JSON can be used as completely valid input to a system expecting HCL. This helps make systems interoperable with other systems. HCL is heavily inspired by libucl, nginx configuration, and other similar formats. You can find more details about HCL at https://github.com/hashicorp/hcl

5) You will need to specify a physical backend for the vault. There are various options for the physical backend.

The only physical backends actively maintained by HashiCorp are consul, inmem, and file.

  • consul – Store data within Consul. This backend supports HA. It is the most recommended backend for Vault and has been shown to work at high scale under heavy load.
  • etcd – Store data within etcd. This backend supports HA. This is a community-supported backend.
  • zookeeper – Store data within Zookeeper. This backend supports HA. This is a community-supported backend.
  • dynamodb – Store data in a DynamoDB table. This backend supports HA. This is a community-supported backend.
  • s3 – Store data within an Amazon S3 bucket. This backend does not support HA. This is a community-supported backend.
  • azure – Store data in an Azure Storage container. This backend does not support HA. This is a community-supported backend.
  • swift – Store data within an OpenStack Swift container. This backend does not support HA. This is a community-supported backend.
  • mysql – Store data within MySQL. This backend does not support HA. This is a community-supported backend.
  • postgresql – Store data within PostgreSQL. This backend does not support HA. This is a community-supported backend.
  • inmem – Store data in-memory. This is only really useful for development and experimentation. Data is lost whenever Vault is restarted.
  • file – Store data on the filesystem using a directory structure. This backend does not support HA.

Each of these backends has different options for configuration. For simplicity we will be using the file backend here. A sample HCL file is shown below.

You can save the following file with any name but with a .hcl extension, for example config.hcl, which we will store in the /home/compose/data/ folder.

backend "file" {
  path = "/home/compose/data"
}

listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

 

backend "file" specifies that the data produced by the vault will be stored as files on disk.

path specifies the folder the files will be stored in; it can be any directory.

listener will be tcp.

address specifies which machines will be able to access vault. 127.0.0.1:8200 will accept requests only from localhost, while 0.0.0.0:8200 (or the machine's public IP) will allow access to vault from anywhere.

tls_disable should be 1 if you are not providing any SSL certificates for authentication from the client.

This is the basic file which you can use.

6) Start your vault server with the following command:

vault server -config=/home/compose/data/config.hcl

Point -config to the HCL config file you just created. You need to run this command either as root or with sudo. All the following commands can be run by either root or compose without sudo.

7) If you have started your vault server for the first time, you will need to initialize it. Run the following command:

vault init. This will give output like the following:

 

Unseal Key 1: a33a2812dskfybjgdbgy85a7d6da375bc9bc6c137e65778676f97b3f1482b26401
Unseal Key 2: fa91a7128dfd30f7c500ce1ffwefgtnghjj2871f3519773ada9d04bbcc3620ad02
Unseal Key 3: bb8d5e6d9372c3331044ffe678a4356912035209d6fca68f542f52cf2f3d5e0203
Unseal Key 4: 8c5977a14f8da814fa2f204ac5c2160927cdcf354fhfghfgjbgdbbb0347e4f8b04
Unseal Key 5: cd458edecf025bd02f6b11b3e43341dgdgewtea77756fagh6dc0ba4d775d312405
Initial Root Token: f15db23h-eae6-974f-45b7-se47u52d96ea
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.

Save these someplace safe as you will need them every time you unseal your vault server to write or access data. By default you will need to enter any three of the five unseal keys to unseal the vault completely.

You can refer to the architecture described above to understand how the keys work.

But if you want to change the default and use only one key, you can initialize the vault with vault init -key-shares=1 -key-threshold=1, which will generate only one unseal key.

8) After initializing your vault, the next step is to unseal it; otherwise you won’t be able to perform any operations on the vault. Execute the following command:

vault unseal. You will be asked for an unseal key; enter any one of the unseal keys generated during initialization. By default vault needs three keys out of five to be completely unsealed.

The command output will show Unseal Progress: 1, which means your first key was correct. The Unseal Progress count will increase every time you execute unseal and enter a valid key.

So you will need to repeat the above step a total of three times, entering a different key each time.


At the end of the third time your vault will be completely unsealed.
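A minimal sketch of the whole unseal sequence, assuming the default threshold of three keys (the key values are placeholders for keys from your own vault init output):

vault unseal <unseal-key-1>    # Unseal Progress: 1
vault unseal <unseal-key-2>    # Unseal Progress: 2
vault unseal <unseal-key-3>    # Sealed: false
vault status                   # confirms the vault is now unsealed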

9) You will now need to log in to the vault server to read from or write to the vault. Execute the following command to log in:

vault auth <root-token>

where <root-token> is the Initial Root Token given to you when you initialised the vault (it is printed after the five unseal keys). This gives you root access to vault to perform any activities.

Vault Commands

1) vault status to get the status of vault whether it is running or not.

2) vault write secret/hello excited=yes to write a key-value pair into the vault. where secret/hello is path to access your key. “excited” is your key-name and “yes” is the value. Key and value can be anything.

3) vault read secret/hello to read the value of the key you just wrote.

4) vault write secret/hello excited=very-much to change/update the value of your key

5) vault write secret/hello excited=yes city=Pune to add multiple keys; you can just separate them with a space.

6) vault write secret/hello abc=xyz will remove the existing keys (excited and city) and create a new one (abc).

7) vault read -format=json secret/hello to return the keys and values in JSON.

8) vault delete secret/hello to delete your path.

9) If you don’t want your paths to start with secret/, you can mount another backend such as generic.

Execute vault mount generic. Then you will be able to add paths like generic/hello instead of secret/hello. You can get more info on secret backends here: https://www.vaultproject.io/docs/secrets/index.html

10) vault mounts to see the list of mounts

11) vault write generic/hello world=Today to write to newly mounted secret backend.

12) vault read generic/hello to read it.

13) vault token-create will create a token which you can give to a user so that they can log in to the vault. This will add a new user to your server.

The new user can log in with vault auth <token>. You can renew or revoke that token with vault token-renew <token> or vault token-revoke <token>.

To add a user with username and password and not with token use the following commands.

14) vault auth-enable userpass

vault auth -methods               //This will display the authentication methods; you should see userpass among them

vault write auth/userpass/users/user1 password=Canopy1! policies=root //Adds the username user1 with password Canopy1! and attaches the root policy to it

vault auth -method=userpass username=compose password=Canopy1!    //the user can log in with this

To add read-only policy to a user execute the following commands

15) Create a file with extension .hcl. Here I have created read-only.hcl

path "secret/*" {
  policy = "read"
}

path "auth/token/lookup-self" {
  policy = "read"
}

vault policy-write read-policy read-only.hcl //to add the policy named read-policy from file read-only.hcl

vault policies  //to display list of policies

vault policies read-policy //to display newly created policy

vault write auth/userpass/users/read-user password=Canopy1! policies=read-policy   //to add user with that policy

Now if the new user logs in, they won’t be able to write anything into the vault, only read from it.
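A quick way to verify the policy behaves as intended, reusing the example user created above:

vault auth -method=userpass username=read-user password=Canopy1!
vault read secret/hello                  # allowed by the read policy
vault write secret/hello excited=yes     # should fail with a permission denied error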

16) vault audit-enable file file_path=/home/compose/data/vault_audit.log //This will add the logs in vault_audit.log file.

Configure vault and AMP

You can add the following lines in brooklyn.properties to access the vault key-values

brooklyn.external.vault=org.apache.brooklyn.core.config.external.vault.VaultUserPassExternalConfigSupplier
brooklyn.external.vault.username=user1 //Login username you created
brooklyn.external.vault.password=Canopy1!  //Login password

brooklyn.external.vault.endpoint=http://172.16.120.159:8200/   //IP address of your vault server
brooklyn.external.vault.path=secret/CP0000/AWS     //Path to your secrets

brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("vault", "identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("vault", "credential")

This will make AMP access your creds from vault.
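For this to work, the path referenced above must already contain the AWS credentials under the key names AMP looks up ("identity" and "credential"). A sketch of writing them, with placeholder values for the actual keys:

vault write secret/CP0000/AWS identity=<aws-access-key-id> credential=<aws-secret-access-key>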

Backup and recovery

All of the required vault data is present in the folder you specified as the path variable in your config.hcl, here /home/compose/data. So just take a backup of that folder and copy it to the recovered machine. A prerequisite is that the vault binary must be present on that machine.

Backup can be taken via cronjob as

0 0 * * *  rsync -avz --delete root@vault:/home/compose/data /backup/vault/
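Restoring is essentially the reverse. A minimal sketch, assuming the same paths as above and that the vault binary and config.hcl are already in place on the recovered machine (<recovered-host> is a placeholder):

rsync -avz /backup/vault/data/ root@<recovered-host>:/home/compose/data/
vault server -config=/home/compose/data/config.hcl    # run as root or with sudo, as before
vault unseal <unseal-key>                              # repeat with three different keys, as usual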

Upload artifacts to AWS S3

This document can be used when you want to upload files to AWS S3.

Step-by-step guide

Execute the following steps:

  1. Install ruby with the following commands on the data machine where the backup will be stored
    gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    sudo \curl -L https://get.rvm.io | bash -s stable --ruby
    source /home/compose/.rvm/scripts/rvm
    rvm list known   #####This command will show available ruby versions
    You can install the version of your choice by the following command:
    rvm install ruby 2.3.0  ###Where 2.3.0 is the ruby version to be installed
    You can install the latest ruby version by the following command:
    rvm install ruby --latest
    Check the version of ruby installed by:
    ruby -v
  2. Check if ruby gem is present in your machine: gem -v
  3. If not present, install it with sudo yum install rubygems
  4. Then install aws-sdk:  gem install aws-sdk
  5. Add the code as below in a file upload-to-s3.rb:
    # Note: Please replace the keys below with your production settings
    # 1. access_key_id
    # 2. secret_access_key
    # 3. region
    # 4. buckets[name] is the actual bucket name in S3

    require 'aws-sdk'

    def upload(file_name, destination, directory, bucket)
      destination_file_name = destination

      puts "Creating #{destination_file_name} file.... "

      # Zip the cloudsoft persisted folder
      `tar -cvzf #{destination_file_name} #{directory}`

      puts "Created #{destination_file_name} file... "

      puts "uploading #{destination} file to aws..."
      ENV['AWS_ACCESS_KEY_ID'] = 'Your key here'
      ENV['AWS_SECRET_ACCESS_KEY'] = 'Your secret here'
      ENV['AWS_REGION'] = 'Your region here'

      s3 = Aws::S3::Client.new

      # NOTE: replace 'bucket_name' with your actual S3 bucket (or use the bucket argument)
      File.open(destination_file_name, 'rb') do |file|
        s3.put_object(bucket: 'bucket_name', key: file_name, body: file)
      end
      #@s3 = Aws::S3::Client.new(aws_credentials)
      #@s3_bucket = @s3.buckets[bucket]
      #@s3_bucket.objects[file_name].write(file: destination_file_name)

      puts "uploaded #{destination} file to aws..."

      puts "deleting #{destination} file..."
      `rm -rf #{destination}`
      puts "deleted #{destination} file..."
    end

    # Removes all existing .tar.gz files from the backup folders
    def clear(nfsLoc)
      nfsLoc.each_pair do |key, value|
        puts "deleting #{key} file..."
        Dir["#{key}/*.tar.gz"].each do |path|
          puts path
          `rm -rf #{path}`
        end

        puts "deleted #{key} file..."
      end
    end

    # Walks every folder under the backup location and uploads it as a tarball
    def start()
      nfsLoc = { '/backup_dir' => 'bucket_name/data' }

      nfsLoc.each_pair do |key, value|
        puts "#{key} #{value}"

        Dir.glob("#{key}/*") do |dname|
          filename = '%s.%s' % [dname, 'tar.gz']

          file = File.basename(filename)
          folderName = File.basename(dname)
          bucket = '%s/%s' % ["#{value}", folderName]

          puts "..... Uploading started for %s file to AWS S3 ....." % [file]
          t = '%s/' % dname
          puts upload(file, filename, t, bucket)

          puts "..... Uploading finished for %s file to AWS S3 ....." % [file]
        end
      end
    end

    start()

  6. After that execute the following:
    ruby upload-to-s3.rb
  7. If adding this to a Jenkins job, add the following line in the pre-build script:
    source ~/.rvm/scripts/rvm

Terraform

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

The key features of Terraform are:

  • Infrastructure as Code: Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
  • Execution Plans: Terraform has a “planning” step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
  • Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
  • Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

Why terraform:

Terraform provides a flexible abstraction of resources and providers. This model allows for representing everything from physical hardware, virtual machines, and containers, to email and DNS providers. Because of this flexibility, Terraform can be used to solve many different problems. This means there are a number of existing tools that overlap with the capabilities of Terraform. 

 

Multi-Tier Applications

 

A very common pattern is the N-tier architecture. The most common 2-tier architecture is a pool of web servers that use a database tier. Additional tiers get added for API servers, caching servers, routing meshes, etc. This pattern is used because the tiers can be scaled independently and provide a separation of concerns.

 

Terraform is an ideal tool for building and managing these infrastructures. Each tier can be described as a collection of resources, and the dependencies between each tier are handled automatically; Terraform will ensure the database tier is available before the web servers are started and that the load balancers are aware of the web nodes. Each tier can then be scaled easily using Terraform by modifying a single count configuration value. Because the creation and provisioning of a resource is codified and automated, elastically scaling with load becomes trivial.
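As an illustration of that last point, here is a sketch of a web tier whose size is controlled by a single count value; the AMI and instance type are simply the example values used later in this guide:

resource "aws_instance" "web" {
  count         = 3                # change this one value to scale the tier
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}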

 

Self-Service Clusters

At a certain organizational size, it becomes very challenging for a centralized operations team to manage a large and growing infrastructure. Instead it becomes more attractive to make “self-serve” infrastructure, allowing product teams to manage their own infrastructure using tooling provided by the central operations team.

Using Terraform, the knowledge of how to build and scale a service can be codified in a configuration. Terraform configurations can be shared within an organization enabling customer teams to use the configuration as a black box and use Terraform as a tool to manage their services.

Terraform basics:

Terraform must first be installed on your machine. Terraform is distributed as a binary package for all supported platforms and architecture. This page will not cover how to compile Terraform from source.

Installing Terraform

To install Terraform, find the appropriate package for your system and download it. Terraform is packaged as a zip archive.

After downloading Terraform, unzip the package into a directory where Terraform will be installed. The directory will contain a binary program terraform. The final step is to make sure the directory you installed Terraform to is on the PATH. See this page for instructions on setting the PATH on Linux and Mac. This page contains instructions for setting the PATH on Windows.

Example for Linux/Mac – Type the following into your terminal:

PATH=/usr/local/terraform/bin:/home/your-user-name/terraform:$PATH

Example for Windows – Type the following into Powershell:

[Environment]::SetEnvironmentVariable("PATH", $env:PATH + ({;C:\terraform},{C:\terraform})[$env:PATH[-1] -eq ';'], "User")

Verifying the Installation

After installing Terraform, verify the installation worked by opening a new terminal session and checking that terraform is available. By executing terraform you should see help output similar to that below:

$ terraform
Usage: terraform [--version] [--help]  [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
    apply              Builds or changes infrastructure
    destroy            Destroy Terraform-managed infrastructure
    fmt                Rewrites config files to canonical format
    get                Download and install modules for the configuration
    graph              Create a visual graph of Terraform resources
    import             Import existing infrastructure into Terraform
    init               Initializes Terraform configuration from a module
    output             Read an output from a state file
    plan               Generate and show an execution plan
    push               Upload this Terraform module to Atlas to run
    refresh            Update local state file against real resources
    remote             Configure remote state storage
    show               Inspect Terraform state or plan
    taint              Manually mark a resource for recreation
    untaint            Manually unmark a resource as tainted
    validate           Validates the Terraform files
    version            Prints the Terraform version

All other commands:
    state              Advanced state management

If you get an error that terraform could not be found, then your PATH environment variable was not setup properly. Please go back and ensure that your PATH variable contains the directory where Terraform was installed.

Otherwise, Terraform is installed and ready to go!

 

Configuration

 

The set of files used to describe infrastructure in Terraform is simply known as a Terraform configuration. We’re going to write our first configuration now to launch a single AWS EC2 instance.

 

The format of the configuration files is documented here. Configuration files can also be JSON, but we recommend only using JSON when the configuration is generated by a machine.

 

The entire configuration is shown below. We’ll go over each part after. Save the contents to a file named example.tf. Verify that there are no other *.tf files in your directory, since Terraform loads all of them.

 

provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

Replace the ACCESS_KEY_HERE and SECRET_KEY_HERE with your AWS access key and secret key, available from this page. We’re hardcoding them for now, but will extract these into variables later in the getting started guide.

This is a complete configuration that Terraform is ready to apply. The general structure should be intuitive and straightforward.

The provider block is used to configure the named provider, in our case “aws.” A provider is responsible for creating and managing resources. Multiple provider blocks can exist if a Terraform configuration is composed of multiple providers, which is a common situation.

The resource block defines a resource that exists within the infrastructure. A resource might be a physical component such as an EC2 instance, or it can be a logical resource such as a Heroku application.

The resource block has two strings before opening the block: the resource type and the resource name. In our example, the resource type is “aws_instance” and the name is “example.” The prefix of the type maps to the provider. In our case “aws_instance” automatically tells Terraform that it is managed by the “aws” provider.

Within the resource block itself is configuration for that resource. This is dependent on each resource provider and is fully documented within our providers reference. For our EC2 instance, we specify an AMI for Ubuntu, and request a “t2.micro” instance so we qualify under the free tier.

In the same directory as the example.tf file you created, run terraform plan. You should see output similar to what is copied below. We’ve truncated some of the output to save space.

$ terraform plan
...

+ aws_instance.example
    ami:                      "ami-0d729a60"
    availability_zone:        ""
    ebs_block_device.#:       ""
    ephemeral_block_device.#: ""
    instance_state:           ""
    instance_type:            "t2.micro"
    key_name:                 ""
    placement_group:          ""
    private_dns:              ""
    private_ip:               ""
    public_dns:               ""
    public_ip:                ""
    root_block_device.#:      ""
    security_groups.#:        ""
    source_dest_check:        "true"
    subnet_id:                ""
    tenancy:                  ""
    vpc_security_group_ids.#: ""

terraform plan shows what changes Terraform will apply to your infrastructure given the current state of your infrastructure as well as the current contents of your configuration.

If terraform plan failed with an error, read the error message and fix the error that occurred. At this stage, it is probably a syntax error in the configuration.

The output format is similar to the diff format generated by tools such as Git. The output has a “+” next to “aws_instance.example”, meaning that Terraform will create this resource. Beneath that, it shows the attributes that will be set. When the value displayed is <computed>, it means that the value won’t be known until the resource is created.

Apply

The plan looks good, our configuration appears valid, so it’s time to create real resources. Run terraform apply in the same directory as your example.tf, and watch it go! It will take a few minutes since Terraform waits for the EC2 instance to become available.

$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-0d729a60"
  instance_type:            "" => "t2.micro"
  [...]

aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

...

Done! You can go to the AWS console to prove to yourself that the EC2 instance has been created.

Terraform also puts some state into the terraform.tfstate file by default. This state file is extremely important; it maps various resource metadata to actual resource IDs so that Terraform knows what it is managing. This file must be saved and distributed to anyone who might run Terraform. It is generally recommended to set up remote state when working with Terraform. This means that any potential secrets stored in the state file will not be checked into version control.

You can inspect the state using terraform show:

$ terraform show
aws_instance.example:
  id = i-32cf65a8
  ami = ami-0d729a60
  availability_zone = us-east-1a
  instance_state = running
  instance_type = t2.micro
  private_ip = 172.31.30.244
  public_dns = ec2-52-90-212-55.compute-1.amazonaws.com
  public_ip = 52.90.212.55
  subnet_id = subnet-1497024d
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.3348721628 = sg-67652003

You can see that by creating our resource, we’ve also gathered a lot more metadata about it. This metadata can actually be referenced for other resources or outputs, which will be covered later in the getting started guide.

Provisioning

The EC2 instance we launched at this point is based on the AMI given, but has no additional software installed. If you’re running an image-based infrastructure (perhaps creating images with Packer), then this is all you need.

Destroying your infrastructure is a rare event in production environments. But if you’re using Terraform to spin up multiple environments such as development, test, QA environments, then destroying is a useful action.

Plan

Before destroying our infrastructure, we can use the plan command to see what resources Terraform will destroy.

$ terraform plan -destroy
...

- aws_instance.example

With the -destroy flag, we’re asking Terraform to plan a destroy, where all resources under Terraform management are destroyed. You can use this output to verify exactly what resources Terraform is managing and will destroy.

Destroy

Let’s destroy the infrastructure now:

$ terraform destroy
aws_instance.example: Destroying...

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

...

The terraform destroy command should ask you to verify that you really want to destroy the infrastructure. Terraform only accepts the literal “yes” as an answer as a safety mechanism. Once entered, Terraform will go through and destroy the infrastructure.

Just like with apply, Terraform is smart enough to determine what order things should be destroyed. In our case, we only had one resource, so there wasn’t any ordering necessary. But in more complicated cases with multiple resources, Terraform will destroy in the proper order.

Implicit and Explicit Dependencies

Most dependencies in Terraform are implicit: Terraform is able to infer dependencies based on usage of attributes of other resources.

Using this information, Terraform builds a graph of resources. This tells Terraform not only in what order to create resources, but also what resources can be created in parallel. In our example, since the IP address depended on the EC2 instance, they could not be created in parallel.

Implicit dependencies work well and are usually all you ever need. However, you can also specify explicit dependencies with the depends_on parameter which is available on any resource. For example, we could modify the “aws_eip” resource to the following, which effectively does the same thing and is redundant:

resource "aws_eip" "ip" {
    instance = "${aws_instance.example.id}"
    depends_on = ["aws_instance.example"]
}

If you’re ever unsure about the dependency chain that Terraform is creating, you can use the terraform graph command to view the graph. This command outputs a dot-formatted graph which can be viewed with Graphviz.
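For example, you can pipe the output through Graphviz to render it as an image:

$ terraform graph | dot -Tpng > graph.png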

Non-Dependent Resources

We can now augment the configuration with another EC2 instance. Because this doesn’t rely on any other resource, it can be created in parallel to everything else.

resource "aws_instance" "another" {
  ami           = "ami-13be557e"
  instance_type = "t2.micro"
}

You can view the graph with terraform graph to see that nothing depends on this and that it will likely be created in parallel.

Before moving on, remove this resource from your configuration and run terraform apply again to destroy it. We won’t use the second instance again in the getting started guide.

Defining Variables

Let’s first extract our access key, secret key, and region into a few variables. Create another file variables.tf with the following contents.

Note: the file can be named anything, since Terraform loads all files ending in .tf in a directory.

variable "access_key" {}
variable "secret_key" {}
variable "region" {
  default = "us-east-1"
}

This defines three variables within your Terraform configuration. The first two have empty blocks {}. The third sets a default. If a default value is set, the variable is optional. Otherwise, the variable is required. If you run terraform plan now, Terraform will prompt you for the values for unset string variables.

Using Variables in Configuration

Next, replace the AWS provider configuration with the following:

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

This uses more interpolations, this time prefixed with var. This prefix tells Terraform that you’re accessing variables, and it configures the AWS provider with the values of the variables defined above.

Assigning Variables

There are multiple ways to assign variables. They are listed below in descending order of precedence, which is the order in which Terraform chooses a value when a variable is set in more than one place.

Command-line flags

You can set variables directly on the command line with the -var flag. Any command in Terraform that inspects the configuration accepts this flag, such as apply, plan, and refresh:

$ terraform plan \
  -var 'access_key=foo' \
  -var 'secret_key=bar'
...

Once again, setting variables this way will not save them, and they’ll have to be input repeatedly as commands are executed.

From a file

To persist variable values, create a file and assign variables within this file. Create a file named terraform.tfvars with the following contents:

access_key = "foo"
secret_key = "bar"

If a terraform.tfvars file is present in the current directory, Terraform automatically loads it to populate variables. If the file is named something else, you can use the -var-file flag directly to specify a file. These files are the same syntax as Terraform configuration files. And like Terraform configuration files, these files can also be JSON.
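As a quick illustration of the JSON form mentioned here, the same two values could be written as follows in a JSON file and loaded with -var-file:

{
  "access_key": "foo",
  "secret_key": "bar"
}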

From environment variables

Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_access_key variable can be set to set the access_key variable.
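As a brief sketch, the credentials from the earlier examples could be supplied through the environment before running a command (the values here are placeholders):

$ export TF_VAR_access_key="foo"
$ export TF_VAR_secret_key="bar"
$ terraform plan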

We don’t recommend saving usernames and passwords to version control, but you can create a local secret variables file and use -var-file to load it.

You can use multiple -var-file arguments in a single command, with some checked in to version control and others not checked in. For example:

$ terraform plan \
  -var-file="secret.tfvars" \
  -var-file="production.tfvars"

Note: Environment variables can only populate string-type variables. List and map type variables must be populated via one of the other mechanisms.
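As a short sketch of that restriction (the variable name cidrs and its values are hypothetical), a list-type variable is declared in configuration and then populated through a .tfvars file or a -var flag rather than a TF_VAR_ environment variable:

# variables.tf (or any .tf file)
variable "cidrs" {
    type = "list"
}

# terraform.tfvars
cidrs = ["10.0.0.0/16", "10.1.0.0/16"]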

Creating AWS resources:

  1. VPC
    resource "aws_vpc" "main" {
        cidr_block = "10.0.0.0/16"
    }
  2. Internet Gateway
    resource "aws_internet_gateway" "gw" {
        vpc_id = "${aws_vpc.main.id}"
    
        tags {
            Name = "main"
        }
    }
    

    The following arguments are supported:

    • vpc_id – (Required) The VPC ID to create in.
    • tags – (Optional) A mapping of tags to assign to the resource.
  3. Security Group
    Basic usage

    resource "aws_security_group" "allow_all" {
      name = "allow_all"
      description = "Allow all inbound traffic"
    
      ingress {
          from_port = 0
          to_port = 0
          protocol = "-1"
          cidr_blocks = ["0.0.0.0/0"]
      }
    
      egress {
          from_port = 0
          to_port = 0
          protocol = "-1"
          cidr_blocks = ["0.0.0.0/0"]
          prefix_list_ids = ["pl-12c4e678"]
      }
    }
  4. Subnet
    variable "subnet_id" {}
    
    data "aws_subnet" "selected" {
      id = "${var.subnet_id}"
    }
    
    resource "aws_security_group" "subnet" {
      vpc_id = "${data.aws_subnet.selected.vpc_id}"
    
      ingress {
        cidr_blocks = ["${data.aws_subnet.selected.cidr_block}"]
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
      }
    }
  5. Route Table
    variable "subnet_id" {}
    
    data "aws_route_table" "selected" {
      subnet_id = "${var.subnet_id}"
    }
    
    resource "aws_route" "route" {
      route_table_id = "${data.aws_route_table.selected.id}"
      destination_cidr_block = "10.0.1.0/22"
      vpc_peering_connection_id = "pcx-45ff3dc1"
    }
  6. Route table association
    resource "aws_route_table_association" "a" {
        subnet_id = "${aws_subnet.foo.id}"
        route_table_id = "${aws_route_table.bar.id}"
    }
  7. EC2 instance
    # Create a new instance of the latest Ubuntu 14.04 on a
    # t2.micro node with an AWS Tag naming it "HelloWorld"
    provider "aws" {
        region = "us-west-2"
    }
    
    data "aws_ami" "ubuntu" {
      most_recent = true
      filter {
        name = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
      }
      filter {
        name = "virtualization-type"
        values = ["hvm"]
      }
      owners = ["099720109477"] # Canonical
    }
    
    resource "aws_instance" "web" {
        ami = "${data.aws_ami.ubuntu.id}"
        instance_type = "t2.micro"
        tags {
            Name = "HelloWorld"
        }
    }
  8. EC2 instance elastic IP
    resource "aws_eip" "lb" {
      instance = "${aws_instance.web.id}"
      vpc      = true
    }
  9. Provisioner without bastion

    Many provisioners require access to the remote resource. For example, a provisioner may need to use SSH or WinRM to connect to the resource.

    Terraform uses a number of defaults when connecting to a resource, but these can be overridden using a connection block in either a resource or provisioner. Any connection information provided in a resource will apply to all the provisioners, but it can be scoped to a single provisioner as well. One use case is to have an initial provisioner connect as the root user to setup user accounts, and have subsequent provisioners connect as a user with more limited permissions.

    # Copies the file as the root user using SSH
    provisioner "file" {
        source = "conf/myapp.conf"
        destination = "/etc/myapp.conf"
        connection {
            type = "ssh"
            user = "root"
            password = "${var.root_password}"
        }
    }
    
    # Copies the file as the Administrator user using WinRM
    provisioner "file" {
        source = "conf/myapp.conf"
        destination = "C:/App/myapp.conf"
        connection {
            type = "winrm"
            user = "Administrator"
            password = "${var.admin_password}"
        }
    }
  10. Provisioner with bastion
    provisioner "file" {
        source = "configure.sh"
        destination = "/tmp/configure.sh"
        connection {
            agent = false
            bastion_host = "${var.nat_public_ip}"
            bastion_user = "ec2-user"
            bastion_port = 22
            bastion_private_key = "${file("${var.tf_home}/${var.aws_key_path}")}"
            user = "centos"
            host = "${self.private_ip}"
            private_key = "${file("${var.tf_home}/${var.aws_key_path}")}"
            timeout = "2m"
        }
    }
  11. Remote-exec provisioner
    The remote-exec provisioner invokes a script on a remote resource after it is created. This can be used to run a configuration management tool, bootstrap into a cluster, etc. To invoke a local process, see the local-exec provisioner instead. The remote-exec provisioner supports both ssh and winrm type connections.

    # Run puppet and join our Consul cluster
    resource "aws_instance" "web" {
        ...
        provisioner "remote-exec" {
            inline = [
            "puppet apply",
            "consul join ${aws_instance.web.private_ip}"
            ]
        }
    }
  12. Output
    Outputs are a way to tell Terraform what data is important. This data is output when apply is called, and can be queried using the terraform output command. Let’s define an output to show us the public IP address of the elastic IP address that we create. Add this to any of your *.tf files:

    output "ip" {
        value = "${aws_eip.ip.public_ip}"
    }
    

    This defines an output variable named “ip”. The value field specifies what the value will be, and almost always contains one or more interpolations, since the output data is typically dynamic. In this case, we’re outputting the public_ip attribute of the elastic IP address.

    Multiple output blocks can be defined to specify multiple output variables.

  13. Modules
    Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group. Modules are used to create reusable components, improve organization, and to treat pieces of infrastructure as a black box.
     Create a configuration file with the following contents: 

    provider "aws" {
        access_key = "AWS ACCESS KEY"
        secret_key = "AWS SECRET KEY"
        region = "AWS REGION"
    }
    
    module "consul" {
        source = "github.com/hashicorp/consul/terraform/aws"
    
        key_name = "AWS SSH KEY NAME"
        key_path = "PATH TO ABOVE PRIVATE KEY"
        region = "us-east-1"
        servers = "3"
    }
    

      The module block tells Terraform to create and manage a module. It is very similar to the resource block. It has a logical name — in this case “consul” — and a set of configurations.

     The source configuration is the only mandatory key for modules. It tells Terraform where the module can be retrieved. Terraform automatically downloads and manages modules for you. For our example, we’re getting the module directly from GitHub. Terraform can retrieve modules from a variety of sources including Git, Mercurial, HTTP, and file paths.

     The other configurations are parameters to our module. Please fill them in with the proper values.

     Prior to running any command such as plan with a configuration that uses modules, you’ll have to get the modules. This is done using the get command.

     

    $ terraform get
    ...
    

     This command will download the modules if they haven’t been already. By default, the command will not check for updates, so it is safe (and fast) to run multiple times. You can use the -update flag to check for and download updates.

     

    With the modules downloaded, we can now plan and apply it. If you run terraform plan, you should see output similar to below:

    $ terraform plan
    ...
    + module.consul.aws_instance.server.0
    ...
    + module.consul.aws_instance.server.1
    ...
    + module.consul.aws_instance.server.2
    ...
    + module.consul.aws_security_group.consul
    ...
    Plan: 4 to add, 0 to change, 0 to destroy.
    

    Conceptually, the module is treated like a black box. In the plan, however, Terraform shows each resource the module manages so you can see exactly what the plan will do. If you’d like more compressed plan output, you can specify the -module-depth flag to have Terraform output summaries by module.

    Next, run terraform apply to create the module. Note that the resources this module creates are outside of the AWS free tier, so there will be some cost associated with them.

    $ terraform apply
    ...
    Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
    

    After a few minutes, you’ll have a three-server Consul cluster up and running, without needing any knowledge of how Consul works, how to install it, or how to configure it into a cluster.