GoCD installation on CentOS 7

Installing the GoCD server using the package manager requires root access on the machine. You also need Java 8 installed for the server to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD server.

 

RPM-based distributions (i.e. RedHat/CentOS/Fedora)

The GoCD server RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell —

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk   # at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute

sudo yum install -y go-server

Alternatively, if you have the server RPM downloaded:

sudo yum install -y java-1.8.0-openjdk   # at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-server-${version}.noarch.rpm

Managing the go-server service on Linux

To manage the go-server service, use the following command:

sudo /etc/init.d/go-server [start|stop|status|restart]

Once the installation is complete, the GoCD server will be started and will print out the URL for the dashboard page, which will be http://localhost:8153/go
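If you want to confirm the server is up before opening a browser, you can check the service and probe the dashboard from the shell; this assumes the default port 8153:

sudo /etc/init.d/go-server status
curl -I http://localhost:8153/go    # an HTTP response (200 or a redirect) means the dashboard is answering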

Location of GoCD server files

The GoCD server installs its files in the following locations on your filesystem:

/var/lib/go-server       #contains the binaries and database
/etc/go                  #contains the pipeline configuration files
/var/log/go-server       #contains the server logs
/usr/share/go-server     #contains the start script
/etc/default/go-server   #contains all the environment variables with default values. These variable values can be changed as per requirement.
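For example, JVM heap overrides are typically made in /etc/default/go-server. The exact variable names can differ between GoCD versions, so treat the following as a sketch and check the comments in the file on your own installation:

SERVER_MEM=512m        # initial JVM heap size (illustrative value)
SERVER_MAX_MEM=2048m   # maximum JVM heap size (illustrative value)

Restart the service with sudo /etc/init.d/go-server restart for the new values to take effect.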

Installing GoCD agent on Linux

Installing the GoCD agent using the package manager requires root access on the machine. You also need Java 8 (the same version as the GoCD server) installed for the agent to run.

The installer will create a user called go if one does not exist on the machine. The home directory will be set to /var/go. If you want to create your own go user, make sure you do it before you install the GoCD agent.

RPM-based distributions (i.e. RedHat/CentOS/Fedora)

The GoCD agent RPM installer has been tested on RedHat Enterprise Linux and CentOS. It should work on most RPM based Linux distributions.

If you prefer to use the YUM repository and install via YUM, paste the following in your shell —

sudo curl https://download.gocd.org/gocd.repo -o /etc/yum.repos.d/gocd.repo
sudo yum install -y java-1.8.0-openjdk   # at least Java 8 is required; you may use another JRE/JDK if you prefer

Once you have the repository set up, execute

sudo yum install -y go-agent

Alternatively, if you have the agent RPM downloaded:

sudo yum install -y java-1.8.0-openjdk   # at least Java 8 is required; you may use another JRE/JDK if you prefer
sudo rpm -i go-agent-${version}.noarch.rpm

Managing the go-agent service on Linux

To manage the go-agent service, use the following command:

sudo /etc/init.d/go-agent [start|stop|status|restart]

Configuring the go-agent

After installing the go-agent service, you must configure it with the hostname (or IP address) of your GoCD server. To do this:

  1. Open /etc/default/go-agent in your favourite text editor.
  2. Change the IP address (127.0.0.1) in the line GO_SERVER_URL=https://127.0.0.1:8154/go to the hostname (or IP address) of your GoCD server.
  3. Save the file and exit your editor.
  4. Run /etc/init.d/go-agent [start|restart] to (re)start the agent.

Note: You can override the default environment for the GoCD agent by editing the file /etc/default/go-agent
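If you prefer to script steps 1-4 above, a non-interactive equivalent could look like the following, where gocd.example.com is a placeholder for your actual GoCD server hostname:

sudo sed -i 's|^GO_SERVER_URL=.*|GO_SERVER_URL=https://gocd.example.com:8154/go|' /etc/default/go-agent
sudo /etc/init.d/go-agent restart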

GoCD is now installed. Open port 8153 in the firewall and access the following URL in a browser:

http://<ip>:8153/go
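On CentOS 7 the default firewall is firewalld, so opening port 8153 would typically look like this (adjust the zone if you are not using the default one):

sudo firewall-cmd --permanent --add-port=8153/tcp
sudo firewall-cmd --reload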


pyDash – A Web Based Linux Performance Monitoring Tool

pydash is a lightweight web-based monitoring tool for Linux written in Python and Django plus Chart.js. It has been tested and can run on the following mainstream Linux distributions: CentOS, Fedora, Ubuntu, Debian, Arch Linux, Raspbian as well as Pidora.

You can use it to keep an eye on your Linux PC/server resources such as CPUs, RAM, network stats, processes including online users and more. The dashboard is developed entirely using Python libraries provided in the main Python distribution, therefore it has a few dependencies; you don’t need to install many packages or libraries to run it.

In this article, we will show you how to install pydash to monitor Linux server performance.

How to Install pyDash in Linux System

1. First install required packages: git and Python pip as follows:

-------------- On Debian/Ubuntu -------------- 
$ sudo apt-get install git python-pip
-------------- On CentOS/RHEL -------------- 
# yum install epel-release
# yum install git python-pip
-------------- On Fedora 22+ --------------
# dnf install git python-pip

2. If you have git and Python pip installed, next, install virtualenv which helps to deal with dependency issues for Python projects, as below:

# pip install virtualenv
OR
$ sudo pip install virtualenv

3. Now using git command, clone the pydash directory into your home directory like so:

# git clone https://github.com/k3oni/pydash.git
# cd pydash

4. Next, create a virtual environment for your project called pydashtest using the virtualenv command below.

$ virtualenv pydashtest #give a name for your virtual environment like pydashtest

Create Virtual Environment

Important: Take note the virtual environment’s bin directory path highlighted in the screenshot above, yours could be different depending on where you cloned the pydash folder.

5. Once you have created the virtual environment (pydashtest), you must activate it before using it as follows.

$ source /home/aaronkilik/pydash/pydashtest/bin/activate

Active Virtual Environment

From the screenshot above, you’ll note that the PS1 prompt changes indicating that your virtual environment has been activated and is ready for use.

6. Now install the pydash project requirements; if you are curious, view the contents of requirements.txt using the cat command, then install them as shown below.

$ cat requirements.txt
$ pip install -r requirements.txt

7. Now move into the pydash directory containing settings.py, or simply run the command below to open this file, and change the SECRET_KEY to a custom value.

$ vi pydash/settings.py

Set Secret Key

Save the file and exit.
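If you would rather not invent a SECRET_KEY by hand, one convenient way is to generate a random string from the shell and paste it into pydash/settings.py:

python -c "import random, string; print(''.join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(50)))"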

8. Afterward, run the Django command below to create the project database, install Django's auth system, and create a project superuser.

$ python manage.py syncdb

Answer the questions below according to your scenario:

Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
Email address: aaronkilik@gmail.com
Password: ###########
Password (again): ############

Create Project Database

9. At this point, all should be set, now run the following command to start the Django development server.

$ python manage.py runserver
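By default the development server listens only on 127.0.0.1. If pydash is installed on a remote machine and you want to reach the dashboard from your workstation, you can bind it to all interfaces instead (make sure the port is open in the firewall; newer Django versions also require the host to be listed in ALLOWED_HOSTS in settings.py):

$ python manage.py runserver 0.0.0.0:8000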

10. Next, open your web browser and type the URL: http://127.0.0.1:8000/ to get the web dashboard login interface. Enter the super user name and password you created while creating the database and installing Django’s auth system in step 8 and click Sign In.

pyDash Login Interface

11. Once you login into pydash main interface, you will get a section for monitoring general system info, CPU, memory and disk usage together with system load average.

Simply scroll down to view more sections.

pyDash Server Performance Overview

12. Next is a screenshot of pydash showing a section for keeping track of network interfaces, IP addresses, Internet traffic, disk reads/writes, online users and netstats.

pyDash Network Overview

13. Next is a screenshot of the pydash main interface showing a section to keep an eye on active processes on the system.

pyDash Active Linux Processes

Cloud Foundry

Cloud Foundry: automatic deployment of applications.

It includes BOSH, which communicates with the infrastructure, performs health monitoring, and ensures that the systems are up and running.

Docker hits memory for some accounting, but this can be disabled.

Security hardening for nginx (reverse proxy)

This document can be used when enhancing the security of your nginx server.

Features provided in Security Hardening for nginx server

  • In this security hardening we first update the nginx server. The newer version brings SPDY 3.1 support, authentication via subrequests, SSL session ticket support, IPv6 support for DNS, and PROXY protocol support. It also includes improved error logging, cache revalidation directives, SMTP pipelining, buffering options for FastCGI, improved support for MP4 streaming, and extended handling of byte-range requests for streaming and caching.

  • We also remove SSL support and add TLS support. It used to be believed that TLS v1.0 was marginally more secure than SSL v3.0, its predecessor. However, SSL v3.0 is getting very old and recent developments, such as the POODLE vulnerability, have shown that SSL v3.0 is now completely insecure. Subsequent versions of TLS (v1.1 and v1.2) are significantly more secure and fix many vulnerabilities present in SSL v3.0 and TLS v1.0. For example, the BEAST attack can completely break web sites running on the older SSL v3.0 and TLS v1.0 protocols. The newer TLS versions, if properly configured, prevent the BEAST and other attack vectors and provide many stronger ciphers and encryption methods.

  • We have also added SPDY support. SPDY is a two-layer HTTP-compatible protocol. The “upper” layer provides HTTP’s request and response semantics, while the “lower” layer manages encoding and sending the data. The lower layer of SPDY provides a number of benefits over standard HTTP. Namely, it sends fewer packets, uses fewer TCP connections and uses the TCP connections it makes more effectively. A single SPDY session allows concurrent HTTP requests to run over a single TCP/IP session. SPDY cuts down on the number of TCP handshakes required, and it cuts down on packet loss and bufferbloat.

  • We have also added the HTTP Strict Transport Security (HSTS) support. It prevents sslstrip-like attacks and provides zero tolerance for certification problems.
  • We have also added Diffie-Hellman key support. Diffie-Hellman key exchange, also called exponential key exchange, is a method of digital encryption that uses numbers raised to specific powers to produce decryption keys on the basis of components that are never directly transmitted. That makes it a very secure key exchange and helps prevent man-in-the-middle attacks.
 

Step-by-step guide

Following are the steps for security hardening of nginx server.

  1. Firstly, you will need to update the existing nginx server.
    • Login to your nginx server as root.
    • Check for the existing nginx version with the command nginx -v. The version should be > 1.5.
    • If your version is > 1.5, go to step 2. If your version is < 1.5, execute the following commands.
    • Check if there is a file named nginx.repo in /etc/yum.repos.d/.
    • cd /etc/yum.repos.d
    • vi nginx.repo
    • Enter the following lines into the file then save it.
      [nginx]
      name=nginx repo
      baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
      gpgcheck=0
      enabled=1
    • Then execute the following command: yum update nginx. This will update your nginx server to the latest version.

2. The following changes need to be made in all of the nginx .conf files, which are present in the /etc/nginx/conf.d/ folder.

    • In the server block for port 443, disable the SSLv2 and SSLv3 protocols. To achieve this, replace the line
      ssl_protocols SSLv2 SSLv3 TLSv1 with ssl_protocols TLSv1 TLSv1.1 TLSv1.2.
      SSLv2 and SSLv3 are considered insecure, so we disable them and use TLS instead.
    • Next we have to add the SPDY protocol configuration. SPDY (pronounced speedy) is an open networking protocol developed primarily at 
      Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security.
      To achieve this, add the following lines before the location block in the server block.

      spdy_keepalive_timeout 300;
      spdy_headers_comp 9;
    • Below the SPDY configuration, add the following lines to enable HTTP Strict Transport Security (HSTS), a web security policy mechanism
      which helps protect websites against protocol downgrade attacks and cookie hijacking.
      add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
      add_header X-Frame-Options DENY;
      add_header X-Content-Type-Options nosniff;
    • Now we have to add the Diffie-Hellman key into our conf files. Diffie-Hellman is an algorithm used to establish a shared secret between two parties.
      It is primarily used as a method of exchanging cryptographic keys for use in symmetric encryption algorithms like AES.
      First check whether openssl is installed on the nginx server; if not, install it with yum install openssl.
      1. After that, change into the certificates directory: cd /etc/nginx/certs/
      2. Then execute the following command: openssl dhparam -out dhparams.pem 1024. This will generate a dhparams.pem file in your /etc/nginx/certs/ directory. (A 2048-bit parameter file is stronger, if you can afford the longer generation time.)
      3. Now in your conf file comment out the line which says ssl_ciphers and add the following line: ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:
        DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:
        ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:
        ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:
        DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
      4. After this, ensure you have the configuration ssl_prefer_server_ciphers on; and then add the line ssl_dhparam /etc/nginx/certs/dhparams.pem;

After this, save your .conf files and execute the following command: service nginx restart. The security of your nginx server will now be significantly improved.
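Because a typo in any of the .conf files will keep nginx from starting, it is worth validating the configuration before restarting; for example:

nginx -t                 # test the configuration for syntax errors
service nginx restart    # restart only if the test reports that the syntax is ok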

Creating custom YUM repo

This page will be useful when creating your own custom yum repo and using it for your yum installations.

The standard RPM package management tool in Fedora, Red Hat Enterprise Linux, and CentOS is the yum package manager. Yum works quite well, if a little bit slower than other RPM package managers like apt4rpm and urpmi, but it is solid and handles dependencies extremely well. Using it with official and third-party repositories is a breeze to set up, but what if you want to use your own repository? Perhaps you manage a large computer lab or network and need to have — or want to have — certain packages available to these systems that you maintain in-house. Or perhaps you simply want to set up your own repository to share a few RPM packages.

The following are the steps to create your own repo.

Step-by-step guide

Follow these steps on server machine.

  1. Creating your own yum repository is very simple and straightforward. In order to do it, you need the createrepo tool, which can be found in the createrepo package, so to install it, execute as root: yum install createrepo
  2. Install Apache HTTPD. Apache will act as the server from which your clients will install the packages. You may need to respond to some prompts to confirm you want to complete the installation.

    yum install httpd

  3. Creating a custom repository requires RPM or DEB files. If you already have these files, proceed to the next step. Otherwise, download the appropriate versions of the rpms and their dependencies.

  4. Move your files to the web server directory and modify file permissions. For example, you might use the following commands:

    mv /tmp/repo/* /var/www/html/repos/centos/6/7

  5. After moving the files, visit http://<hostname>:80/repos/centos/6/7 in a browser, where <hostname> is the name or IP address of your web server.
  6. Add a user and password to your apache directory. Execute the following command.
    htpasswd -c /etc/repo repouser
    where repouser is your username and you will be prompted for the password. /etc/repo can be any location where you want to store your password file which will be created after this command.
  7. Add authentication to your Apache to add security. Add the following to your /etc/httpd/conf/httpd.conf file, where /var/www/html/repos is the path of your repo where you have your rpms stored:

    <Directory "/var/www/html/repos">
        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/repo
        Require valid-user
    </Directory>

    /etc/repo is the path of your password file.

  8. Execute the following command to create your repo.
    createrepo  /var/www/html/repos/centos/6/7
  9. Open port 80 in the firewall to accept incoming requests to Apache. The lokkit command below applies to CentOS 6; a firewalld equivalent for CentOS 7 is shown after this list.
    sudo lokkit -p 80:tcp
  10. Start your Apache server: service httpd start
    Visit http://<hostname>:80/repos/centos/6/7 in a browser to verify that you see an index of files.
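The lokkit utility was replaced by firewalld on CentOS 7, so on a CentOS 7 server the equivalent of step 9 (run as root) would be along these lines:

firewall-cmd --permanent --add-service=http
firewall-cmd --reload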

Follow these steps on client machine.

  1. Rename (or delete) the existing .repo files in /etc/yum.repos.d/, for example to xyz.repo.bak
  2. Create a file on the client systems with the following information and format, where <hostname> is the name of the web server you set up in the previous section.
    vi /etc/yum.repos.d/custom.repo
    The file name can be anything you want. Your repo will be referred to from this file.
  3. Add the following data in the newly created file (or you can add it in /etc/yum.conf), replacing <username>, <password> and <hostname> with your own values:
    [custom]
    name=CentOS-$releasever - Base
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
    enabled=1

    #released updates
    [custom-updates]
    name=CentOS-$releasever - Updates
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #additional packages that may be useful
    [custom-extras]
    name=CentOS-$releasever - Extras
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #additional packages that extend functionality of existing packages
    [custom-centosplus]
    name=CentOS-$releasever - Plus
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    #contrib - packages by CentOS users
    [custom-contrib]
    name=CentOS-$releasever - Contrib
    baseurl=http://<username>:<password>@<hostname>/repos/centos/6/7/
    gpgcheck=1
    enabled=0
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    Here <username> will be repouser (or whichever user you added) and <password> is the password you entered, for example: http://repouser:PassW0rd@172.16.120.50/repos/centos/6/7/

  4. Execute the following commands to update yum.
    yum clean all
    yum update
  5. To use your custom repo, execute the following command:
    yum install packagename
    For example: yum install tomcat6
    Yum will look for the packages in the newly created repo and install from there.
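To confirm that yum is actually resolving packages from your custom repository, you can also list what the repo serves; the repository id custom below is the one defined in the example file above:

yum repolist enabled
yum --disablerepo="*" --enablerepo="custom" list available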

In case you want to add custom group to your yum repo, use the following steps:

  1. Change directory to where your repo is (i.e. where your rpms are stored). For example, if the repo is at /var/www/html/repos/centos/6/7 then:

    cd /var/www/html/repos/centos/6/7
  2. Suppose your groupinstall command is sudo yum groupinstall test-package, the mandatory packages required are yum and glibc, and the optional package is rpm:

    yum-groups-manager -n "test-package" --id=test-package --save=test-package.xml --mandatory yum glibc --optional rpm

  3. A file named test-package.xml will be created in your current working directory. Execute the following command.
    sudo createrepo -g /path/to/xml/file /path/to/repo
    Example: sudo createrepo -g /var/www/html/repos/centos/6/7/test-package.xml /var/www/html/repos/centos/6/7
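Once the repo metadata has been regenerated with the group file, clients can refresh their cache and install the group; a typical sequence for the test-package group defined above would be:

sudo yum clean all
sudo yum groupinstall test-package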

Hashicorp Vault

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

The key features of Vault are:

1) Secure Secret Storage

2) Dynamic Secrets

3) Data Encryption

4) Leasing and Renewal

5) Revocation

 

Terms used in Vault

 

  • Storage Backend – A storage backend is responsible for durable storage of encrypted data. Backends are not trusted by Vault and are only expected to provide durability. The storage backend is configured when starting the Vault server.
  • Barrier – The barrier is cryptographic steel and concrete around the Vault. All data that flows between Vault and the Storage Backend passes through the barrier.
  • Secret Backend – A secret backend is responsible for managing secrets.
  • Audit Backend – An audit backend is responsible for managing audit logs. Every request to Vault and response from Vault goes through the configured audit backends.
  • Credential Backend – A credential backend is used to authenticate users or applications which are connecting to Vault. Once authenticated, the backend returns the list of applicable policies which should be applied. Vault takes an authenticated user and returns a client token that can be used for future requests.
  • Client Token – A client token is conceptually similar to a session cookie on a web site. Once a user authenticates, Vault returns a client token which is used for future requests. The token is used by Vault to verify the identity of the client and to enforce the applicable ACL policies. This token is passed via HTTP headers.
  • Secret – A secret is the term for anything returned by Vault which contains confidential or cryptographic material. Not everything returned by Vault is a secret, for example system configuration, status information, or backend policies are not considered Secrets.
  • Server – Vault depends on a long-running instance which operates as a server. The Vault server provides an API which clients interact with and manages the interaction between all the backends, ACL enforcement, and secret lease revocation. Having a server based architecture decouples clients from the security keys and policies, enables centralized audit logging and simplifies administration for operators.

Vault Architecture

A very high level overview of Vault looks like this:

 

There is a clear separation of components that are inside or outside of the security barrier. Only the storage backend and the HTTP API are outside, all other components are inside the barrier.

 

The storage backend is untrusted and is used to durably store encrypted data. When the Vault server is started, it must be provided with a storage backend so that data is available across restarts. The HTTP API similarly must be started by the Vault server on start so that clients can interact with it.

Once started, the Vault is in a sealed state. Before any operation can be performed on the Vault it must be unsealed. This is done by providing the unseal keys. When the Vault is initialized it generates an encryption key which is used to protect all the data. That key is protected by a master key. By default, Vault uses a technique known as Shamir’s secret sharing algorithm to split the master key into 5 shares, any 3 of which are required to reconstruct the master key.

Keys

The number of shares and the minimum threshold required can both be specified. Shamir’s technique can be disabled, and the master key used directly for unsealing. Once Vault retrieves the encryption key, it is able to decrypt the data in the storage backend, and enters the unsealed state. Once unsealed, Vault loads all of the configured audit, credential and secret backends.

The configuration of those backends must be stored in Vault since they are security sensitive. Only users with the correct permissions should be able to modify them, meaning they cannot be specified outside of the barrier. By storing them in Vault, any changes to them are protected by the ACL system and tracked by audit logs.

After the Vault is unsealed, requests can be processed from the HTTP API to the Core. The core is used to manage the flow of requests through the system, enforce ACLs, and ensure audit logging is done.

When a client first connects to Vault, it needs to authenticate. Vault provides configurable credential backends providing flexibility in the authentication mechanism used. Human friendly mechanisms such as username/password or GitHub might be used for operators, while applications may use public/private keys or tokens to authenticate. An authentication request flows through core and into a credential backend, which determines if the request is valid and returns a list of associated policies.

Policies are just a named ACL rule. For example, the “root” policy is built-in and permits access to all resources. You can create any number of named policies with fine-grained control over paths. Vault operates exclusively in a whitelist mode, meaning that unless access is explicitly granted via a policy, the action is not allowed. Since a user may have multiple policies associated, an action is allowed if any policy permits it. Policies are stored and managed by an internal policy store. This internal store is manipulated through the system backend, which is always mounted at sys/.

Once authentication takes place and a credential backend provides a set of applicable policies, a new client token is generated and managed by the token store. This client token is sent back to the client, and is used to make future requests. This is similar to a cookie sent by a website after a user logs in. The client token may have a lease associated with it depending on the credential backend configuration. This means the client token may need to be periodically renewed to avoid invalidation.

Once authenticated, requests are made providing the client token. The token is used to verify the client is authorized and to load the relevant policies. The policies are used to authorize the client request. The request is then routed to the secret backend, which is processed depending on the type of backend. If the backend returns a secret, the core registers it with the expiration manager and attaches a lease ID. The lease ID is used by clients to renew or revoke their secret. If a client allows the lease to expire, the expiration manager automatically revokes the secret.

The core handles logging of requests and responses to the audit broker, which fans the request out to all the configured audit backends. Outside of the request flow, the core performs certain background activity. Lease management is critical, as it allows expired client tokens or secrets to be revoked automatically. Additionally, Vault handles certain partial failure cases by using write ahead logging with a rollback manager. This is managed transparently within the core and is not user visible.

Steps to Install Vault


1) Installing Vault is simple. There are two approaches to installing Vault: downloading a precompiled binary for your system, or installing from source. We will use the precompiled binary format. To install the precompiled binary, download the appropriate package for your system. 

2) You can use the following command as well: wget https://releases.hashicorp.com/vault/0.6.0/vault_0.6.0_linux_amd64.zip

Unzip by the command unzip vault_0.6.0_linux_amd64.zip

You will have a binary called vault in it. 

3) Once the zip is downloaded, unzip it into any directory. The vault binary inside is all that is necessary to run Vault. Any additional files, if any, aren't required to run Vault.

Copy the binary to anywhere on your system. If you intend to access it from the command-line, make sure to place it somewhere on your PATH.

4) Add the path of your vault binary to your .bash_profile file in your home directory.

Execute the following to do it: vi ~/.bash_profile

export PATH=$PATH:/home/compose/vault  (If your vault binary is in /home/compose/vault/ directory)

Alternatively you can also add the unzipped vault binary file in /usr/bin so that you will be able to access vault as a command.

Verifying the Installation

To verify Vault is properly installed, execute the vault binary on your system. You should see help output. If you are executing it from the command line, make sure it is on your PATH or you may get an error about vault not being found.
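A quick way to check both the PATH setup and the binary itself is to print the version and the help text:

vault version      # prints the installed Vault version
vault -h           # lists the available subcommands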

Starting and configuring vault

1) Vault operates as a client/server application. The Vault server is the only piece of the Vault architecture that interacts with the data storage and backends. All operations done via the Vault CLI interact with the server over a TLS connection.

2) Before starting vault you will need to set the VAULT_ADDR environment variable. To set it, execute the following command: export VAULT_ADDR='http://127.0.0.1:8200'. 8200 is the default port for vault. You can set this environment variable permanently across all sessions by adding the following line to /etc/environment: VAULT_ADDR='http://127.0.0.1:8200'

3) The dev server is a built-in flag to start a pre-configured server that is not very secure but useful for playing with Vault locally. 

 

To start the Vault dev server, run vault server -dev

 

$ vault server -dev
WARNING: Dev mode is enabled!

In this mode, Vault is completely in-memory and unsealed.
Vault is configured to only have a single unseal key. The root
token has already been authenticated with the CLI, so you can
immediately begin using the Vault CLI.

The only step you need to take is to set the following
environment variable since Vault will be talking without TLS:

    export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are reproduced below in case you
want to seal/unseal the Vault or play with authentication.

Unseal Key: 2252546b1a8551e8411502501719c4b3
Root Token: 79bd8011-af5a-f147-557e-c58be4fedf6c

==> Vault server configuration:

         Log Level: info
           Backend: inmem
        Listener 1: tcp (addr: "127.0.0.1:8200", tls: "disabled")

...

 

You should see output similar to that above. Vault does not fork, so it will continue to run in the foreground; to connect to it with later commands, open another shell.

 

As you can see, when you start a dev server, Vault warns you loudly. The dev server stores all its data in-memory (but still encrypted), listens on localhost without TLS, and automatically unseals and shows you the unseal key and root access key. The important thing about the dev server is that it is meant for development only. Do not run the dev server in production. Even if it was run in production, it wouldn't be very useful since it stores data in-memory and every restart would clear all your secrets. You can practise vault read/write commands here. We won't be using vault in dev mode as we want our data to be stored permanently.

In the next steps you will see how to start and configure a durable vault server.

4) Now you need to make an .hcl file to hold the vault configuration.

HCL (HashiCorp Configuration Language) is a configuration language built by HashiCorp. The goal of HCL is to build a structured configuration language that is both human and machine friendly for use with command-line tools, but specifically targeted towards DevOps tools, servers, etc. HCL is also fully JSON compatible. That is, JSON can be used as completely valid input to a system expecting HCL. This helps make systems interoperable with other systems. HCL is heavily inspired by libucl, nginx configuration, and other similar formats. You can find more details about HCL at https://github.com/hashicorp/hcl

5) You will need to specify a physical backend for the vault. There are various options for the physical backend.

The only physical backends actively maintained by HashiCorp are consul, inmem, and file.

  • consul – Store data within Consul. This backend supports HA. It is the most recommended backend for Vault and has been shown to work at high scale under heavy load.
  • etcd – Store data within etcd. This backend supports HA. This is a community-supported backend.
  • zookeeper – Store data within Zookeeper. This backend supports HA. This is a community-supported backend.
  • dynamodb – Store data in a DynamoDB table. This backend supports HA. This is a community-supported backend.
  • s3 – Store data within an Amazon S3 bucket. This backend does not support HA. This is a community-supported backend.
  • azure – Store data in an Azure Storage container. This backend does not support HA. This is a community-supported backend.
  • swift – Store data within an OpenStack Swift container. This backend does not support HA. This is a community-supported backend.
  • mysql – Store data within MySQL. This backend does not support HA. This is a community-supported backend.
  • postgresql – Store data within PostgreSQL. This backend does not support HA. This is a community-supported backend.
  • inmem – Store data in-memory. This is only really useful for development and experimentation. Data is lost whenever Vault is restarted.
  • file – Store data on the filesystem using a directory structure. This backend does not support HA.

Each of these backends has different configuration options. For simplicity we will be using the file backend here. A sample .hcl file can be:

You can save the following file with any name but with the .hcl extension, for example config.hcl, which we will store in the /home/compose/data/ folder.

backend "file" {
  path = "/home/compose/data"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

 

backend "file" specifies that the data produced by the vault will be stored in a file format

path specifies the folder in which the files will be stored; it can be any folder.

listener will be tcp

address specifies which machines will be able to access vault. 127.0.0.1:8200 will give access for requests only from localhost. 8.8.8.8:8200 or 0.0.0.0:8200 will give access to vault from anywhere.

tls_disable will be 1 if you are not providing any SSL certificates for authentication from client.

This is the basic file which you can use.

6) Start your vault server with the following command:

vault server -config=/home/compose/data/config.hcl

Point -config to the .hcl config file you just created. You need to run this command either as root or with sudo. All the following commands can be run by either root or compose without sudo.

7) If you have started your vault server for the first time, you will need to initialize it. Run the following command:

vault init

This will give output like the following:

 

Unseal Key 1: a33a2812dskfybjgdbgy85a7d6da375bc9bc6c137e65778676f97b3f1482b26401
Unseal Key 2: fa91a7128dfd30f7c500ce1ffwefgtnghjj2871f3519773ada9d04bbcc3620ad02
Unseal Key 3: bb8d5e6d9372c3331044ffe678a4356912035209d6fca68f542f52cf2f3d5e0203
Unseal Key 4: 8c5977a14f8da814fa2f204ac5c2160927cdcf354fhfghfgjbgdbbb0347e4f8b04
Unseal Key 5: cd458edecf025bd02f6b11b3e43341dgdgewtea77756fagh6dc0ba4d775d312405
Initial Root Token: f15db23h-eae6-974f-45b7-se47u52d96ea
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.

Save these somewhere safe, as you will need them every time you unseal your vault server to write or access data. By default you will need to enter any three of the five unseal keys to unseal the vault completely.

You can refer to the architecture section above to understand how the keys work.

But if you want to change the default and use only one key, you can initialize the vault as: vault init -key-shares=1 -key-threshold=1, which will generate only one unseal key.

8) After initializing your vault, the next step is to unseal it; otherwise you won't be able to perform any operations on the vault. Execute the following command:

vault unseal

You will be asked for an unseal key; enter any one of the unseal keys generated during initialization. By default vault needs three keys out of five to be completely unsealed.

In the command output you can check that the Unseal Progress is 1. That means your first key was correct. The Unseal Progress count will increase every time you execute unseal and enter a key.

So you will need to repeat the above step a total of three times, entering a different key each time.

After the third key, your vault will be completely unsealed.
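If you find the interactive prompt tedious, the unseal key can also be passed as an argument. A sketch of unsealing with three keys held in shell variables (the variable names here are illustrative):

vault unseal $UNSEAL_KEY_1
vault unseal $UNSEAL_KEY_2
vault unseal $UNSEAL_KEY_3
vault status     # should now report that the vault is unsealed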

9) You will now need to log in to the vault server to read from or write to the vault. Execute the following command to log in:

vault auth <initial-root-token>

Where <initial-root-token> is the token given to you when you initialized the vault. It is printed after the five unseal keys. This gives you root access to vault to perform any activities.

Vault Commands

1) vault status to get the status of vault whether it is running or not.

2) vault write secret/hello excited=yes to write a key-value pair into the vault, where secret/hello is the path to access your key, "excited" is your key name and "yes" is the value. Key and value can be anything.

3) vault read secret/hello to read the value of the key you just wrote.

4) vault write secret/hello excited=very-much to change/update the value of your key

5) vault write secret/hello excited=yes city=Pune to add multiple keys. You can just separate them with spaces.

6) vault write secret/hello abc=xyz will remove the existing keys (excited and city) and create a new one (abc).

7) vault read -format=json secret/hello returns keys and values in JSON.

8) vault delete secret/hello to delete your path.

9) If you don't want your path to start with secret/, you can mount another backend such as generic.

Execute vault mount generic. Then you will be able to add paths like generic/hello instead of secret/hello. You can get more info on secret backends here: https://www.vaultproject.io/docs/secrets/index.html

10) vault mounts to see the list of mounts

11) vault write generic/hello world=Today to write to newly mounted secret backend.

12) vault read generic/hello to read it.

13) vault token-create: with this, vault will create a token which you can give to a user so that they can log in to the vault. This will add a new user to your server.

The new user can log in with vault auth <token>. You can renew or revoke the token with vault renew <token> or vault revoke <token>.

To add a user with username and password and not with token use the following commands.

14) vault auth-enable userpass

vault auth -methods               //This will display the authentication methods; you should see userpass in the list

vault write auth/userpass/users/user1 password=Canopy1! policies=root    //To add username user1 with password Canopy1!, which will have the root policy attached to it

vault auth -method=userpass username=user1 password=Canopy1!    //The user can log in with this

To add read-only policy to a user execute the following commands

15) Create a file with extension .hcl. Here I have created read-only.hcl

path "secret/*" {
  policy = "read"
}

path "auth/token/lookup-self" {
  policy = "read"
}

vault policy-write read-policy read-only.hcl //to add the policy named read-policy from file read-only.hcl

vault policies  //to display list of policies

vault policies read-policy //to display newly created policy

vault write auth/userpass/users/read-user password=Canopy1! policies=read-policy   //to add user with that policy

Now if the new user logs in he/she won’t be able to write anything in vault and just read it.

16) vault audit-enable file file_path=/home/compose/data/vault_audit.log //This will add the logs in vault_audit.log file.

Configure vault and AMP

You can add the following lines in brooklyn.properties to access the vault key-values

brooklyn.external.vault=org.apache.brooklyn.core.config.external.vault.VaultUserPassExternalConfigSupplier
brooklyn.external.vault.username=user1 //Login username you created
brooklyn.external.vault.password=Canopy1!  //Login password

brooklyn.external.vault.endpoint=http://172.16.120.159:8200/   //Ip address of your vault server
brooklyn.external.vault.path=secret/CP0000/AWS     //Path to your secrets

brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("vault", "identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("vault", "credential")

This will make AMP access your creds from vault.

Backup and recovery

All of the required vault data is present in the folder you specified as the path variable in your config.hcl, here /home/compose/data. So just take a backup of that folder and copy it onto the recovered machine. A prerequisite is that the vault binary must be present on that machine.

A backup can be taken via a cron job, for example:

0 0 * * * rsync -avz --delete root@vault:/home/compose/data /backup/vault/
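Restoring is essentially the reverse: copy the backed-up data folder back into place on the recovered machine and start Vault against it with the same config.hcl. A minimal sketch, assuming the same paths as above:

rsync -avz /backup/vault/data/ /home/compose/data/
vault server -config=/home/compose/data/config.hcl
# then unseal with the original unseal keys as described earlier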

Upload artifacts to AWS S3

This document can be used when you want to upload files to AWS S3.

Step-by-step guide

Execute the following steps:

  1. Install ruby with the following commands on the data machine where the backup will be stored:
    gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    sudo \curl -L https://get.rvm.io | bash -s stable --ruby
    source /home/compose/.rvm/scripts/rvm
    rvm list known   ##### This command will show the available ruby versions
    You can install the version of your choice with the following command:
    rvm install ruby 2.3.0  ### Where 2.3.0 is the ruby version to be installed
    You can install the latest ruby version with the following command:
    rvm install ruby --latest
    Check the version of ruby installed with:
    ruby -v
  2. Check if ruby gem is present in your machine: gem -v
  3. If not present, install it with sudo yum install rubygems
  4. Then install aws-sdk:  gem install aws-sdk
  5. Add the code below in a file named upload-to-s3.rb:
    # Note: Please replace the keys below with your production settings
    # 1. access_key_id
    # 2. secret_access_key
    # 3. region
    # 4. buckets[name] is the actual bucket name in S3

    require 'aws-sdk'

    def upload(file_name, destination, directory, bucket)
      destination_file_name = destination

      puts "Creating #{destination_file_name} file...."

      # Zip the cloudsoft persisted folder
      `tar -cvzf #{destination_file_name} #{directory}`

      puts "Created #{destination_file_name} file..."

      puts "uploading #{destination} file to aws..."
      ENV['AWS_ACCESS_KEY_ID'] = 'Your key here'
      ENV['AWS_SECRET_ACCESS_KEY'] = 'Your secret here'
      ENV['AWS_REGION'] = 'Your region here'

      s3 = Aws::S3::Client.new

      File.open(destination_file_name, 'rb') do |file|
        s3.put_object(bucket: 'bucket_name', key: file_name, body: file)
      end
      #@s3 = Aws::S3::Client.new(aws_credentials)
      #@s3_bucket = @s3.buckets[bucket]
      #@s3_bucket.objects[file_name].write(file:destination_file_name)

      puts "uploaded #{destination} file to aws..."

      puts "deleting #{destination} file..."
      `rm -rf #{destination}`
      puts "deleted #{destination} file..."
    end

    def clear(nfsLoc)
      # Remove all existing .tar.gz files from the folders
      nfsLoc.each_pair do |key, value|
        puts "deleting #{key} file..."
        Dir["#{key}/*.tar.gz"].each do |path|
          puts path
          `rm -rf #{path}`
        end

        puts "deleted #{key} file..."
      end
    end

    def start()
      nfsLoc = {'/backup_dir' => 'bucket_name/data'}

      nfsLoc.each_pair do |key, value|
        puts "#{key} #{value}"

        Dir.glob("#{key}/*") do |dname|
          filename = '%s.%s' % [dname, 'tar.gz']

          file = File.basename(filename)
          folderName = File.basename(dname)
          bucket = '%s/%s' % ["#{value}", folderName]

          puts "..... Uploading started for %s file to AWS S3 ....." % [file]
          t = '%s/' % dname
          puts upload(file, filename, t, bucket)

          puts "..... Uploading finished for %s file to AWS S3 ....." % [file]
        end
      end
    end

    start()

  6. After that execute the following:
    ruby upload-to-s3.rb
  7. If adding this to a Jenkins job, add the following line in the pre-build script:
    source ~/.rvm/scripts/rvm
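If the upload should run on a schedule rather than from a Jenkins job, the rvm setup and the ruby invocation can be combined in a cron entry; the paths below are illustrative and assume the script lives in /home/compose:

0 2 * * * /bin/bash -lc 'source /home/compose/.rvm/scripts/rvm && ruby /home/compose/upload-to-s3.rb' >> /var/log/upload-to-s3.log 2>&1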