An Easy Way to Hide Files and Directories in Linux

If you occasionally share your Linux desktop machine with family members, friends, or colleagues at your workplace, you have a reason to hide certain private files and directories. The question is: how can you do this?

In this tutorial, we will explain an easy and effective way to hide files and directories and view hidden files/directories in Linux from the terminal and GUI.

As we’ll see below, hiding files and directories in Linux is quite simple.

How to Hide Files and Directories in Linux

To hide a file or directory from the terminal, simply add a dot (.) at the start of its name, using the mv command as follows.

$ ls
$ mv sync.ffs_db .sync.ffs_db
$ ls

Hide File in Linux Terminal

Using the GUI method, the same idea applies: just rename the file by adding a . at the start of its name, as shown below.

Hide File in Linux Using File Manager

Once you have renamed it, the file will still be visible; move out of the directory and open it again, and it will be hidden thereafter.
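The terminal method can be sketched end to end; this uses a throwaway directory and the sync.ffs_db file name from the example above:

```shell
# Sketch: hide and un-hide a file by renaming it with a leading dot.
# The /tmp/hide-demo directory is a throwaway location for the demo.
mkdir -p /tmp/hide-demo && cd /tmp/hide-demo
touch sync.ffs_db

mv sync.ffs_db .sync.ffs_db   # hide: add a dot at the start of the name
ls                            # plain ls no longer lists the file
ls -a                         # ls -a still reveals it

mv .sync.ffs_db sync.ffs_db   # un-hide: strip the leading dot again
```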

How to View Hidden Files and Directories in Linux

To view hidden files, run the ls command with the -a flag, which shows all files in a directory (including hidden ones), or the -al flag for a long listing.

$ ls -a
OR
$ ls -al

View Hidden Files in Linux Terminal

From a GUI file manager, go to View and check the option Show Hidden Files to view hidden files or directories.

View Hidden File Using File Manager

How to Compress Files and Directories with a Password

In order to add a little security to your hidden files, you can compress them with a password and then hide them from a GUI file manager as follows.

Select the file or directory and right-click on it, then choose Compress from the menu. When the compression preferences interface appears, click on “Other Options” to reveal the password option, as shown in the screenshot below.

Once you have set the password, click on Create.

Compress Files with Password in Linux

From now on, each time anyone wants to open the file, they’ll be asked to provide the password created above.

Enter Password to View Files

Now you can hide the file by renaming it with a . as we explained before.

Docker Security


Docker containers share the kernel with the machine they are running on.


If one of the containers starts using up more resources (CPU, RAM), the other containers might run into a denial-of-service (DoS) issue.


An attacker can break out from a container into the host machine or into other containers.


Make sure that the images you pull from Docker Hub come from trusted sources.


You should be careful with what secrets you store in your containers.


You can use the commands:

docker network disconnect <network> nh

Here nh is the name of the container and <network> is the network it is attached to (for example, bridge). This will disconnect the container from that network, making it inaccessible over it.

docker diff nh

docker diff shows which files have been added, changed, or deleted in a container's filesystem compared to its image (here nh is the container name).

If you do not want external or destructive writes to modify your containers, you can make your containers read-only.


Specify the --read-only option when running your container.
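Taken together, the three hardening measures above look like this in practice. This is a sketch: the container name nh comes from the notes above, while alpine and bridge are illustrative choices.

```
$ docker run -d --read-only --name nh alpine sleep 3600   # read-only root filesystem
$ docker diff nh                                          # list files changed since the container started
$ docker network disconnect bridge nh                     # cut the container off from the default bridge network
```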


Security hardening for nginx (reverse proxy)

This document can be used when enhancing the security of your nginx server.

Features provided in Security Hardening for nginx server

  • In this security hardening we first update the nginx server. The newer version brings SPDY 3.1 support, authentication via subrequests, SSL session ticket support, IPv6 support for DNS, and PROXY protocol support. It also includes improved error logging, cache revalidation directives, SMTP pipelining, buffering options for FastCGI, improved support for MP4 streaming, and extended handling of byte-range requests for streaming and caching.

  • We also disable SSL support and enable TLS support. It used to be believed that TLS v1.0 was marginally more secure than its predecessor, SSL v3.0. However, SSL v3.0 is very old, and recent developments such as the POODLE vulnerability have shown that it is now completely insecure. Subsequent versions of TLS (v1.1 and v1.2) are significantly more secure and fix many vulnerabilities present in SSL v3.0 and TLS v1.0, for example the BEAST attack, which can completely break web sites running on the older SSL v3.0 and TLS v1.0 protocols. The newer TLS versions, if properly configured, prevent the BEAST and other attack vectors and provide many stronger ciphers and encryption methods.

  • We have also added SPDY support. SPDY is a two-layer HTTP-compatible protocol. The “upper” layer provides HTTP’s request and response semantics, while the “lower” layer manages encoding and sending the data. The lower layer of SPDY provides a number of benefits over standard HTTP: it sends fewer packets, uses fewer TCP connections, and uses the connections it makes more effectively. A single SPDY session allows concurrent HTTP requests to run over a single TCP/IP session. SPDY cuts down on the number of TCP handshakes required, and it cuts down on packet loss and bufferbloat.

  • We have also added HTTP Strict Transport Security (HSTS) support. It prevents sslstrip-like attacks and provides zero tolerance for certificate problems.
  • We have also added Diffie-Hellman key support. Diffie-Hellman key exchange, also called exponential key exchange, is a method of digital encryption that uses numbers raised to specific powers to produce decryption keys from components that are never directly transmitted. That makes it a very secure key exchange and prevents man-in-the-middle attacks.
 

Step-by-step guide

Following are the steps for security hardening of nginx server.

  1. Firstly, you will need to update the existing nginx server.
    • Log in to your nginx server as root.
    • Check the existing nginx version with the command nginx -v. The version should be > 1.5.
    • If your version is > 1.5, go to step 2. If it is < 1.5, execute the following commands.
    • Check whether there is a file named nginx.repo in /etc/yum.repos.d/.
    • cd /etc/yum.repos.d
    • vi nginx.repo
    • Enter the following lines into the file then save it.
      [nginx]
      name=nginx repo
      baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
      gpgcheck=0
      enabled=1
    • Then execute the command yum update nginx. This will update your nginx server to the latest version.
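The repo setup above can be scripted. This sketch writes nginx.repo to the current directory so you can inspect it first; copying it into /etc/yum.repos.d/ and running yum still require root:

```shell
# Sketch: generate the nginx.repo file with the contents listed above.
cat > nginx.repo <<'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
EOF

# As root, install the repo file and update nginx:
# cp nginx.repo /etc/yum.repos.d/nginx.repo
# yum update nginx
```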

2. The following changes need to be made in all of the nginx .conf files, which are present in the /etc/nginx/conf.d/ folder.

    • In the server block for port 443 disable the SSLv2 and SSLv3 protocol. To achieve this replace the line
      ssl_protocols SSLv2 SSLv3 TLSv1 with ssl_protocols TLSv1 TLSv1.1 TLSv1.2.
      SSLv2 and SSLv3 are considered to be insecure so we have to disable them and add TLS in place.
    • Next we have to add the SPDY protocol configurations. SPDY (pronounced speedy) is an open networking protocol developed primarily at 
      Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security.
      To achieve this, add the following lines before the location block in the server block.

      spdy_keepalive_timeout 300;
      spdy_headers_comp 9;
    • Below the SPDY configuration, add the following lines for HTTP Strict Transport Security (HSTS), a web security policy mechanism 
      which helps to protect websites against protocol downgrade attacks and cookie hijacking.
      add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
      add_header X-Frame-Options DENY;
      add_header X-Content-Type-Options nosniff;
    • Now we have to add the Diffie-Hellman key to our conf files. Diffie-Hellman is an algorithm used to establish a shared secret between two parties. 
      It is primarily used as a method of exchanging cryptographic keys for use in symmetric encryption algorithms like AES.
      First, check whether openssl is installed on the nginx server. If not, install it with yum install openssl. 
      1. After that, change to the certificates directory: cd /etc/nginx/certs/
      2. Then execute the following command: openssl dhparam -out dhparams.pem 2048. (Use at least 2048 bits; 1024-bit Diffie-Hellman groups are considered weak.) This will generate a dhparams.pem file in your /etc/nginx/certs/ directory.
      3. Now in your conf file, comment out the line which says ssl_ciphers and add the following line: ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:
        DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:
        ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:
        ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:
        DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
      4. After this line, ensure you have the following configuration: ssl_prefer_server_ciphers on; After that, add the following line: ssl_dhparam /etc/nginx/certs/dhparams.pem;
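Pulling the directives from this step together, the relevant part of a hardened server block might look like the sketch below. The server_name, certificate paths, and upstream address are placeholders, and the cipher list is elided; use the full list from step 3.

```nginx
server {
    listen 443 ssl spdy;                  # spdy requires nginx built with SPDY support
    server_name example.com;              # placeholder

    ssl_certificate     /etc/nginx/certs/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/server.key;

    # TLS only; SSLv2/SSLv3 disabled
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/certs/dhparams.pem;
    # ssl_ciphers '...';                  # the full cipher list from step 3 goes here

    # SPDY tuning
    spdy_keepalive_timeout 300;
    spdy_headers_comp 9;

    # HSTS and related headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder upstream for the reverse proxy
    }
}
```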

After this, save your .conf files and execute the following command: service nginx restart. Your nginx server's security will now be improved.

Hashicorp Vault

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

The key features of Vault are:

1) Secure Secret Storage

2) Dynamic Secrets

3) Data Encryption

4) Leasing and Renewal

5) Revocation

 

Terms used in Vault

 

  • Storage Backend – A storage backend is responsible for durable storage of encrypted data. Backends are not trusted by Vault and are only expected to provide durability. The storage backend is configured when starting the Vault server.
  • Barrier – The barrier is cryptographic steel and concrete around the Vault. All data that flows between Vault and the Storage Backend passes through the barrier.
  • Secret Backend – A secret backend is responsible for managing secrets.
  • Audit Backend – An audit backend is responsible for managing audit logs. Every request to Vault and response from Vault goes through the configured audit backends.
  • Credential Backend – A credential backend is used to authenticate users or applications which are connecting to Vault. Once authenticated, the backend returns the list of applicable policies which should be applied. Vault takes an authenticated user and returns a client token that can be used for future requests.
  • Client Token – A client token is conceptually similar to a session cookie on a web site. Once a user authenticates, Vault returns a client token which is used for future requests. The token is used by Vault to verify the identity of the client and to enforce the applicable ACL policies. This token is passed via HTTP headers.
  • Secret – A secret is the term for anything returned by Vault which contains confidential or cryptographic material. Not everything returned by Vault is a secret, for example system configuration, status information, or backend policies are not considered Secrets.
  • Server – Vault depends on a long-running instance which operates as a server. The Vault server provides an API which clients interact with and manages the interaction between all the backends, ACL enforcement, and secret lease revocation. Having a server based architecture decouples clients from the security keys and policies, enables centralized audit logging and simplifies administration for operators.

Vault Architecture

A very high level overview of Vault looks like this:

 

There is a clear separation of components that are inside or outside of the security barrier. Only the storage backend and the HTTP API are outside; all other components are inside the barrier.

 

The storage backend is untrusted and is used to durably store encrypted data. When the Vault server is started, it must be provided with a storage backend so that data is available across restarts. The HTTP API similarly must be started by the Vault server on start so that clients can interact with it.

Once started, the Vault is in a sealed state. Before any operation can be performed on the Vault it must be unsealed. This is done by providing the unseal keys. When the Vault is initialized it generates an encryption key which is used to protect all the data. That key is protected by a master key. By default, Vault uses a technique known as Shamir’s secret sharing algorithm to split the master key into 5 shares, any 3 of which are required to reconstruct the master key.

Keys

The number of shares and the minimum threshold required can both be specified. Shamir’s technique can be disabled, and the master key used directly for unsealing. Once Vault retrieves the encryption key, it is able to decrypt the data in the storage backend, and enters the unsealed state. Once unsealed, Vault loads all of the configured audit, credential and secret backends.

The configuration of those backends must be stored in Vault since they are security sensitive. Only users with the correct permissions should be able to modify them, meaning they cannot be specified outside of the barrier. By storing them in Vault, any changes to them are protected by the ACL system and tracked by audit logs.

After the Vault is unsealed, requests can be processed from the HTTP API to the Core. The core is used to manage the flow of requests through the system, enforce ACLs, and ensure audit logging is done.

When a client first connects to Vault, it needs to authenticate. Vault provides configurable credential backends providing flexibility in the authentication mechanism used. Human friendly mechanisms such as username/password or GitHub might be used for operators, while applications may use public/private keys or tokens to authenticate. An authentication request flows through core and into a credential backend, which determines if the request is valid and returns a list of associated policies.

Policies are just a named ACL rule. For example, the “root” policy is built-in and permits access to all resources. You can create any number of named policies with fine-grained control over paths. Vault operates exclusively in a whitelist mode, meaning that unless access is explicitly granted via a policy, the action is not allowed. Since a user may have multiple policies associated, an action is allowed if any policy permits it. Policies are stored and managed by an internal policy store. This internal store is manipulated through the system backend, which is always mounted at sys/.

Once authentication takes place and a credential backend provides a set of applicable policies, a new client token is generated and managed by the token store. This client token is sent back to the client, and is used to make future requests. This is similar to a cookie sent by a website after a user logs in. The client token may have a lease associated with it depending on the credential backend configuration. This means the client token may need to be periodically renewed to avoid invalidation.

Once authenticated, requests are made providing the client token. The token is used to verify the client is authorized and to load the relevant policies. The policies are used to authorize the client request. The request is then routed to the secret backend, which is processed depending on the type of backend. If the backend returns a secret, the core registers it with the expiration manager and attaches a lease ID. The lease ID is used by clients to renew or revoke their secret. If a client allows the lease to expire, the expiration manager automatically revokes the secret.

The core handles logging of requests and responses to the audit broker, which fans the request out to all the configured audit backends. Outside of the request flow, the core performs certain background activity. Lease management is critical, as it allows expired client tokens or secrets to be revoked automatically. Additionally, Vault handles certain partial failure cases by using write ahead logging with a rollback manager. This is managed transparently within the core and is not user visible.

Steps to Install Vault


1) Installing Vault is simple. There are two approaches to installing Vault: downloading a precompiled binary for your system, or installing from source. We will use the precompiled binary format. To install the precompiled binary, download the appropriate package for your system. 

2) You can use the following command as well: wget https://releases.hashicorp.com/vault/0.6.0/vault_0.6.0_linux_amd64.zip

3) Once the zip is downloaded, unzip it into any directory with unzip vault_0.6.0_linux_amd64.zip. The vault binary inside is all that is necessary to run Vault; any additional files aren't required.

Copy the binary to anywhere on your system. If you intend to access it from the command-line, make sure to place it somewhere on your PATH.

4) Add the path of your vault binary to your .bash_profile file in your home directory.

Execute the following to do it: vi ~/.bash_profile

export PATH=$PATH:/home/compose/vault   (if your vault binary is in the /home/compose/vault/ directory)

Alternatively you can also add the unzipped vault binary file in /usr/bin so that you will be able to access vault as a command.
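Either way, the goal is simply to get the binary's directory onto PATH. A sketch, using the /home/compose/vault location from above (adjust VAULT_DIR to wherever you unzipped the binary):

```shell
# Sketch: add the vault binary's directory to PATH for this session
# and persist the change in ~/.bash_profile.
VAULT_DIR=/home/compose/vault   # example location; adjust to your setup

export PATH="$PATH:$VAULT_DIR"
echo "export PATH=\$PATH:$VAULT_DIR" >> ~/.bash_profile

# Confirm the directory is now on PATH.
case ":$PATH:" in
  *":$VAULT_DIR:"*) echo "vault directory is on PATH" ;;
esac
```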

Verifying the Installation

To verify Vault is properly installed, execute the vault binary on your system. You should see help output. If you are executing it from the command line, make sure it is on your PATH or you may get an error about vault not being found.

Starting and configuring vault

1) Vault operates as a client/server application. The Vault server is the only piece of the Vault architecture that interacts with the data storage and backends. All operations done via the Vault CLI interact with the server over a TLS connection.

2) Before starting vault you will need to set the VAULT_ADDR environment variable. To set it, execute the following command: export VAULT_ADDR='http://127.0.0.1:8200'. 8200 is the default port for vault. You can set this environment variable permanently across all sessions by adding the following line to /etc/environment: VAULT_ADDR='http://127.0.0.1:8200'

3) The dev server is a built-in flag to start a pre-configured server that is not very secure but useful for playing with Vault locally. 

 

To start the Vault dev server, run vault server -dev

 

$ vault server -dev
WARNING: Dev mode is enabled!

In this mode, Vault is completely in-memory and unsealed.
Vault is configured to only have a single unseal key. The root
token has already been authenticated with the CLI, so you can
immediately begin using the Vault CLI.

The only step you need to take is to set the following
environment variable since Vault will be talking without TLS:

    export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are reproduced below in case you
want to seal/unseal the Vault or play with authentication.

Unseal Key: 2252546b1a8551e8411502501719c4b3
Root Token: 79bd8011-af5a-f147-557e-c58be4fedf6c

==> Vault server configuration:

         Log Level: info
           Backend: inmem
        Listener 1: tcp (addr: "127.0.0.1:8200", tls: "disabled")

...

 

You should see output similar to that above. Vault does not fork, so it will continue to run in the foreground; to connect to it with later commands, open another shell.

 

As you can see, when you start a dev server, Vault warns you loudly. The dev server stores all its data in-memory (but still encrypted), listens on localhost without TLS, and automatically unseals, showing you the unseal key and root access key. The important thing about the dev server is that it is meant for development only. Do not run the dev server in production. Even if it were run in production, it wouldn't be very useful, since it stores data in-memory and every restart would clear all your secrets. You can practice vault read/write commands here. We won't be using vault in dev mode, as we want our data to be stored permanently.

In the next steps you will see how to start and configure a durable vault server.

4) Now you need to make an .hcl file to hold the configuration of vault.

HCL (HashiCorp Configuration Language) is a configuration language built by HashiCorp. The goal of HCL is to build a structured configuration language that is both human- and machine-friendly for use with command-line tools, but specifically targeted towards DevOps tools, servers, etc. HCL is also fully JSON compatible: JSON can be used as completely valid input to a system expecting HCL. This helps make systems interoperable with other systems. HCL is heavily inspired by libucl, nginx configuration, and similar formats. You can find more details about HCL at https://github.com/hashicorp/hcl

5) You will need to specify a physical backend for vault. There are various options for the physical backend.

The only physical backends actively maintained by HashiCorp are consul, inmem, and file.

  • consul – Store data within Consul. This backend supports HA. It is the most recommended backend for Vault and has been shown to work at high scale under heavy load.
  • etcd – Store data within etcd. This backend supports HA. This is a community-supported backend.
  • zookeeper – Store data within Zookeeper. This backend supports HA. This is a community-supported backend.
  • dynamodb – Store data in a DynamoDB table. This backend supports HA. This is a community-supported backend.
  • s3 – Store data within an S3 bucket. This backend does not support HA. This is a community-supported backend.
  • azure – Store data in an Azure Storage container. This backend does not support HA. This is a community-supported backend.
  • swift – Store data within an OpenStack Swift container. This backend does not support HA. This is a community-supported backend.
  • mysql – Store data within MySQL. This backend does not support HA. This is a community-supported backend.
  • postgresql – Store data within PostgreSQL. This backend does not support HA. This is a community-supported backend.
  • inmem – Store data in-memory. This is only really useful for development and experimentation. Data is lost whenever Vault is restarted.
  • file – Store data on the filesystem using a directory structure. This backend does not support HA.

Each of these backends has different options for configuration. For simplicity we will be using the file backend here. A sample .hcl file can be:

You can save the following file with any name but with a .hcl extension, for example config.hcl, which we will store in the /home/compose/data/ folder.

backend "file" {
  path = "/home/compose/data"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

 

backend "file" specifies that the data produced by vault will be stored on the filesystem.

path specifies the folder in which the files will be stored; it can be any folder.

listener "tcp" specifies that the listener protocol will be TCP.

address specifies which machines will be able to access vault. 127.0.0.1:8200 allows requests only from localhost, while 0.0.0.0:8200 gives access to vault from anywhere.

tls_disable should be 1 if you are not providing any SSL certificates for client authentication.

This is the basic file which you can use.

6) Start your vault server with the following command:

vault server -config=/home/compose/data/config.hcl

Point -config at the .hcl config file you just created. You need to run this command either as root or with sudo. All subsequent commands can be run by either root or the compose user without sudo.

7) If you have started your vault server for the first time, you will need to initialize it. Run the following command:

vault init

This will give output like the following:

 

Unseal Key 1: a33a2812dskfybjgdbgy85a7d6da375bc9bc6c137e65778676f97b3f1482b26401
Unseal Key 2: fa91a7128dfd30f7c500ce1ffwefgtnghjj2871f3519773ada9d04bbcc3620ad02
Unseal Key 3: bb8d5e6d9372c3331044ffe678a4356912035209d6fca68f542f52cf2f3d5e0203
Unseal Key 4: 8c5977a14f8da814fa2f204ac5c2160927cdcf354fhfghfgjbgdbbb0347e4f8b04
Unseal Key 5: cd458edecf025bd02f6b11b3e43341dgdgewtea77756fagh6dc0ba4d775d312405
Initial Root Token: f15db23h-eae6-974f-45b7-se47u52d96ea
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.

Save these someplace safe, as you will need them every time you unseal your vault server to write or access data. By default you will need to enter any three of the five unseal keys to unseal the vault completely.

You can refer to the architecture above to understand how the keys work.

But if you want to change the default and use only one key, you can initialize the vault with vault init -key-shares=1 -key-threshold=1, which will generate only one unseal key.

8) After initializing your vault, the next step is to unseal it; otherwise you won't be able to perform any operations on the vault. Execute the following command:

vault unseal

You will be asked for an unseal key; enter any one of the unseal keys generated during initialization. By default vault needs three keys out of five to be completely unsealed.

See the screenshot below:


You can check that the Unseal Progress is 1, which means your first key was correct. The Unseal Progress count will increase every time you execute unseal and enter a key.

So you will need to repeat the above step a total of three times, entering a different key each time.


After the third key, your vault will be completely unsealed.
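The unseal sequence is therefore a simple loop: run vault unseal three times, pasting a different key each time (the progress values shown are illustrative of what the CLI reports):

```
$ vault unseal    # paste unseal key 1 -> Unseal Progress: 1
$ vault unseal    # paste unseal key 2 -> Unseal Progress: 2
$ vault unseal    # paste unseal key 3 -> Sealed: false
```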

9) You will now need to log in to the vault server to read/write into the vault. Execute the following command to log in.

vault auth <root-token>

where <root-token> is the Initial Root Token given to you when you initialized the vault (it appears after the five keys in the init output). This gives you root access to vault to perform any activities.

Vault Commands

1) vault status to get the status of vault whether it is running or not.

2) vault write secret/hello excited=yes to write a key-value pair into the vault, where secret/hello is the path to access your key, "excited" is your key name, and "yes" is the value. Key and value can be anything.

3) vault read secret/hello to read the value of the key you just wrote.

4) vault write secret/hello excited=very-much to change/update the value of your key

5) vault write secret/hello excited=yes city=Pune to add multiple keys; just separate them with spaces.

6) vault write secret/hello abc=xyz will remove the existing keys (excited and city) and create a new one (abc).

7) vault read -format=json secret/hello return keys and values in json

8) vault delete secret/hello to delete your path.

9) If you don’t want your paths to start with secret/, you can mount another backend such as generic.

Execute vault mount generic. Then you will be able to add paths like generic/hello instead of secret/hello. You can get more info on secret backends at https://www.vaultproject.io/docs/secrets/index.html

10) vault mounts to see the list of mounted backends.

11) vault write generic/hello world=Today to write to the newly mounted backend.

12) vault read generic/hello to read it.

13) vault token-create with this, vault will create a token which you can give to a user so that they can log in to the vault. This adds a new user to your server.

The new user can log in with vault auth <token>. You can renew or revoke the token with vault token-renew <token> or vault token-revoke <token>.

To add a user with username and password and not with token use the following commands.

14) vault auth-enable userpass

vault auth -methods               //This will display the enabled authentication methods; you should see userpass in the list

vault write auth/userpass/users/user1 password=Canopy1! policies=root //To add the user user1 with password Canopy1! and the root policy attached to it

vault auth -method=userpass username=user1 password=Canopy1!    //The user can log in with this

To attach a read-only policy to a user, execute the following commands.

15) Create a file with the extension .hcl. Here I have created read-only.hcl:

path "secret/*" {
  policy = "read"
}

path "auth/token/lookup-self" {
  policy = "read"
}

vault policy-write read-policy read-only.hcl //to add the policy named read-policy from the file read-only.hcl

vault policies  //to display the list of policies

vault policies read-policy //to display the newly created policy

vault write auth/userpass/users/read-user password=Canopy1! policies=read-policy   //to add a user with that policy

Now if the new user logs in, they will only be able to read from the vault, not write to it.

16) vault audit-enable file file_path=/home/compose/data/vault_audit.log //This will write the audit logs to the vault_audit.log file.
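Under the hood, the CLI commands above are thin wrappers over Vault's HTTP API: secrets live under /v1/<path> and the client token is passed in the X-Vault-Token header. The following Ruby sketch only builds the requests without sending them, so it runs offline; the address and token values are placeholders, not real credentials.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Build (but do not send) the HTTP request equivalent of
# `vault write <path> key=value ...`. Vault accepts the key-value
# pairs as a JSON body on /v1/<path>.
def build_write_request(vault_addr, token, path, data)
  uri = URI("#{vault_addr}/v1/#{path}")
  req = Net::HTTP::Post.new(uri)
  req['X-Vault-Token'] = token
  req.body = JSON.generate(data)
  req
end

# Build the HTTP request equivalent of `vault read <path>`.
def build_read_request(vault_addr, token, path)
  uri = URI("#{vault_addr}/v1/#{path}")
  req = Net::HTTP::Get.new(uri)
  req['X-Vault-Token'] = token
  req
end
```

To actually execute a request you would pass it to Net::HTTP.start against your unsealed vault server, e.g. the one listening on port 8200 configured earlier.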

Configure vault and AMP

You can add the following lines in brooklyn.properties to access the vault key-values

brooklyn.external.vault=org.apache.brooklyn.core.config.external.vault.VaultUserPassExternalConfigSupplier
brooklyn.external.vault.username=user1 //Login username you created
brooklyn.external.vault.password=Canopy1!  //Login password

brooklyn.external.vault.endpoint=http://172.16.120.159:8200/   //IP address of your vault server
brooklyn.external.vault.path=secret/CP0000/AWS     //Path to your secrets

brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("vault", "identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("vault", "credential")

This will make AMP read your credentials from the vault.

Backup and recovery

All of the required vault data is present in the folder you specified as the path variable in your config.hcl, here /home/compose/data. So just take a backup of that folder and copy it onto the recovered machine. A prerequisite is that the vault binary is present on that machine.

Backup can be taken via a cronjob, for example:

0 0 * * *  rsync -avz --delete root@vault:/home/compose/data /backup/vault/

Upload artifacts to AWS S3

This document can be used when you want to upload files to AWS S3.

Step-by-step guide

Execute the following steps:

  1. Install ruby with the following commands on the data machine where the backup will be stored:
    gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    sudo \curl -L https://get.rvm.io | bash -s stable --ruby
    source /home/compose/.rvm/scripts/rvm
    rvm list known   #####This command will show available ruby versions
    You can install the version of your choice with the following command:
    rvm install ruby-2.3.0  ###Where 2.3.0 is the ruby version to be installed
    You can install the latest ruby version with:
    rvm install ruby --latest
    Check the version of ruby installed with:
    ruby -v
  2. Check if rubygems is present on your machine: gem -v
  3. If not present, install it with sudo yum install rubygems
  4. Then install aws-sdk: gem install aws-sdk
  5. Add the code below in a file upload-to-s3.rb:
    # Note: Please replace the settings below with your production settings
    # 1. AWS_ACCESS_KEY_ID
    # 2. AWS_SECRET_ACCESS_KEY
    # 3. AWS_REGION
    # 4. 'bucket_name' is the actual bucket name in S3

    require 'aws-sdk'

    def upload(file_name, destination, directory, bucket)
      destination_file_name = destination

      puts "Creating #{destination_file_name} file..."

      # Zip the persisted folder
      `tar -cvzf #{destination_file_name} #{directory}`

      puts "Created #{destination_file_name} file..."

      puts "Uploading #{destination} file to AWS..."
      ENV['AWS_ACCESS_KEY_ID']     = 'Your key here'
      ENV['AWS_SECRET_ACCESS_KEY'] = 'Your secret here'
      ENV['AWS_REGION']            = 'Your region here'

      s3 = Aws::S3::Client.new

      File.open(destination_file_name, 'rb') do |file|
        s3.put_object(bucket: 'bucket_name', key: file_name, body: file)
      end

      puts "Uploaded #{destination} file to AWS..."

      puts "Deleting #{destination} file..."
      `rm -rf #{destination}`
      puts "Deleted #{destination} file..."
    end

    # Remove all existing .tar.gz files from the backup folders
    def clear(nfsLoc)
      nfsLoc.each_pair do |key, value|
        puts "Deleting #{key} archives..."
        Dir["#{key}/*.tar.gz"].each do |path|
          puts path
          `rm -rf #{path}`
        end
        puts "Deleted #{key} archives..."
      end
    end

    def start()
      nfsLoc = { '/backup_dir' => 'bucket_name/data' }

      nfsLoc.each_pair do |key, value|
        puts "#{key} #{value}"

        Dir.glob("#{key}/*") do |dname|
          filename = '%s.%s' % [dname, 'tar.gz']

          file       = File.basename(filename)
          folderName = File.basename(dname)
          bucket     = '%s/%s' % ["#{value}", folderName]

          puts "..... Uploading started for %s file to AWS S3 ....." % [file]
          t = '%s/' % dname
          upload(file, filename, t, bucket)
          puts "..... Uploading finished for %s file to AWS S3 ....." % [file]
        end
      end
    end

    start()

  6. After that, execute the following:
    ruby upload-to-s3.rb
  7. If adding it to a Jenkins job, add the following line in the pre-build script:
    source ~/.rvm/scripts/rvm
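The archive and S3-key naming used by upload-to-s3.rb can be checked in isolation. These helpers are hypothetical extractions that mirror the script's string formatting, so you can verify the naming scheme without touching AWS:

```ruby
# Mirrors the script's logic: each directory under the backup
# location becomes <dir>.tar.gz ...
def archive_name(dname)
  '%s.%s' % [dname, 'tar.gz']
end

# ... and its S3 destination is <bucket_prefix>/<folder basename>.
def s3_key(bucket_prefix, dname)
  '%s/%s' % [bucket_prefix, File.basename(dname)]
end
```

For example, a folder /backup_dir/app is archived as /backup_dir/app.tar.gz and lands under bucket_name/data/app in S3.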