SaltStack issues

  • The function “state.apply” is running as PID

Restart salt-minion with command: service salt-minion restart

  • No matching sls found for ‘init’ in env ‘base’

Add a top.sls file in the directory where your main sls file is present.

Create the file as follows:

base:
  'web*':
    - apache

If the sls is present in a subdirectory, e.g. elasticsearch/init.sls, then write the top.sls as:

base:
  '*':
    - elasticsearch.init
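
Once top.sls is in place, the states can be applied with the usual commands (a quick sketch; the target and sls name follow the examples above):

salt '*' state.apply                  # from the master: apply the highstate defined in top.sls
salt-call state.apply elasticsearch   # from a minion: apply a single sls directly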
  • How to execute saltstack-formulas
    1. Create the file /srv/pillar/top.sls with content:
    base:
      '*':
        - salt
    2. Create the file /srv/pillar/salt.sls with content:
    salt:
      master:
        worker_threads: 2
        fileserver_backend:
          - roots
          - git
        gitfs_remotes:
          - git://github.com/saltstack-formulas/epel-formula.git
          - git://github.com/saltstack-formulas/git-formula.git
          - git://github.com/saltstack-formulas/nano-formula.git
          - git://github.com/saltstack-formulas/rabbitmq-formula.git
          - git://github.com/saltstack-formulas/remi-formula.git
          - git://github.com/saltstack-formulas/vim-formula.git
          - git://github.com/saltstack-formulas/salt-formula.git
          - git://github.com/saltstack-formulas/users-formula.git
        external_auth:
          pam:
            tiger:
              - .*
              - '@runner'
              - '@wheel'
        file_roots:
          base:
            - /srv/salt
        pillar_roots:
          base:
            - /srv/pillar
        halite:
          level: 'debug'
          server: 'gevent'
          host: '0.0.0.0'
          port: '8080'
          cors: False
          tls: True
          certpath: '/etc/pki/tls/certs/localhost.crt'
          keypath: '/etc/pki/tls/certs/localhost.key'
          pempath: '/etc/pki/tls/certs/localhost.pem'
      minion:
        master: localhost
    3. Before you can use a saltstack-formula, you need to make one change to /etc/salt/master and add the following config:
    fileserver_backend:
      - roots
      - git
    gitfs_remotes:
      - git://github.com/saltstack-formulas/salt-formula.git
    4. Restart salt-master (e.g. service salt-master restart)
    5. Run salt-call state.sls salt.master
  • The Salt Master has cached the public key for this node

Delete the existing key on the master:

salt-key -d <minion-id>

Then restart the minion and re-accept the key on the master:

salt-key -a <minion-id>

  • If salt-cloud gives an error like the one below:

Missing dependency: ‘netaddr’. The openstack driver requires ‘netaddr’ to be installed.

Execute the command: yum install python-netaddr

Then verify that your provider is loaded with: salt-cloud --list-providers

  • Remove dead minion keys in Salt

salt-run manage.down removekeys=True
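
Before removing anything, you can check which minions are actually considered down (a quick sketch using the standard manage runner):

salt-run manage.status    # lists minions that are up and down
salt-run manage.down      # lists only the down minions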


Configuring Graphite on CentOS 7

Clone the source code:
git clone https://github.com/graphite-project/graphite-web.git
cd graphite-web
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x
cd ..
git clone https://github.com/graphite-project/whisper.git
cd whisper
git checkout 0.9.x
cd ..

Configure whisper:
pushd whisper
sudo python setup.py install
popd

Configure carbon:
pushd carbon
sudo python setup.py install
popd
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf
popd

storage-schemas.conf contains the schema definitions for Whisper files; this is where you define the data retention policy. By default everything is retained for one day. Note that once Graphite is configured, changing this file won't change the retention of existing Whisper files; use whisper-resize.py for that.
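
As a sketch, a retention policy that keeps 10-second data for 6 hours, 1-minute data for a week and 10-minute data for a year could look like this in storage-schemas.conf (the section name and pattern here are illustrative):

[default]
pattern = .*
retentions = 10s:6h,1m:7d,10m:1y

An existing Whisper file can then be migrated to the new retention with whisper-resize.py, for example (the metric path is hypothetical):

whisper-resize.py /opt/graphite/storage/whisper/servers/web01/load.wsp 10s:6h 1m:7d 10m:1y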

Configure Graphite
pushd graphite-web
python check-dependencies.py
Install the unmet dependencies:
sudo apt-get install python-cairo python-django python-django-tagging python-memcache python-ldap python-txamqp
popd

Configure the webapp:
pushd graphite-web
sudo python setup.py install
popd

Configure Graphite webapp using Apache:
Install apache and mod_wsgi:
sudo apt-get install apache2 libapache2-mod-wsgi

Configure graphite virtual host:
sudo cp graphite-web/examples/example-graphite-vhost.conf /etc/apache2/sites-available/graphite-vhost.conf
sudo ln -s /etc/apache2/sites-available/graphite-vhost.conf /etc/apache2/sites-enabled/graphite-vhost.conf
sudo unlink /etc/apache2/sites-enabled/000-default.conf

Edit /etc/apache2/sites-available/graphite-vhost.conf and add
WSGISocketPrefix /var/run/apache2/wsgi
Edit /etc/apache2/sites-available/graphite-vhost.conf and add
<Directory /opt/graphite/conf/>
Options FollowSymlinks
AllowOverride none
Require all granted
</Directory>
Reload apache configurations:
sudo service apache2 reload

Sync sqlite database for graphite-web:
cp /opt/graphite/webapp/graphite/local_settings.py.example /opt/graphite/webapp/graphite/local_settings.py
cd /opt/graphite/webapp/graphite
You can turn on debugging for graphite-web by adding the following to local_settings.py:
DEBUG=True
sudo python manage.py syncdb
During the db sync you will be asked to create a superuser for Graphite; create one and set its password.
Change the owner of the Graphite storage directory to the user Apache runs as:
sudo chown -R www-data:www-data /opt/graphite/storage/

Configure nginx instead of apache:

Install necessary packages:
sudo apt-get install nginx php5-fpm uwsgi-plugin-python uwsgi
Configure nginx and uwsgi:
cd /opt/graphite/conf/
sudo cp graphite.wsgi.example wsgi.py
Create a file /etc/nginx/sites-available/graphite-vhost.conf and add the following to it:
server {
    listen 8080;
    server_name graphite;
    root /opt/graphite/webapp;
    error_log /opt/graphite/storage/log/webapp/error.log error;
    access_log /opt/graphite/storage/log/webapp/access.log;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}
Enable the nginx server
sudo ln -s /etc/nginx/sites-available/graphite-vhost.conf /etc/nginx/sites-enabled/graphite-vhost.conf
Create a file /etc/uwsgi/apps-available/graphite.ini and add the following to it:
[uwsgi]
processes = 2
socket = 127.0.0.1:3031
gid = www-data
uid = www-data
chdir = /opt/graphite/conf
module = wsgi:application
sudo ln -s /etc/uwsgi/apps-available/graphite.ini /etc/uwsgi/apps-enabled/graphite.ini

Restart services:
sudo /etc/init.d/uwsgi restart
sudo /etc/init.d/nginx restart
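
To check that nginx and uwsgi are wired up correctly, a quick request against the port configured above should return a response from the Graphite webapp (assuming you kept port 8080):

curl -I http://localhost:8080/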

Start Carbon (the data aggregator):
cd /opt/graphite/
./bin/carbon-cache.py start
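
To verify that carbon is accepting data, you can push a test metric over the plaintext protocol on port 2003; the metric name and value below are arbitrary:

echo "test.deploy.metric 42 $(date +%s)" | nc localhost 2003

After a minute or so the metric should appear in the Graphite tree under test.deploy.metric.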

 

Access the graphite home page:
The Graphite homepage is available at http://<ip-of-graphite-host>

Connect graphite to DSP-Core:

To connect your DSP-Core with Graphite, you need to install the carbon agent on your DSP-Core machine so that it can send data to your Graphite host, where it can be displayed in the web UI. Follow these steps to connect your DSP-Core with Graphite.
Install carbon over DSP-Core machine:
git clone https://github.com/graphite-project/carbon.git
cd carbon
git checkout 0.9.x

cd ..
pushd carbon
sudo python setup.py install
popd
pushd /opt/graphite/conf/
sudo cp carbon.conf.example carbon.conf
sudo cp storage-schemas.conf.example storage-schemas.conf

popd
Configure DSP-Core to send data to graphite:
You need to add the following to /data1/deploy/dsp/current/dsp-core/conf/bootstrap.json:
"carbon-uri": ["<IP-of-graphite-host>:2003"]

How to Setup Sandstorm Personal Cloud Server in Linux

Sandstorm is an open source, self-hostable web productivity suite implemented as a security-hardened web app package manager. It is a radically easier way to run personal instances of your web applications in one place: you get your own personal server on which you can install multiple applications through an app-store interface, as easily as you would install apps on a phone. Sandstorm keeps a list so you can find everything you create, its unified access control system covers data from every app, and everything is private to you by default. Find any app you want on the App Market and start using it with a few clicks; every app comes with automatic updates.

Above all, Sandstorm protects you. Each document, chat room, mailbox, notebook, blog, or anything else you create is a "grain" in Sandstorm. Each grain is containerized in its own secure sandbox, from which it cannot talk to the world without express permission. All your grains are private until you share them. The result is that 95% of security vulnerabilities are automatically mitigated.

Prerequisites

To make Sandstorm run on CentOS 7, the system must meet the following requirements.

  • Linux Kernel 3.10+
  • User namespaces disabled

Given these requirements, you can easily install it on RHEL 7 or CentOS 7, as both ship kernel versions newer than 3.10. Likewise, you can install it on Arch Linux, whose kernel is compiled with ‘CONFIG_USER_NS=n’.

Other than the software requirements, 1 GB+ of RAM is enough but 2 GB+ is recommended. In this article we will use a CentOS 7.2 VM with 2 GB RAM, 2 CPUs and 20 GB of disk space.

How to update your system

Once you have access to the VM, create a non-root user with sudo privileges to perform all system-level tasks. In CentOS 7 you can create a new user with sudo rights using the commands below.

$ ssh root@server_ip

# adduser new_user

Set a password for the new user, then use the ‘usermod’ command to add the user to the ‘wheel’ group.

# usermod -aG wheel new_user

Now using the ‘su’ command, switch to the new user account and run the command with sudo to update your system.

# su - new_user

# sudo yum update -y

Once the system is updated with the latest packages and security patches, move on to the next step to download and install Sandstorm on CentOS 7.

How to install Sandstorm

Sandstorm comes with its own installer that automates the setup. To install it on your own Linux machine, just run the ‘curl’ command below.

$ curl https://install.sandstorm.io | bash
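
If you prefer not to pipe the installer straight into bash, downloading it first and inspecting it before running works just as well (a minor variation on the same step):

$ curl https://install.sandstorm.io > install.sh
$ bash install.sh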

The installer then presents two options; choose the one that suits you:

1. A typical install, to use Sandstorm (press enter to accept this default)
2. A development server, for working on Sandstorm itself or localhost-based app development

Let’s choose option ‘1’ and press Enter to go with the default typical installation.

The complete installation setup will go through the following process:

* Install Sandstorm in /opt/sandstorm
* Automatically keep Sandstorm up-to-date
* Configure auto-renewing HTTPS if you use a subdomain of sandcats.io
* Create a service user (sandstorm) that owns Sandstorm’s files
* Configure Sandstorm to start on system boot (with systemd)
* Listen for inbound email on port 25.

To set up Sandstorm, sudo privileges are required; type ‘yes’ to allow sudo access and enter your password when prompted.

Note that Sandstorm’s storage will only be accessible to the group ‘sandstorm’. As a Sandstorm user, you are invited to use a free Internet hostname as a subdomain of sandcats.io, a service operated by the Sandstorm development team. You can choose your desired Sandcats subdomain (alphanumeric, max 20 characters). Type the word ‘none’ to skip this step, or ‘help’ for help.

What *.sandcats.io subdomain would you like? [] linox

Next you need to provide an email address, which helps you recover your domain if you ever lose access.

Enter your email address: [] kashifs@linoxide.com

This registers your domain, and you will be given a URL that users will enter in their browser.

[http://linox.sandcats.io:6080]

Next Sandstorm requires you to set up a wildcard DNS entry pointing at the server. This allows Sandstorm to allocate new hosts on-the-fly for sandboxing purposes. Please enter a DNS hostname containing a ‘*’ which maps to your server. For example, if you have mapped *.foo.example.com to your server, you could enter “*.foo.example.com”. You can also specify that hosts should have a special prefix, like “ss-*.foo.example.com”. Note that if your server’s main page is served over SSL, the wildcard address must support SSL as well, which implies that you must have a wildcard certificate. For local-machine servers, we have mapped *.local.sandstorm.io to 127.0.0.1 for your convenience, so you can use “*.local.sandstorm.io” here. If you are serving off a non-standard port, you must include it here as well.

Wildcard host: [*.linox.sandcats.io:6080] *.linox.sandcats.io

Server installation is now complete. Visit the link shown at the end of the setup to start using it.

http://ksh-cen-7.domain.com:6080/setup/token/36d4f17a3804ba7e19cc159a844f3e45e7a726c5


As mentioned, the URL expires in 15 minutes. You can generate a new setup URL by running the command below.

$ sudo sandstorm admin-token


How to configure Sandstorm Web setup

Once you open the URL, you will see a welcome page to begin the admin settings and to configure your login system.


1) Identity providers

To use Sandstorm, you need to create a user account. Every user account on Sandstorm is backed by an identity provider. You’ll use this identity provider to authenticate as the first administrator of this Sandstorm install.

Configure the identity provider or providers you wish to enable by clicking the ‘configure’ button.


For example, to enable GitHub login on your Sandstorm, click its configure button; a new window opens where you need to provide the GitHub login configuration. Once you have your Client ID and Client secret from your GitHub account, click the ‘Enable’ button to proceed.


2) Organization settings

Sandstorm allows you to define an organization. You can automatically apply some settings to all members of your organization. Users within the organization will automatically be able to log in, install apps, and create grains.


3) Email delivery

Sandstorm needs a way to send email. You can skip this step (unless you’re using email login), but email-related features will be unavailable until you configure email later. Provide your SMTP host, port and credentials.


4) Pre-installed apps

Here Sandstorm installs a set of Productivity Suite apps that are useful for most users. You will be able to configure all pre-installed apps in the Admin Settings panel after setup.


5) Create Admin account

Log in with the Google or GitHub account you configured in the previous step to create your admin account.


That’s it. Now add more users, edit other settings, or start using your awesome personal cloud platform.


Conclusion

By the end of this article you are able to install, configure and use your own personal cloud platform on CentOS 7. Sandstorm aims to tackle the authentication and security problems that Software-as-a-Service poses for many companies through fine-grained containerization. Using Sandstorm is much easier than setting everything up yourself: you just point, click and install, and you have the app running. It takes about 5 seconds to spin up a container, which helps you get your own applications running in no time.

Higher order infrastructure

[Slides from the GOTO 2016 talk "Higher Order Infrastructure"]

Developers need not worry about the underlying infrastructure; all they have to look at is the services running on it and the stack they write.

You do not have to worry about where your code is running, which leads to faster rollouts, faster releases and faster deployments. Even rollbacks become a piece of cake with Docker in your infrastructure.


If there is any change in your service, all you have to do is change the YAML (YAML Ain't Markup Language) file and you will have a completely new service in minutes. Docker was built for scalability and high availability.

It is very easy to load balance your services in Docker and to scale up and down as per your requirements.

The most basic application demoed by Docker is the following cat and dog polling polyglot application.

[Slides showing the cat and dog demo application]

Each part of this application can be written and maintained by a different team, and Docker is what brings them together.

[Slide: components required to get the Docker application up and running]

The above are the components required to get the docker application up and running.


Docker Swarm is a Docker cluster manager: we can run our Docker commands against it and they will be executed on the whole cluster instead of just one machine.

The following is a docker swarm architecture:

[Slide: Docker Swarm architecture]

Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.

One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.

The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to “plug in” to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.

While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard http methods.

The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.

When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.
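
As an illustrative sketch of the registry idea (assuming an etcd v2 endpoint on 127.0.0.1:2379; the key names and addresses are made up), a MySQL service could register itself and a consumer could look it up like this:

# the service registers its connection details
curl -X PUT http://127.0.0.1:2379/v2/keys/services/mysql/host -d value="10.0.0.5"
curl -X PUT http://127.0.0.1:2379/v2/keys/services/mysql/port -d value="3306"

# a consumer (e.g. a load balancer) queries the registry
curl http://127.0.0.1:2379/v2/keys/services/mysql/host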

This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.

What Are Some Common Service Discovery Tools?

Now that we’ve discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.

Some of the most common service discovery tools are:

  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an http API and has a command line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.

Some other projects that expand basic service discovery are:

  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.

[Slides showing the command used to create the Consul machine droplet on DigitalOcean]

The command above is used to create the Consul machine droplet on DigitalOcean.

[Slide showing the command to create the Docker Swarm master that attaches to Consul]

Use the above command to create the Docker Swarm master, which will attach itself to the Consul machine.


In Docker Swarm you can define your scheduling strategies in a very fine-grained way.


To scale up, all you have to type is docker-compose scale <your-service-name>=<number-of-instances> and you are done.
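
For example, assuming a compose service named worker (the service name is hypothetical):

docker-compose up -d           # start the stack
docker-compose scale worker=5  # run five instances of the worker service
docker-compose scale worker=2  # scale back down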


Auto-scaling will need a monitoring service to be plugged in.

Hashicorp Vault

What is Vault?

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.

A modern system requires access to a multitude of secrets: database credentials, API keys for external services, credentials for service-oriented architecture communication, etc. Understanding who is accessing what secrets is already very difficult and platform-specific. Adding on key rolling, secure storage, and detailed audit logs is almost impossible without a custom solution. This is where Vault steps in.

The key features of Vault are:

1) Secure Secret Storage

2) Dynamic Secrets

3) Data Encryption

4) Leasing and Renewal

5) Revocation

 

Terms used in Vault

 

  • Storage Backend – A storage backend is responsible for durable storage of encrypted data. Backends are not trusted by Vault and are only expected to provide durability. The storage backend is configured when starting the Vault server.
  • Barrier – The barrier is cryptographic steel and concrete around the Vault. All data that flows between Vault and the Storage Backend passes through the barrier.
  • Secret Backend – A secret backend is responsible for managing secrets.
  • Audit Backend – An audit backend is responsible for managing audit logs. Every request to Vault and response from Vault goes through the configured audit backends.
  • Credential Backend – A credential backend is used to authenticate users or applications which are connecting to Vault. Once authenticated, the backend returns the list of applicable policies which should be applied. Vault takes an authenticated user and returns a client token that can be used for future requests.
  • Client Token – A client token is conceptually similar to a session cookie on a web site. Once a user authenticates, Vault returns a client token which is used for future requests. The token is used by Vault to verify the identity of the client and to enforce the applicable ACL policies. This token is passed via HTTP headers.
  • Secret – A secret is the term for anything returned by Vault which contains confidential or cryptographic material. Not everything returned by Vault is a secret, for example system configuration, status information, or backend policies are not considered Secrets.
  • Server – Vault depends on a long-running instance which operates as a server. The Vault server provides an API which clients interact with and manages the interaction between all the backends, ACL enforcement, and secret lease revocation. Having a server based architecture decouples clients from the security keys and policies, enables centralized audit logging and simplifies administration for operators.

Vault Architecture

A very high level overview of Vault looks like this:

 

There is a clear separation of components that are inside or outside of the security barrier. Only the storage backend and the HTTP API are outside, all other components are inside the barrier.

 

The storage backend is untrusted and is used to durably store encrypted data. When the Vault server is started, it must be provided with a storage backend so that data is available across restarts. The HTTP API similarly must be started by the Vault server on start so that clients can interact with it.

Once started, the Vault is in a sealed state. Before any operation can be performed on the Vault it must be unsealed. This is done by providing the unseal keys. When the Vault is initialized it generates an encryption key which is used to protect all the data. That key is protected by a master key. By default, Vault uses a technique known as Shamir’s secret sharing algorithm to split the master key into 5 shares, any 3 of which are required to reconstruct the master key.

Keys

The number of shares and the minimum threshold required can both be specified. Shamir’s technique can be disabled, and the master key used directly for unsealing. Once Vault retrieves the encryption key, it is able to decrypt the data in the storage backend, and enters the unsealed state. Once unsealed, Vault loads all of the configured audit, credential and secret backends.

The configuration of those backends must be stored in Vault since they are security sensitive. Only users with the correct permissions should be able to modify them, meaning they cannot be specified outside of the barrier. By storing them in Vault, any changes to them are protected by the ACL system and tracked by audit logs.

After the Vault is unsealed, requests can be processed from the HTTP API to the Core. The core is used to manage the flow of requests through the system, enforce ACLs, and ensure audit logging is done.

When a client first connects to Vault, it needs to authenticate. Vault provides configurable credential backends providing flexibility in the authentication mechanism used. Human friendly mechanisms such as username/password or GitHub might be used for operators, while applications may use public/private keys or tokens to authenticate. An authentication request flows through core and into a credential backend, which determines if the request is valid and returns a list of associated policies.

Policies are just a named ACL rule. For example, the “root” policy is built-in and permits access to all resources. You can create any number of named policies with fine-grained control over paths. Vault operates exclusively in a whitelist mode, meaning that unless access is explicitly granted via a policy, the action is not allowed. Since a user may have multiple policies associated, an action is allowed if any policy permits it. Policies are stored and managed by an internal policy store. This internal store is manipulated through the system backend, which is always mounted at sys/.

Once authentication takes place and a credential backend provides a set of applicable policies, a new client token is generated and managed by the token store. This client token is sent back to the client, and is used to make future requests. This is similar to a cookie sent by a website after a user logs in. The client token may have a lease associated with it depending on the credential backend configuration. This means the client token may need to be periodically renewed to avoid invalidation.

Once authenticated, requests are made providing the client token. The token is used to verify the client is authorized and to load the relevant policies. The policies are used to authorize the client request. The request is then routed to the secret backend, which is processed depending on the type of backend. If the backend returns a secret, the core registers it with the expiration manager and attaches a lease ID. The lease ID is used by clients to renew or revoke their secret. If a client allows the lease to expire, the expiration manager automatically revokes the secret.

The core handles logging of requests and responses to the audit broker, which fans the request out to all the configured audit backends. Outside of the request flow, the core performs certain background activity. Lease management is critical, as it allows expired client tokens or secrets to be revoked automatically. Additionally, Vault handles certain partial failure cases by using write ahead logging with a rollback manager. This is managed transparently within the core and is not user visible.

Steps to Install Vault


1) Installing Vault is simple. There are two approaches to installing Vault: downloading a precompiled binary for your system, or installing from source. We will use the precompiled binary format. To install the precompiled binary, download the appropriate package for your system. 

2) You can use the following command as well: wget https://releases.hashicorp.com/vault/0.6.0/vault_0.6.0_linux_amd64.zip

Unzip by the command unzip vault_0.6.0_linux_amd64.zip

You will have a binary called vault in it. 

3) Once the zip is downloaded, unzip it into any directory. The vault binary inside is all that is necessary to run Vault; any additional files aren't required.

Copy the binary to anywhere on your system. If you intend to access it from the command-line, make sure to place it somewhere on your PATH.

4) Add the path of your vault binary to your .bash_profile file in your home directory.

Execute the following to do so: vi ~/.bash_profile

export PATH=$PATH:/home/compose/vault  (If your vault binary is in /home/compose/vault/ directory)

Alternatively, you can copy the unzipped vault binary to /usr/bin so that vault is available as a command.

Verifying the Installation

To verify Vault is properly installed, execute the vault binary on your system. You should see help output. If you are executing it from the command line, make sure it is on your PATH or you may get an error about vault not being found.

Starting and configuring vault

1) Vault operates as a client/server application. The Vault server is the only piece of the Vault architecture that interacts with the data storage and backends. All operations done via the Vault CLI interact with the server over a TLS connection.

2) Before starting Vault you need to set the VAULT_ADDR environment variable. To set it, execute: export VAULT_ADDR='http://127.0.0.1:8200'. 8200 is the default port for Vault. You can set this environment variable permanently across all sessions by adding the following line to /etc/environment: VAULT_ADDR='http://127.0.0.1:8200'

3) The dev server is a built-in flag to start a pre-configured server that is not very secure but useful for playing with Vault locally. 

 

To start the Vault dev server, run vault server -dev

 

$ vault server -dev
WARNING: Dev mode is enabled!

In this mode, Vault is completely in-memory and unsealed.
Vault is configured to only have a single unseal key. The root
token has already been authenticated with the CLI, so you can
immediately begin using the Vault CLI.

The only step you need to take is to set the following
environment variable since Vault will be talking without TLS:

    export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are reproduced below in case you
want to seal/unseal the Vault or play with authentication.

Unseal Key: 2252546b1a8551e8411502501719c4b3
Root Token: 79bd8011-af5a-f147-557e-c58be4fedf6c

==> Vault server configuration:

         Log Level: info
           Backend: inmem
        Listener 1: tcp (addr: "127.0.0.1:8200", tls: "disabled")

...

 

You should see output similar to that above. Vault does not fork, so it will continue to run in the foreground; to connect to it with later commands, open another shell.

 

As you can see, when you start a dev server, Vault warns you loudly. The dev server stores all its data in-memory (but still encrypted), listens on localhost without TLS, and automatically unseals and shows you the unseal key and root access key. The important thing about the dev server is that it is meant for development only. Do not run the dev server in production. Even if it were run in production, it wouldn't be very useful since it stores data in-memory and every restart would clear all your secrets. You can practise vault read/write commands here. We won't be using Vault in dev mode as we want our data to be stored permanently.

In the next steps you will see how to start and configure a durable Vault server.

4) Now you need to create an hcl file to hold the Vault configuration.

HCL (HashiCorp Configuration Language) is a configuration language built by HashiCorp. The goal of HCL is to build a structured configuration language that is both human and machine friendly for use with command-line tools, but specifically targeted towards DevOps tools, servers, etc. HCL is also fully JSON compatible; that is, JSON can be used as completely valid input to a system expecting HCL. This helps make systems interoperable with other systems. HCL is heavily inspired by libucl, nginx configuration, and similar formats. You can find more details about HCL at https://github.com/hashicorp/hcl

5) You will need to specify a physical backend for Vault. There are several options for the physical backend.

The only physical backends actively maintained by HashiCorp are consul, inmem, and file.

  • consul – Store data within Consul. This backend supports HA. It is the most recommended backend for Vault and has been shown to work at high scale under heavy load.
  • etcd – Store data within etcd. This backend supports HA. This is a community-supported backend.
  • zookeeper – Store data within Zookeeper. This backend supports HA. This is a community-supported backend.
  • dynamodb – Store data in a DynamoDB table. This backend supports HA. This is a community-supported backend.
  • s3 – Store data within an Amazon S3 bucket. This backend does not support HA. This is a community-supported backend.
  • azure – Store data in an Azure Storage container. This backend does not support HA. This is a community-supported backend.
  • swift – Store data within an OpenStack Swift container. This backend does not support HA. This is a community-supported backend.
  • mysql – Store data within MySQL. This backend does not support HA. This is a community-supported backend.
  • postgresql – Store data within PostgreSQL. This backend does not support HA. This is a community-supported backend.
  • inmem – Store data in-memory. This is only really useful for development and experimentation. Data is lost whenever Vault is restarted.
  • file – Store data on the filesystem using a directory structure. This backend does not support HA.

Each of these backends has different configuration options. For simplicity we will be using the file backend here. A sample hcl file can be:

You can save the following file under any name as long as it has the .hcl extension, for example config.hcl, which we will store in the /home/compose/data/ folder.

backend "file" {
  path = "/home/compose/data"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

 

backend "file" specifies that the data produced by Vault will be stored as files.

path specifies the folder in which the files will be stored; it can be any folder.

The listener will be tcp.

address specifies which machines can reach Vault: 127.0.0.1:8200 accepts requests only from localhost, while 0.0.0.0:8200 accepts connections from anywhere.

tls_disable is set to 1 if you are not providing any SSL certificates, so clients connect over plain HTTP.

This is the basic file which you can use.
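
For comparison, a minimal sketch of the same file using the consul backend instead of file (assuming a local Consul agent on its default port 8500) might look like this:

backend "consul" {
  address = "127.0.0.1:8500"
  path = "vault"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}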

6) Start your Vault server with the following command:

vault server -config=/home/compose/data/config.hcl

Point -config at the hcl file you just created. You need to run this command either as root or with sudo; all the following commands can be run by either root or the compose user without sudo.

7) If you have started your Vault server for the first time, you will need to initialize it. Run the following command:

vault init

This will give output like the following:

 

Unseal Key 1: a33a2812dskfybjgdbgy85a7d6da375bc9bc6c137e65778676f97b3f1482b26401
Unseal Key 2: fa91a7128dfd30f7c500ce1ffwefgtnghjj2871f3519773ada9d04bbcc3620ad02
Unseal Key 3: bb8d5e6d9372c3331044ffe678a4356912035209d6fca68f542f52cf2f3d5e0203
Unseal Key 4: 8c5977a14f8da814fa2f204ac5c2160927cdcf354fhfghfgjbgdbbb0347e4f8b04
Unseal Key 5: cd458edecf025bd02f6b11b3e43341dgdgewtea77756fagh6dc0ba4d775d312405
Initial Root Token: f15db23h-eae6-974f-45b7-se47u52d96ea
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.

Save these somewhere safe, as you will need them every time you unseal your Vault server to write or access data. By default you will need to enter any three of the five unseal keys to unseal the vault completely.

You can refer to the architecture description above to understand how the keys work.

But if you want to change the default and use only one key, you can initialize the vault with vault init -key-shares=1 -key-threshold=1, which will generate only one unseal key.

8) After initializing your vault, the next step is to unseal it; otherwise you won't be able to perform any operations on the vault. Execute the following command:

vault unseal

You will be asked for an unseal key; enter any one of the unseal keys generated during initialization. By default Vault needs three keys out of five to be completely unsealed.


The command output shows Unseal Progress: 1, which means your first key was correct. The Unseal Progress count will increase every time you execute unseal and enter a valid key.

You will need to repeat this step three times in total, entering a different key each time.


After the third key, your vault will be completely unsealed.

9) You will now need to log in to the Vault server to read from or write to the vault. Execute the following command to log in:

vault auth <root-token>

where <root-token> is the Initial Root Token printed when you initialized the vault (it appears after the five unseal keys). This gives you root access to Vault to perform any activities.

Vault Commands

1) vault status to get the status of vault whether it is running or not.

2) vault write secret/hello excited=yes to write a key-value pair into the vault. where secret/hello is path to access your key. “excited” is your key-name and “yes” is the value. Key and value can be anything.

3) vault read secret/hello to read the value of the key you just wrote.

4) vault write secret/hello excited=very-much to change/update the value of your key

5) vault write secret/hello excited=yes city=Pune to add multiple keys; just separate them with a space.

6) vault write secret/hello abc=xyz will remove the existing keys (excited and city) and create a new one (abc).

7) vault read -format=json secret/hello returns the keys and values in JSON.

8) vault delete secret/hello to delete your path.

9) If you don't want your paths to start with secret/, you can mount another backend such as generic.

Execute vault mount generic. Then you will be able to add paths like generic/hello instead of secret/hello. You can get more info on secret backends at https://www.vaultproject.io/docs/secrets/index.html

10) vault mounts to see the list of mounts

11) vault write generic/hello world=Today to write to newly mounted secret backend.

12) vault read generic/hello to read it.

13) vault token-create will create a token which you can give to a user so that they can log in to the vault. This adds a new user to your server.

The new user can log in with vault auth <token>. You can renew or revoke the token with vault token-renew <token> or vault token-revoke <token>.

To add a user with a username and password instead of a token, use the following commands.

14) vault auth-enable userpass

vault auth -methods    // This will display the enabled authentication methods; you should see userpass among them.

vault write auth/userpass/users/user1 password=Canopy1! policies=root    // Adds the user user1 with password Canopy1! and the root policy attached to it.

vault auth -method=userpass username=compose password=Canopy1!    // The user can log in with this.

To add a read-only policy to a user, execute the following commands.

15) Create a file with extension .hcl. Here I have created read-only.hcl

path "secret/*" {
  policy = "read"
}

path "auth/token/lookup-self" {
  policy = "read"
}

vault policy-write read-policy read-only.hcl //to add the policy named read-policy from file read-only.hcl

vault policies  //to display list of policies

vault policies read-policy //to display newly created policy

vault write auth/userpass/users/read-user password=Canopy1! policies=read-policy   //to add user with that policy

Now if the new user logs in, they will only be able to read from the vault, not write to it.

16) vault audit-enable file file_path=/home/compose/data/vault_audit.log    // This will write the audit logs to the vault_audit.log file.

Configure vault and AMP

You can add the following lines to brooklyn.properties to access the Vault key-values:

brooklyn.external.vault=org.apache.brooklyn.core.config.external.vault.VaultUserPassExternalConfigSupplier
brooklyn.external.vault.username=user1 //Login username you created
brooklyn.external.vault.password=Canopy1!  //Login password

brooklyn.external.vault.endpoint=http://172.16.120.159:8200/   // IP address of your Vault server
brooklyn.external.vault.path=secret/CP0000/AWS     //Path to your secrets

brooklyn.location.jclouds.aws-ec2.identity=$brooklyn:external("vault", "identity")
brooklyn.location.jclouds.aws-ec2.credential=$brooklyn:external("vault", "credential")

This will make AMP access your creds from vault.
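
For this to work, the path configured above must already contain the identity and credential keys referenced in the two lines above. A hedged example of writing them with the commands from the previous section (the values are placeholders):

vault write secret/CP0000/AWS identity=<your-aws-access-key> credential=<your-aws-secret-key>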

Backup and recovery

All of the required Vault data is present in the folder you set as the path variable in config.hcl, here /home/compose/data. So just back up that folder and copy it onto the recovered machine. A prerequisite is that the vault binary is present on that machine.

A backup can be taken via a cron job, e.g.:

0 0 * * *  rsync -avz --delete root@vault:/home/compose/data /backup/vault/
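
To restore, reversing the same rsync and then unsealing the new server should be enough (the hostname new-vault-host is an assumption):

rsync -avz /backup/vault/data root@new-vault-host:/home/compose/
vault server -config=/home/compose/data/config.hcl
vault unseal    # repeat with three of the original unseal keys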

Upload artifacts to AWS S3

This document can be used when you want to upload files to AWS S3.

Step-by-step guide

Execute the following steps:

  1. Install Ruby with the following commands on the data machine where the backup is stored:
    gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    sudo curl -L https://get.rvm.io | bash -s stable --ruby
    source /home/compose/.rvm/scripts/rvm
    rvm list known   #####This command will show available ruby versions
    You can install the version of your choice by the following command:
    rvm install ruby 2.3.0  ###Where 2.3.0 is ruby version to be installed
    You can install latest ruby version by the following command:
    rvm install ruby --latest
    Check the version of ruby installed by:
    ruby -v
  2. Check if ruby gem is present in your machine: gem -v
  3. If not present, install it with: sudo yum install rubygems
  4. Then install aws-sdk:  gem install aws-sdk
  5. Add the code below to a file upload-to-s3.rb:
    # Note: Please replace the settings below with your production values
    # 1. access_key_id
    # 2. secret_access_key
    # 3. region
    # 4. buckets[name] is the actual bucket name in S3
    require 'aws-sdk'

    # Tars the given directory, uploads the archive to S3, then deletes the local archive.
    def upload(file_name, destination, directory, bucket)
      destination_file_name = destination

      puts "Creating #{destination_file_name} file.... "

      # Zip the cloudsoft persisted folder
      `tar -cvzf #{destination_file_name} #{directory}`

      puts "Created #{destination_file_name} file... "

      puts "uploading #{destination} file to aws..."
      ENV['AWS_ACCESS_KEY_ID'] = 'Your key here'
      ENV['AWS_SECRET_ACCESS_KEY'] = 'Your secret here'
      ENV['AWS_REGION'] = 'Your region here'

      s3 = Aws::S3::Client.new

      File.open(destination_file_name, 'rb') do |file|
        s3.put_object(bucket: 'bucket_name', key: file_name, body: file)
      end
      # @s3 = Aws::S3::Client.new(aws_credentials)
      # @s3_bucket = @s3.buckets[bucket]
      # @s3_bucket.objects[file_name].write(file: destination_file_name)

      puts "uploaded #{destination} file to aws..."

      puts "deleting #{destination} file..."
      `rm -rf #{destination}`
      puts "deleted #{destination} file..."
    end

    # Removes all previously generated .tar.gz archives from the backup folders.
    def clear(nfsLoc)
      nfsLoc.each_pair do |key, value|
        puts "deleting #{key} file..."
        Dir["#{key}/*.tar.gz"].each do |path|
          puts path
          `rm -rf #{path}`
        end

        puts "deleted #{key} file..."
      end
    end

    # Walks each backup folder, archives every entry in it and uploads the archive to S3.
    def start()
      nfsLoc = {'/backup_dir' => 'bucket_name/data'}

      nfsLoc.each_pair do |key, value|
        puts "#{key} #{value}"

        Dir.glob("#{key}/*") do |dname|
          filename = '%s.%s' % [dname, 'tar.gz']

          file = File.basename(filename)
          folderName = File.basename(dname)
          bucket = '%s/%s' % ["#{value}", folderName]

          puts "..... Uploading started for %s file to AWS S3 ....." % [file]
          t = '%s/' % dname
          puts upload(file, filename, t, bucket)

          puts "..... Uploading finished for %s file to AWS S3 ....." % [file]
        end
      end
    end

    start()

  6. After that execute the following:
    ruby upload-to-s3.rb
  7. If adding this to a Jenkins job, add the following line in the pre-build script:
    source ~/.rvm/scripts/rvm
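
Alternatively, to run the upload on a schedule instead of from Jenkins, a cron entry along these lines should work (the paths are assumptions based on the steps above):

0 2 * * * /bin/bash -lc 'source /home/compose/.rvm/scripts/rvm && ruby /home/compose/upload-to-s3.rb' >> /var/log/upload-to-s3.log 2>&1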