Apache Mesos

Mesos is a private cloud for your data center.

It makes use of the available resources in an existing cluster.

Mesos is less like a cluster manager and more like a kernel.

Mesos is a distributed systems kernel.

Containers will scale up and down as required.

ZooKeeper elects a new master/scheduler.

A timeout is also used to kill the master/scheduler.

If the failed master/scheduler does not reconnect within the threshold period, all of the tasks pending on it are killed.

Docker

If you and a friend each have a VM and want to sync up, you may have to transfer a 20 GB file. With Docker, you run docker diff to check the differences, docker commit to save them, and docker push to upload only the changed layers.
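For example, a rough sketch of that workflow (the container and image names here are made up):

docker diff webapp                     # list files added/changed/deleted in the container
docker commit webapp myuser/webapp:v2  # save the container's changes as a new image layer
docker push myuser/webapp:v2           # push only the new layers to the registry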

docker containers: the actual containers running the applications; each includes the OS, user-added files, and metadata

docker images: templates used to launch docker containers

dockerfile: a file containing instructions that automate image creation

layer: each filesystem that is stacked when Docker mounts the rootfs

Install docker using:

sudo yum install docker

docker commands:

docker pull: pull a pre-built image from public repos

docker run: runs a container in one of three modes: background (detached), foreground, or interactive

docker logs: fetches the logs of a running container

docker commit: save container state

docker images: list of all images

docker diff: changes in files and directories

docker build: builds docker images from Dockerfiles

docker inspect: low level info about containers

docker attach: interact with running container

docker kill: kill a container
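Putting a few of these together, a sample session might look like this (the image and container names are only illustrative):

docker pull ubuntu                          # fetch a pre-built image
docker run -d --name web ubuntu sleep 600   # run a container in the background
docker logs web                             # view its logs
docker inspect web                          # low-level details (IP, mounts, config)
docker kill web                             # stop it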

It is beneficial to separate every service; in a LAMP stack, for example, run PHP, MySQL, and Apache in different containers.

If you must run multiple processes in one container, you can use a supervisor (e.g. supervisord -n) to manage them, as sketched below.
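A minimal supervisord configuration sketch for running two services in one container (the program names and paths are assumptions, not from these notes):

[supervisord]
; keep supervisord in the foreground so the container stays up
nodaemon=true

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

[program:mysqld]
command=/usr/bin/mysqld_safe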

Dockerfile:

automates image creation process

a set of instructions to create an image

syntax: INSTRUCTION argument

How Dockerfiles are different:

Dockerfiles are built in layers.

After every command, a new layer is created.

If there is a mistake in line 29 of a 30-line Dockerfile, you can correct it and rebuild: the first 28 steps are skipped because their layers have already been built, and only the fixed step onward is re-run.

Dockerfile commands:

MAINTAINER <author name>: sets the author name

RUN <command>: executes a command

ADD <src> <dest>: copies files from one location to another, e.g. from the local machine into the image

CMD ["executable","param1","param2"]: provides defaults for an executing container

EXPOSE <port>: the port on which the container listens at runtime

ENTRYPOINT ["executable","param1",...]: configures the container to run as an executable; this is where execution starts

WORKDIR /path: sets the working directory

ENV <key> <value>: sets environment variables

USER <uid>: sets the UID to use when running the image

VOLUME ["/data"]: enables access to a directory from a running container; it is mounted from the host into the container
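A small Dockerfile sketch tying these instructions together (the base image, package, and paths are only illustrative):

# build with: docker build -t myuser/webapp .
FROM ubuntu:14.04
MAINTAINER Jane Doe <jane@example.com>
RUN apt-get update && apt-get install -y apache2
ADD ./site /var/www/html
ENV APACHE_RUN_USER www-data
WORKDIR /var/www/html
EXPOSE 80
VOLUME ["/var/log/apache2"]
CMD ["apache2ctl", "-D", "FOREGROUND"]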

To create an image using a Dockerfile, run docker build.

A layering issue with Dockerfiles: because Docker starts a fresh container for every command, shell state such as environment variables set in one RUN step is lost before the next one.

If you want to run an if statement, you have to write it on a single line (within one RUN instruction).
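For example (a sketch; the file being checked is arbitrary):

RUN if [ -f /etc/app.conf ]; then echo "config found"; else echo "no config"; fi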

You can override the CMD default on the command line when you run the container.

The command given to CMD runs as the container's main process; if that command quits or goes into the background, the container exits.

You can use boot2docker to bootstrap a Docker host instead of using Chef.

OpenStack

 

http://docs.openstack.org/juno/install-guide/install/apt/content/ch_overview.html

Please go through this: https://www.rdoproject.org/install/quickstart/

https://radez.fedorapeople.org/

Java

for(int x[]: nums) {
Notice how x is declared. It is a reference to a one-dimensional array of integers. This is
necessary because each iteration of the for obtains the next array in nums, beginning with
the array specified by nums[0]. The inner for loop then cycles through each of these arrays,
displaying the values of each element.
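A complete sketch of the nested loop being described (the array values below are arbitrary):

// For-each loops over a two-dimensional array.
class ForEach2D {
  public static void main(String args[]) {
    int sum = 0;
    int nums[][] = new int[3][5];
    // give nums some values
    for(int i = 0; i < 3; i++)
      for(int j = 0; j < 5; j++)
        nums[i][j] = (i+1) * (j+1);
    // use for-each to display and sum the values
    for(int x[] : nums) {      // x refers to a one-dimensional array
      for(int y : x) {
        System.out.println("Value is: " + y);
        sum += y;
      }
    }
    System.out.println("Summation: " + sum);
  }
}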

Java supports three jump statements: break, continue, and return. These statements transfer
control to another part of your program. Each is examined here.
NOTE In addition to the jump statements discussed here, Java supports one other way that you can change your program’s flow of execution: through exception handling. Exception handling provides a structured method by which run-time errors can be trapped and handled by your program. It is supported by the keywords try, catch, throw, throws, and finally.

Using break as a Form of Goto
In addition to its uses with the switch statement and loops, the break statement can also be
employed by itself to provide a “civilized” form of the goto statement. Java does not have a
goto statement because it provides a way to branch in an arbitrary and unstructured
manner. This usually makes goto-ridden code hard to understand and hard to maintain. It
also prohibits certain compiler optimizations. There are, however, a few places where the
goto is a valuable and legitimate construct for flow control. For example, the goto can be
useful when you are exiting from a deeply nested set of loops. To handle such situations,
Java defines an expanded form of the break statement. By using this form of break, you can,
for example, break out of one or more blocks of code. These blocks need not be part of a
loop or a switch. They can be any block. Further, you can specify precisely where execution
will resume, because this form of break works with a label. As you will see, break gives you
the benefits of a goto without its problems.

Most often, label is the name of a label that identifies a block of code. This can be a standalone
block of code but it can also be a block that is the target of another statement. When
this form of break executes, control is transferred out of the named block. The labeled
block must enclose the break statement, but it does not need to be the immediately
enclosing block. This means, for example, that you can use a labeled break statement to
exit from a set of nested blocks. But you cannot use break to transfer control out of a block
that does not enclose the break statement.
To name a block, put a label at the start of it. A label is any valid Java identifier followed by
a colon. Once you have labeled a block, you can then use this label as the target of a break
statement. Doing so causes execution to resume at the end of the labeled block. For example,
the following program shows three nested blocks, each with its own label. The break statement
causes execution to jump forward, past the end of the block labeled second, skipping the two
println( ) statements.
// Using break as a civilized form of goto.
class Break {
  public static void main(String args[]) {
    boolean t = true;
    first: {
      second: {
        third: {
          System.out.println("Before the break.");
          if(t) break second; // break out of second block
          System.out.println("This won't execute");
        }
        System.out.println("This won't execute");
      }
      System.out.println("This is after second block.");
    }
  }
}

// Using continue with a label.
class ContinueLabel {
  public static void main(String args[]) {
    outer: for (int i=0; i<10; i++) {
      for(int j=0; j<10; j++) {
        if(j > i) {
          System.out.println();
          continue outer;
        }
        System.out.print(" " + (i * j));
      }
    }
    System.out.println();
  }
}

Box b1 = new Box();
Box b2 = b1;
// …
b1 = null;
Here, b1 has been set to null, but b2 still points to the original object.
REMEMBER When you assign one object reference variable to another object reference variable, you are
not creating a copy of the object, you are only making a copy of the reference.

Sometimes an object will need to perform some action when it is destroyed. For example,
if an object is holding some non-Java resource such as a file handle or character font, then
you might want to make sure these resources are freed before an object is destroyed. To
handle such situations, Java provides a mechanism called finalization. By using finalization,
you can define specific actions that will occur when an object is just about to be reclaimed
by the garbage collector.
To add a finalizer to a class, you simply define the finalize( ) method. The Java run time
calls that method whenever it is about to recycle an object of that class. Inside the finalize( )
method, you will specify those actions that must be performed before an object is destroyed.
The garbage collector runs periodically, checking for objects that are no longer referenced
by any running state or indirectly through other referenced objects. Right before an asset is
freed, the Java run time calls the finalize( ) method on the object.
The finalize( ) method has this general form:
protected void finalize( )
{
// finalization code here
}
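A small sketch of a class that defines finalize( ); the resource handle here is hypothetical:

class ResourceHolder {
  private long handle; // stand-in for a non-Java resource such as a file handle

  ResourceHolder(long h) { handle = h; }

  // called by the Java run time just before the object is reclaimed
  protected void finalize() {
    System.out.println("Releasing handle " + handle);
    // release the non-Java resource here
  }
}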

It is important to understand that finalize( ) is only called just prior to garbage collection.
It is not called when an object goes out-of-scope, for example. This means that you cannot
know when—or even if—finalize( ) will be executed. Therefore, your program should
provide other means of releasing system resources, etc., used by the object. It must not
rely on finalize( ) for normal program operation.
NOTE If you are familiar with C++, then you know that C++ allows you to define a destructor for a class,
which is called when an object goes out-of-scope. Java does not support this idea or provide for
destructors. The finalize( ) method only approximates the function of a destructor. As you get more
experienced with Java, you will see that the need for destructor functions is minimal because of
Java’s garbage collection subsystem.

Cloud Foundry Part 2

http://pivotal.io/platform

gem install vmc

vmc target api.cloudfoundry.com

vmc passwd to change password

vmc login to login

vmc info

vmc push

vmc instances myfoo +10 to add 10 instances

vmc instances myfoo -10 to remove 10 instances

You can use Micro Cloud Foundry, which is very interesting.

Cloud Foundry is very interesting.

 

Cloud Foundry Part 1

Cloud Foundry: automated deployment of applications.

It has BOSH, which communicates with the infrastructure, ensures that the systems are up and running, and handles health monitoring.

Docker takes a memory hit for some accounting, but this can be disabled.

AWS

OpsWorks: Application life cycle management

CodeDeploy:

AWS CodeDeploy is part of a family of AWS deployment services that includes AWS Elastic Beanstalk, AWS CodePipeline, AWS CloudFormation, and AWS OpsWorks. AWS CodeDeploy coordinates application deployments to Amazon EC2 instances, on-premises instances, or both. (On-premises instances are physical devices that are not Amazon EC2 instances.)

An application can contain deployable content like code, web, and configuration files, executables, packages, scripts, and so on. AWS CodeDeploy deploys applications from Amazon S3 buckets and GitHub repositories.

You do not need to make changes to your existing code to use AWS CodeDeploy. You can use AWS CodeDeploy to control the pace of deployment across Amazon EC2 instances and to define the actions to be taken at each stage.

AWS CodeDeploy works with various systems for configuration management, source control, continuous integration, continuous delivery, and continuous deployment. For more information, see Product and Service Integrations.
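As a rough sketch, a deployment could be kicked off from the AWS CLI like this (the application, deployment group, bucket, and key names are placeholders):

aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyFleet \
  --s3-location bucket=my-bucket,key=myapp.zip,bundleType=zip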

ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines:

  • Memcached – a widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
  • Redis – a popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Master / Slave replication and Multi-AZ which can be used to achieve cross AZ redundancy.
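For example, a single-node Redis cluster might be created roughly like this (the cluster identifier and node type are placeholders):

aws elasticache create-cache-cluster \
  --cache-cluster-id my-redis \
  --engine redis \
  --cache-node-type cache.t2.micro \
  --num-cache-nodes 1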

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.
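A minimal template sketch with a single EC2 instance (the AMI ID is a placeholder):

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro"
      }
    }
  }
}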

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate themselves from common failure scenarios.
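For example, launching an instance from the AWS CLI might look roughly like this (the AMI ID and key name are placeholders):

aws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type t2.micro \
  --count 1 \
  --key-name my-key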

Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances in the AWS Cloud.  Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS provides you six familiar database engines to choose from, including Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB.

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic.

Amazon Web Services (AWS) comprises dozens of services, each of which exposes an area of functionality. While the variety of services offers flexibility for how you want to manage your AWS infrastructure, it can be challenging to figure out which services to use and how to provision them.

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk uses highly reliable and scalable services that are available in the AWS Free Usage Tier.
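With the EB command-line interface, a deployment can be sketched roughly as follows (the application and environment names are placeholders):

eb init my-app       # set up the application (prompts for region and platform)
eb create my-env     # provision the environment: capacity, load balancer, monitoring
eb deploy            # upload and deploy the current application version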

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers.

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers.
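For example, storing and retrieving an object from the AWS CLI (the bucket name is a placeholder):

aws s3 mb s3://my-example-bucket                 # create a bucket
aws s3 cp report.pdf s3://my-example-bucket/     # store an object
aws s3 cp s3://my-example-bucket/report.pdf .    # retrieve it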

Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets. It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end users with no minimum usage commitments.

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.

Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.
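As a rough sketch, carving out a VPC with a public and a private subnet from the AWS CLI might look like this (the CIDR blocks and VPC ID are placeholders):

aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.1.0/24   # public subnet for webservers
aws ec2 create-subnet --vpc-id vpc-1234abcd --cidr-block 10.0.2.0/24   # private subnet for databases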

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).

Amazon WorkSpaces is a fully managed, secure desktop computing service which runs on the AWS cloud. Amazon WorkSpaces allows you to easily provision cloud-based virtual desktops and provide your users access to the documents, applications, and resources they need from any supported device, including Windows and Mac computers, Chromebooks, iPads, Kindle Fire tablets, and Android tablets. With just a few clicks in the AWS Management Console, you can deploy high-quality cloud desktops for any number of users at a cost that is competitive with traditional desktops and half the cost of most Virtual Desktop Infrastructure (VDI) solutions.

AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You can use AWS Lambda as follows:

  • As an event-driven compute service where AWS Lambda runs your code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.
  • As a compute service to run your code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs.

AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, and Python).
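As a rough sketch, a function could be created and invoked from the AWS CLI like this (the function name, role ARN, and zip file are placeholders; the runtime name is the original Node.js runtime):

# create the function (placeholder role ARN and zip file)
aws lambda create-function \
  --function-name hello-fn \
  --runtime nodejs \
  --role arn:aws:iam::123456789012:role/lambda-exec \
  --handler index.handler \
  --zip-file fileb://hello.zip

# invoke it; out.json receives the function's result
aws lambda invoke --function-name hello-fn out.json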

Currently, there are three ways of running code in AWS cloud: Amazon EC2, Amazon ECS, and AWS Elastic Beanstalk. EC2 is a full-blown IaaS while ECS is the hosted container environment. Finally, Elastic Beanstalk is a PaaS layer. AWS Lambda forms the fourth service with the capability to execute code in the cloud. But it’s unique in a sense that it is at the intersection of EC2, ECS, and Elastic Beanstalk.

Since Amazon Elastic Beanstalk is a PaaS layer, developers push the code along with the metadata. The metadata contains the details of the AMI, language, framework, and runtime requirements along with connection information of databases or dependencies. Based on the metadata, AWS Elastic Beanstalk launches an appropriate AMI and configures it to run the code. Similar to other PaaS offerings, developers push the code and configuration to Elastic Beanstalk. The configuration or metadata can be simple or comprehensive depending on the application architecture. Lambda is much simpler than PaaS. It just expects the code and its association with a set of IAM roles. Of course, allocating RAM and defining the timeout may be considered as the configuration and metadata but they are much simpler and consistent across any Lambda function. Most of the code running within PaaS is exposed to the outside world as a Web page or REST endpoint. But Lambda functions are inaccessible from the public Internet. They need to be invoked only through the supported data sources.

Thanks to Docker and containers, microservices are becoming popular. AWS Lambda is one of the first microservices environments on the public cloud. Its innovative pricing model based on the number of requests, execution time, and allocated memory makes it very attractive for moving parts of web-scale applications. When AWS adds additional languages like Ruby, Python, and Java and brings support for EC2, CloudTrail, RDS, and other custom event sources, it will become even more attractive.

Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics. You can set up and configure your Amazon Elasticsearch cluster in minutes from the AWS Management Console. Amazon Elasticsearch Service provisions all the resources for your cluster and launches it. The service automatically detects and replaces failed Elasticsearch nodes, reducing the overhead associated with self-managed infrastructure and Elasticsearch software. Amazon Elasticsearch Service allows you to easily scale your cluster via a single API call or a few clicks in the AWS Management Console. With Amazon Elasticsearch Service, you get direct access to the Elasticsearch open-source API so that code and applications you’re already using with your existing Elasticsearch environments will work seamlessly.

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.

The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.