Author: Arun Bagul

OpenStack and Networking Options

Introduction –

Before we start talking about OpenStack and its networking options, let us compare a few clouds and their terminology.

 

                              | OpenStack           | AWS                                       | Azure
Cloud virtual networking      | Project             | VPC (Virtual Private Cloud)               | Azure VNet (Virtual Network)
Identity Mgmt                 | Keystone            | AWS IAM (Identity and Access Management)  | Azure Active Directory
Block Storage (Virtual Disk)  | Cinder              | Amazon Elastic Block Storage (EBS)        | Azure Page Blobs / Premium Storage
Object Storage                | Swift               | Amazon Simple Storage Service (S3)        | Azure Blob Storage
Shared File System            | Manila              | Amazon Elastic File System (EFS)          | Azure File Storage
DNS                           | Designate           | Route 53                                  | Azure DNS
Private IP Address + DHCP/DNS | Private IP          | Private IP                                | DIP (Dynamic IP address)
External (NATed) IP Address   | Floating IP (NATed) | Elastic IP (NATed)                        | VIP (Virtual IP address, NATed) and PIP (instance-level Public IP address, attached directly to the VM)

 

This document will help you understand the different types of network options available in OpenStack and how to use them.

 

LXC – Linux Container

Introduction-

What are the different container technologies?

Container technology has grown rapidly since 2013, and it is easy to get confused between the available container types such as Docker, LXC/LXD and CoreOS rkt.

What’s LXC?
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Benefits of Linux Containers:
1 – Lightweight built-in virtualization
2 – Application/server isolation
3 – Easy deployment and management
4 – No additional licensing

Weaknesses of Linux Containers:
1 – Locked into the host kernel
2 – Supported only on Linux

Current LXC uses the following kernel features to contain processes:
– Kernel namespaces (ipc, uts, mount, pid, network and user)
– AppArmor and SELinux profiles (AppArmor is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles; Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies)
– Seccomp policies
– Chroots (using pivot_root)
– Kernel capabilities
– CGroups (control groups)

LXC is currently made of a few separate components:
– The liblxc library
– A set of standard tools to control the containers
– Distribution container templates
– Several language bindings for the API:
  – Python 3
  – Go
  – Ruby
  – Haskell

The Linux kernel provides the cgroups functionality, which allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need to start any virtual machines, as well as namespace isolation, which allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs and mounted file systems.

LXC containers are often considered something in between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel.
LXC combines the kernel's cgroups with support for isolated namespaces to provide an isolated environment for applications. Docker can also use LXC as one of its execution drivers, enabling image management and providing deployment services.
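
As a quick illustration, a minimal LXC command-line workflow looks like the following (a sketch only, assuming the LXC 2.x packages are installed on an Ubuntu host; the container name "c1" and the download-template options are arbitrary):

# lxc-create -t download -n c1 -- -d ubuntu -r xenial -a amd64
# lxc-start -n c1
# lxc-ls -f
# lxc-attach -n c1
# lxc-stop -n c1
# lxc-destroy -n c1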

What’s LXD?
LXD is a next-generation system container manager. It offers a user experience similar to virtual machines but uses Linux containers instead. LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
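
A minimal LXD workflow looks like this (a sketch, assuming LXD is installed and initialized; note that the "lxc" command here is the LXD client, not the low-level lxc-* tools shown earlier):

# lxd init
# lxc launch ubuntu:16.04 c1
# lxc list
# lxc exec c1 -- /bin/bash
# lxc stop c1
# lxc delete c1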

What is the difference between LXD and Docker?
– Docker focuses on application delivery from development to production, while LXD's focus is system containers.
– LXC has been on the market since 2008, compared to Docker's 2013.
– Earlier versions of Docker were based on LXC; Docker later replaced it with libcontainer.
– Docker specializes in deploying applications.
– LXD specializes in deploying full (Linux) system containers that behave much like virtual machines.
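
As a rough illustration of the different focus (a sketch, assuming both Docker and LXD are installed; image and container names are arbitrary), Docker starts a single application process while LXD boots a full system container:

$ docker run -d --name web nginx
$ lxc launch ubuntu:16.04 websrv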

Applications built using LXC
Anbox – Android in a Box
Anbox is a container-based approach to booting a full Android system on a regular GNU/Linux system such as Ubuntu. In other words, Anbox lets you run Android on your Linux system without the slowness of virtualization.

Reference –
WebSite: https://linuxcontainers.org
Version: LXC 2.1.x
https://linuxcontainers.org/lxd/getting-started-cli
http://www.tothenew.com/blog/lxc-linux-containers

Thank you,
Arun Bagul

How to login to Windows and Run command from Linux

Introduction –
In many cloud applications you need to log in to Windows from a Linux server and run native Windows commands or PowerShell commands to perform certain tasks.
There are two options to do this.

1) Winexe (outdated)-
NOTE – You can use it if it works for you!
Winexe remotely executes commands on Windows NT/2000/XP/2003 systems from GNU/Linux

    eg - winexe --user='<USERNAME>' //<HOSTNAME or IPADDRESS> "ipconfig /all"

2) Ruby or Python and WinRM –

* What is Windows Remote Management (WinRM)?
Windows Remote Management (WinRM) is the Microsoft implementation of the WS-Management Protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that allows hardware and operating systems from different vendors to interoperate. Refer to https://msdn.microsoft.com/en-us/library/aa384426.aspx for more information.

* What is PowerShell Remoting Protocol?
PowerShell Remoting allows you to run commands on a remote system as if you were sitting in front of it. It provides a consistent framework for managing computers across a network.
The Windows PowerShell Remoting Protocol encodes messages before sending them over the Web Services Management Protocol Extensions for Windows.

NOTE – PowerShell Remoting must be enabled on the Windows server (command: Enable-PSRemoting -Force).
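
Before writing any script, it helps to confirm that the WinRM port on the Windows server is reachable from the Linux host (a quick check only; 5985 is the default WinRM HTTP port, 5986 the HTTPS port):

$ nc -zv <HOSTNAME or IPADDRESS> 5985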

* WinRM and the Python library pywinrm (https://github.com/diyan/pywinrm)
* WinRM and the Ruby library winrm

In this blog, we will use the Ruby winrm library and see how we can monitor a Windows service. The installation steps are below.

# yum install ruby-devel
# gem install -r winrm

Scripts-

#!/usr/bin/env ruby

require 'rubygems'
require 'fileutils'
require 'highline/import'
require 'optparse'
require 'winrm'

############
options = {}
args = OptionParser.new do |opts|
  opts.on('-h', '--help', 'Show help') do
     puts opts
     exit
  end

  opts.on('-', /\A-\Z/) do
    puts opts
    exit(1)
  end

  opts.on('-H', '--hostname HOSTNAME', 'Hostname') do |hostname|
    options[:hostname] = hostname
  end

  opts.on('-u', '--username USERNAME', 'Username') do |username|
    options[:username] = username
  end

  opts.on('-s', '--service SERVICE', 'Window Service') do |winsrv|
    options[:winsrv] = winsrv
  end

  opts.on('-f', '--passfile FILE', 'File with Password') do |v|
    options[:passfile] = v
  end
end
args.banner = "\nUsage: #{$PROGRAM_NAME} <options>\n\n"
args.parse!
#puts options

if !options[:hostname].nil? && !options[:username].nil? && !options[:winsrv].nil?
   my_service = options[:winsrv].to_s
   my_user = options[:username].to_s
   my_pass = nil
   if File.exist?(options[:passfile].to_s)
     windata = File.read(options[:passfile].to_s)
     my_pass = windata.chomp.strip
   else
     print 'Enter Password: '
     pass = ask('Password: ') { |q| q.echo = '*' }
     my_pass = pass.chomp
   end

   ## windows code ##
   win_host = "http://#{options[:hostname]}:5985/wsman"
   opts = {
     endpoint: win_host.to_s, user: my_user.to_s, password: my_pass.to_s
   }
   conn = WinRM::Connection.new(opts)

   # powershell
   # ps_cmd = "Get-WMIObject Win32_Service -filter \"name = '#{my_service}'\" "
   #shell = conn.shell(:powershell)
   #output = shell.run(ps_cmd)
   ##puts output.stdout.chomp
   #data = output.stdout.gsub(/\s+|\n+/, ' ')
   #data = data.gsub(/\s+:/, ':')
   #if ( data =~ /ExitCode\s+:\s+0/ ) || ( data =~ /ExitCode:\s+0/ )
   # puts "OK - #{data}"
   # exit 0
   #else
   # puts "CRITICAL - #{data}"
   # exit 2
   #end

  # normal shell
   my_cmd = "sc query #{my_service}"
   shell_cmd = conn.shell(:cmd)
   output1 = shell_cmd.run(my_cmd)
   data1 = output1.stdout.gsub(/\s+|\n+/, ' ')
   data1 = data1.gsub(/\s+:/, ':')
   #puts data1
   if ( data1 =~ /.*STATE\s+:.*RUNNING/ ) || ( data1 =~ /.*STATE:.*RUNNING/ )
     puts "OK - #{data1}"
     exit 0
   else
     puts "CRITICAL - #{data1}"
     exit 2
   end
else
 STDERR.puts args.banner
 STDERR.puts args.summarize
 exit(1)
end

#eof

* How to use the script

root@localhost# ./winservice-monitoring.rb -u 'XXXXX' -H <MYHOST> -f /tmp/password.txt -s xagt
OK - SERVICE_NAME: xagt TYPE: 10 WIN32_OWN_PROCESS STATE: 4 RUNNING (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN) WIN32_EXIT_CODE: 0 (0x0) SERVICE_EXIT_CODE: 0 (0x0) CHECKPOINT: 0x0 WAIT_HINT: 0x0
root@localhost#

root@localhost# ./winservice-monitoring.rb -u 'XXXXX' -H <MYHOST> -f /tmp/password.txt -s xxx
CRITICAL - [SC] EnumQueryServicesStatus:OpenService FAILED 1060: The specified service does not exist as an installed service.
root@localhost#

Reference:
https://github.com/WinRb/WinRM
https://sourceforge.net/projects/winexe
http://blogs.msdn.com/PowerShell

Thank you,
Arun Bagul

Top 5 Infrastructure as Code (IaC) software

Introduction

The world is moving toward hybrid/multi-cloud solutions, and it is important for every enterprise and organization to use different cloud providers effectively. A multi-cloud strategy helps companies save cost, make infrastructure highly available and support business continuity planning (disaster recovery).

Infrastructure as Code (IaC) is an approach in which operations teams automatically manage and provision IT infrastructure through code, rather than using a manual process. Infrastructure as Code is sometimes referred to as programmable infrastructure. IaC is useful because it makes provisioning, deployment and maintenance of IT infrastructure easy and simple in a multi-cloud scenario.

Why IaC?

* Manage infrastructure via source control, thus providing a detailed audit trail for changes.
* Apply testing to infrastructure in the form of unit testing, functional testing, and integration testing.
* Automate Your Deployment and Recovery Processes
* Rollback With the Same Tested Processes
* Don’t Repair, Redeploy
* Focus on Mean Time to Recovery
* Use Testing Tools to Verify Your Infrastructure and Hook Your Tests Into Your Monitoring System
* Documentation, since the code itself will document the state of the machine. This is particularly powerful because it means, for the first time, that infrastructure documentation is always up to date
* Enable collaboration around infrastructure configuration and provisioning, most notably between dev and ops.

Top 5 Infrastructure as Code (IaC) software –

1) Terraform (https://www.terraform.io)
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform provides a flexible abstraction of resources and providers. Terraform is used to create, manage, and manipulate infrastructure resources. Providers generally are an IaaS (e.g. AWS, Google Cloud, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
NOTE – Vagrant is another tool from HashiCorp. Refer article for more information – https://www.vagrantup.com/intro/vs/terraform.html
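
A typical Terraform workflow, run from a directory containing *.tf configuration files, looks like this (a sketch only; provider credentials and resource definitions are not shown):

$ terraform init
$ terraform plan
$ terraform apply
$ terraform destroy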

2) Spinnaker (https://www.spinnaker.io)
Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Deploy across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and Openstack.

3) AWS CloudFormation (https://aws.amazon.com/cloudformation)
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application.
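
With the AWS CLI, a stack can be created, inspected and deleted from a template as sketched below ("my-stack" and "template.json" are placeholder names):

$ aws cloudformation create-stack --stack-name my-stack --template-body file://template.json
$ aws cloudformation describe-stacks --stack-name my-stack
$ aws cloudformation delete-stack --stack-name my-stack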

4) Google’s Cloud Deployment Manager (https://cloud.google.com/deployment-manager)
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using YAML. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments.
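
With the gcloud CLI, a deployment is driven from a YAML configuration as sketched below ("my-deployment" and "config.yaml" are placeholder names):

$ gcloud deployment-manager deployments create my-deployment --config config.yaml
$ gcloud deployment-manager deployments describe my-deployment
$ gcloud deployment-manager deployments delete my-deployment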

5) Azure Automation and Azure Resource Manager (ARM)
Microsoft Azure Automation provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. It saves time and increases the reliability of regular administrative tasks, and can even schedule them to be performed automatically at regular intervals. You can automate processes using runbooks or automate configuration management using Desired State Configuration. ARM templates provide an easy way to create and manage one or more Azure resources consistently and repeatedly, in an orderly and predictable manner, in a resource group.
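
With the Azure CLI, an ARM template is deployed into a resource group roughly as follows (a sketch; the resource group name and template file are placeholders, and the exact sub-commands can differ between Azure CLI versions):

$ az group create --name my-rg --location eastus
$ az group deployment create --resource-group my-rg --template-file template.json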

 

* Docker Compose (https://docs.docker.com/compose/overview)
NOTE- Docker Compose is mainly for Container technology and is different from above tools.

* Orchestrate containers with docker-compose
The powerful concept of microservices is gradually changing the industry. Large monolithic services are slowly giving way to swarms of small and autonomous microservices that work together. The process is accompanied by another market trend: containerization. Together, they help us build systems of unprecedented resilience. Containerization changes not only the architecture of services, but also the structure of environments used to create them. Now, when software is distributed in containers, developers have full freedom to decide what applications they need.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. Compose preserves all volumes used by your services. Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
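
A typical Compose workflow, run from a directory containing a docker-compose.yml file, looks like this (a sketch only; the service definitions themselves are not shown):

$ docker-compose up -d
$ docker-compose ps
$ docker-compose logs -f
$ docker-compose down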

* IaC Tools and DevOps –

When we speak of the DevOps and continuous delivery/integration (CI/CD) toolchain, we're referring to a superset of tools (many with overlapping capabilities) for helping organizations achieve faster and safer deployment velocity. This encompasses a broad range of solutions: provisioning tools, orchestration tools, testing frameworks, configuration management (CM) and automation platforms, and more. Please refer to "DevOps – Comparison of different Configuration Management Software" for a comparison of CM tools. Here we'll compare the different orchestration and management tools for provisioning infrastructure: Terraform, Spinnaker and CloudFormation.

  • CloudFormation is specific to AWS cloud resources, while Terraform and Spinnaker support multiple cloud vendors.
  • Terraform allows you to define and manage your infrastructure, whereas Spinnaker allows you to manage your infrastructure from the perspective of code releases and deployment workflows.
  • Infrastructure lifecycle management is easier with visualizations such as the Terraform graph, which gives developers and operators an easy way to understand dependency ordering.
  • Docker Compose is mainly for container technology such as Docker (https://www.docker.com).
  • Azure Automation is for the Azure cloud and uses PowerShell scripting.

Thank you,
Arun Bagul

DevOps – Comparison of different Configuration Management Software

Introduction-

I have been working in DevOps since 2010. Many colleagues and friends have asked me for a comparison of different configuration management software such as Chef, Puppet, Ansible and Salt.

Choosing the right CM (configuration management) software for managing infrastructure and application deployments is an important and difficult task.

I am attaching a PDF file with a comparison of different CM tools; I hope it will help you.

Document – Devops-Comparison-v3

NOTE: This comparison is purely based on my knowledge and experience. Please feel free to share your updates.

Thank You,

Arun Bagul

Augmented Reality (AR)

I'm sure you might have played the "Pokémon Go" game!

Augmented Reality (AR) is changing the way we view the world or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head.

What is Augmented reality?
Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body. A head-up display (HUD), a transparent display that presents data without requiring users to look away from their usual viewpoints, is a precursor technology to augmented reality.

Where Augmented reality can be used?
Augmented reality has many applications. First used for military, industrial and medical applications, by 2012 its use had expanded into entertainment and other commercial industries. Many startups are also using AR in navigation, education, maintenance and repair, gaming, interior design, and advertising and promotion.

What is Virtual reality (VR)?
Virtual reality (VR) is an artificial recreation of the real world that immerses the user in a computer-generated simulated reality. The technology requires expensive enablers such as headsets, which completely block out the users’ surroundings. However, because of the nature of the technology, the user is left blind to the outside world, leaving the technology at a disadvantage when it comes to everyday use.

What is different between Virtual Reality vs. Augmented Reality?

One of the biggest confusions in the world of augmented reality is the difference between augmented reality and virtual reality.  Both are earning a lot of media attention and are promising tremendous growth.

1) Augmented reality and virtual reality are inverse reflections of one another in what each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital recreation of a real-life setting, while augmented reality delivers virtual elements as an overlay to the real world.

2) VR and AR are not really competing technologies, but rather complementary technologies.
3) VR is more immersive, while AR provides more freedom for the user, and more possibilities for marketers, because it does not need to be a head-mounted display.

What is the Market Size of Augmented reality?
Worldwide revenues for the augmented reality and virtual reality market are projected to approach $14 billion in 2017, according to IDC, the market research firm. But that’s forecast to explode to $143 billion by 2020.
Ref:

http://www.idc.com/getdoc.jsp?containerId=prUS42331217
http://www.marketsandmarkets.com/PressReleases/augmented-reality-virtual-reality.asp

Startup working on Augmented reality?

  • http://www.imaginate.in
    Hyderabad-based Imaginate, a virtual reality (VR) and augmented reality (AR) company, is the developer of NuSpace, a hardware-agnostic collaboration platform that enables people to communicate in an interactive realistic virtual world.
  • http://www.vizexperts.com
    Founded in 2004, VizExperts’ clientele includes Border Security Force (BSF), Defence Research and Development Organisations (DRDO), Indian National Centre for Ocean Information Services (INCOIS) and international conglomerates such as Halliburton, AMD and SGI. The company’s solutions offer improved situational awareness to security forces in a 3D format.
  • http://www.houssup.com

Thank you,
Arun Bagul

Top 5 configuration management software

Why Configuration Management?

DevOps and CM(Configuration Management) are different. DevOps is about collaboration between people, while CM tools are just that: tools for automating the application of configuration states. Like any other tools, they are designed to solve certain problems in certain ways.
Using CM you can make changes very quickly, but you need to validate those changes. In considering which configuration management tool to select, you should also think about which complementary tool(s) you will use to avoid the costly effects of automating the deployment of bugs in your infrastructure-as-code.

The advantages of software configuration management (SCM) are:

   –  It reduces redundant work
   –  It effectively manages simultaneous updates
   –  It avoids configuration related problems
   –  It simplifies coordination between team members
   –  It is helpful in tracking defects

Top five(5) tools for configuration management
    
1) Chef –

Like Puppet, Chef is written in Ruby and uses a Ruby-based DSL. Chef utilizes a master-agent model and, in addition, offers a standalone mode called chef-solo.
Chef is one of the most popular SCM tools. It is basically a framework for infrastructure development. It provides support and packages for framing one's infrastructure as code. It offers libraries for building up an infrastructure, which can be deployed easily. It produces consistent, shareable and reusable components, known as recipes, which are used to automate infrastructure. It comprises the Chef server, workstation, repository and the Chef client.

2) Puppet –
Another commonly used SCM tool is Puppet. It was first introduced in 2005 as an open source configuration management tool and is written in Ruby. This CM system allows defining the state of the IT infrastructure and then automatically enforces the correct state. The user describes the system's resources and their state, either by using Puppet's declarative language or a Ruby DSL. This information is stored in files known as Puppet manifests. Puppet discovers system information through a utility called Facter and compiles it into a system-specific catalogue containing resources and their dependencies, which is applied against the target systems.

It is frequently stated that Puppet is a tool that was built with sysadmins in mind. The learning curve is less imposing due to Puppet being primarily model driven. Getting your head around JSON data structures in Puppet manifests is far less daunting to a sysadmin who has spent their life at the command line than Ruby syntax is.

3) Ansible –

A newer offering on the market, Ansible has nonetheless gained a solid footing in the industry.
Ansible is an open source platform for CM, orchestration and deployment of compute resources. It manages resources with the use of SSH (Paramiko, a Python SSH2 implementation, or standard SSH). Currently the solution consists of two offerings: Ansible and Ansible Tower, the latter featuring the platform's UI and dashboard. Despite being a relatively new player in the arena compared to competitors like Chef or Puppet, it has gained quite a favorable reputation amongst DevOps professionals for its straightforward operations and simple management capabilities.
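
For example, an ad-hoc connectivity check and a playbook run look like this (a sketch; "hosts" and "site.yml" are placeholder inventory and playbook files):

$ ansible all -i hosts -m ping
$ ansible-playbook -i hosts site.yml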

4) SaltStack –

Salt is an open source multitasking CM and remote execution tool. It has a Python-based approach to represent infrastructure as a code philosophy. The remote execution engine is the heart of Salt. It creates a high speed and bi-directional communication network for a group of resources. A Salt state is a fast and flexible CM system on top of the communication system provided by the remote execution engine. It is a CLI-based tool.
It was also developed in response to dissatisfaction with the Puppet/Chef hegemony, especially their slow speed of deployment and the restriction of users to Ruby. Salt sits somewhere between Puppet and Ansible: it supports Python, but also forces users to write all CLI commands in either Python or the custom DSL called PyDSL. It uses a master server and deployed agents called minions to control and communicate with the target servers, but this is implemented using the ZeroMQ messaging library at the transport layer, which makes it considerably faster than Puppet/Chef.
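
For example, from the Salt master the following commands ping all connected minions and apply the configured states (a sketch; targeting '*' matches every minion):

$ salt '*' test.ping
$ salt '*' state.apply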

5) Juju –

Juju is an open source configuration management and orchestration management tool. It enables applications to be deployed, integrated and scaled on various types of cloud platforms faster and more efficiently. It allows users to export and import application architectures and reproduce the same environment at different phases on cloud platforms such as Joyent, Amazon Web Services, Windows Azure, HP Cloud and IBM.

The main mechanism behind Juju is known as Charms. A charm is a collection of YAML configuration files plus hooks, which can be written in any programming language and are executed via the command line.
Clients are available for the Ubuntu, Windows and Mac operating systems. Once you install the client, environments can be bootstrapped on various cloud platforms such as Windows Azure, HP Cloud, Joyent, Amazon Web Services and IBM.

Thank you,
Arun Bagul

Launching AWS instance using Chef server

Overview: 

Chef enables you to automate your infrastructure. It provides a command line tool called knife to help you manage your configurations. Using the knife-ec2 plugin you can manage your Amazon EC2 instances with Chef. knife-ec2 makes it possible to create and bootstrap Amazon EC2 instances in just one line, once you go through a few setup steps. The following steps set up your Chef installation and AWS configuration so that we can easily bootstrap new Amazon EC2 instances with Chef's knife.

Following are the steps needed to launch an AWS instance.

A. Installation and configuration of knife-ec2

  1. Installing the knife-ec2 plugin:

a. If you’re using ChefDK, simply install the Gem:
$ chef gem install knife-ec2

b. If you're using bundler, simply add knife-ec2 to your Gemfile:
gem 'knife-ec2'

c. If you are not using bundler, you can install the gem manually from Rubygems:
$ gem install knife-ec2

In my setup I used ChefDK.

2. Add the Ruby gem path to the PATH variable so that knife-ec2 works with AWS

$  export PATH=/root/.chefdk/gem/ruby/2.1.0/bin:$PATH

 3. Add the AWS credentials of the knife user to the knife configuration file, i.e. ~/.chef/knife.rb.

——————————————————————————–

knife[:aws_access_key_id] = "user_key_ID"
knife[:aws_secret_access_key] = "User_secret_key"

———————————————————————————

 B. Prepare SSH access to Amazon EC2 Instance.
 1. Configure Amazon Security Group
Amazon blocks all incoming traffic to EC2 instances by default, so we need to open the SSH (22) port for knife to access a newly created instance, and the HTTPS (443) port so that the launched instance's chef-client can communicate with the Chef server. Log in to the AWS management console, navigate to EC2 -> Security Groups -> default group, then add rules for Type SSH and HTTPS with Source Anywhere and save the new inbound rules.

2. Generate Key Pair in AWS Console
To enable SSH access to Amazon EC2 instances you need to create a key pair. Amazon will install the public key of that key pair on every EC2 instance. knife will use the private key of that key pair to connect to your Amazon EC2 instances. Store the downloaded private key knife.pem in “~/.ssh/knife.pem” of ec2-user.
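
SSH will refuse to use a private key that is readable by other users, so tighten the permissions after downloading the key:

$ chmod 600 /home/ec2-user/.ssh/knife.pem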

3. Prepare SSH configuration to avoid host key mismatch errors:
Create "/home/ec2-user/.ssh/config" and add the content below:
_________________________________________________________
Host ec2*compute-1.amazonaws.com
StrictHostKeyChecking no
User ec2-user
IdentityFile /home/ec2-user/.ssh/knife.pem
_________________________________________________________

 C. Choose an AMI for your Amazon EC2 instances
We need to choose the right AMI for the region, architecture and root storage. Note down the AMI ID (ami-XXXXXXXX) to use it with knife.

     

D. Create an EC2 instance using Chef knife:
Now it's time to use knife to fire up and configure a new Amazon EC2 instance. Execute the command below to create an instance.
$ sudo knife ec2 server create -r "recipe[dir]" -I ami-0396cd69 -f m3.large -S knife -i /home/ec2-user/.ssh/knife.pem --ssh-user ec2-user --region us-east-1 -Z us-east-1b

Options:
-r is the run_list I want to associate with the newly created node. You can put any roles and recipes you like here
-I is the AMI ID
-f is the Amazon EC2 instance type
-S is the name you gave to the SSH key pair generated in the AWS management console
-i points to the private key file of that SSH key pair as downloaded when the key pair was created in the AWS management console
--ssh-user the official EC2 AMIs use ec2-user as the default user
--region us-east-1 If you want your instances to be deployed in any specific Amazon AWS region, add this parameter and the desired region
-Z us-east-1b is the availability zone within your region

NOTE:
If you do not give the -r (run list) option with the above command, it throws the exception below:

     "EXCEPTIONS : NoMethodError Undefined method 'empty?' for nil:NilClass"

 

E.   Terminate instance and delete the corresponding Chef node
$ knife ec2 server delete i-XXXXXXXX --region us-east-1
$ knife node delete i-XXXXXXXX

(i-XXXXXXXX is the ID of the instance as found in the AWS management console)
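
To cross-check what exists on the AWS side and on the Chef server before or after cleanup, the following listing commands are useful (the ec2 sub-command comes from the knife-ec2 plugin):

$ knife ec2 server list --region us-east-1
$ knife node list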

Ravi co-founder of IndianGNU @ 2nd Check_MK Conference in Munich, Germany

Introduction-

Ravi Bhure, co-founder of IndianGNU, attended the 2nd Check_MK Conference in Munich, Germany in October 2015 as a delegate.

Ravi spoke about "Migration from Solarwinds to Check_MK" at the Check_MK conference in Munich.

For more information on the presentation, please visit https://www.linkedin.com/pulse/open-source-monitering-deadly-combo-migration-from-checkmk-burkule

 

Thank you,

Arun

 

 

Website Functional testing using Browser automation

Introduction-

Website functional testing using browser automation is very important, and better than just monitoring a URL for a 200 OK response. Your application or website URL may be responding fine, but this does not mean that all of its components are working. Testing your web application's functionality, with screenshots, is therefore important for cloud-based products and for customer uptime reporting.

A few years back I wrote the article "Firefox yslow and Showslow for Website testing and performance". Ref URL – http://www.indiangnu.org/2012/firefox-yslow-and-showslow-for-website-testing-and-automation/

The Mechanize library is used for automating interaction with websites and is available for Perl, Python and Ruby.
URL-
http://search.cpan.org/~ether/WWW-Mechanize-1.75/lib/WWW/Mechanize.pm
http://wwwsearch.sourceforge.net/mechanize/
https://pypi.python.org/pypi/mechanize/
http://mechanize.rubyforge.org/

There are several wrappers around "Mechanize" designed for functional testing of web applications: zope.testbrowser and twill.

Top 5 Products for “Website Functional testing using Browser automation”…

1) Selenium Browser Automation (http://www.seleniumhq.org) –
Selenium automates browsers. Selenium is a portable software testing framework for web applications. Selenium provides a record/playback tool for authoring tests without learning a test scripting language (Selenium IDE). Selenium WebDriver is the successor to Selenium RC. Selenium WebDriver accepts commands (sent in Selenese, or via a Client API) and sends them to a browser. This is implemented through a browser-specific browser driver, which sends commands to a browser, and retrieves results. Most browser drivers actually launch and access a browser application (such as Firefox or Internet Explorer); there is also an HtmlUnit browser driver, which simulates a browser using HtmlUnit.

Selenium Grid – Selenium Grid is a server that allows tests to use web browser instances running on remote machines. With Selenium Grid, one server acts as the hub. The hub has a list of servers that provide access to browser instances (WebDriver nodes).
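
Starting a Grid hub and registering a node can be sketched as follows (assuming Java is installed and the selenium-server-standalone jar has been downloaded; the jar file name depends on the version):

$ java -jar selenium-server-standalone.jar -role hub
$ java -jar selenium-server-standalone.jar -role node -hub http://localhost:4444/grid/register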

2) Splinter (https://splinter.readthedocs.org) –
Splinter is an open source tool for testing web applications using Python. It lets you automate browser actions, such as visiting URLs and interacting with their items. It supports multiple webdrivers (Chrome webdriver, Firefox webdriver, PhantomJS webdriver, zope.testbrowser, remote webdriver), supports iframes and alerts, and can execute JavaScript.

3) twill (http://twill.idyll.org) –
twill: a simple scripting language for Web browsing. twill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features.

4) zope.testbrowser (https://pypi.python.org/pypi?:action=display&name=zope.testbrowser) –
zope.testbrowser provides an easy-to-use programmable web browser with special focus on testing. It is used in Zope, but it’s not Zope specific at all. For instance, it can be used to test or otherwise interact with any web site.

5) PAMIE  (http://pamie.sourceforge.net) –
P.A.M.I.E. stands for Python Automated Module For I.E. Pamie's main use is for testing websites by automating the Internet Explorer client using the Pamie scripting language. Simply create a script using the free PythonWin IDE that comes with the win32all extensions, import cPAMIE and use the Pamie Scripting Language (PSL) to write a script that simulates a user navigating a web site. It's simple to use.

There are many products available for website monitoring…

http://www.monitor.us/
https://ghostinspector.com/
https://www.browserstack.com
https://www.alertbot.com/
https://www.pingdom.com/

Thank you,
Arun Bagul