Author: Arun Bagul

Packer – How packer has simplified Cloud provisioning

What is Packer  –

Packer (HashiCorp) is an open source tool for creating identical machine images for multiple platforms from a single source configuration. Packer is lightweight, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer can use tools like Chef or Puppet to install software onto the image.

A machine image is a single static unit that contains a pre-configured operating system and installed software, and is used to quickly create new running machines. Machine image formats vary by platform: some examples include AMIs for EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc.

Packer and Continuous Delivery (CD) –

Packer is lightweight, portable, and command-line driven. This makes it the perfect tool to put in the middle of your continuous delivery pipeline. As part of this pipeline, the newly created images can then be launched and tested, verifying the infrastructure changes work. If the tests pass, you can be confident that the image will work when deployed. This brings a new level of stability and testability to infrastructure changes.

Packer template –

A Packer template is a configuration file that defines the image you want to build and how to build it. Packer templates use the HashiCorp Configuration Language (HCL). A Packer configuration is made up of several kinds of blocks, described below…

  • Packer Block – The packer {} block contains Packer settings, including specifying a required Packer version.
  • Source Block – The source {} block configures a specific builder plugin, which is then invoked by a build block. Source blocks use builders and communicators to define what kind of virtualization to use, how to launch the image you want to provision, and how to connect to it.
  • Build Block – The build {} block defines what Packer should do with the virtual machine or Docker container after it launches.

Builders are responsible for creating machines and generating images from them for various platforms. For example, there are separate builders for EC2, VMware, VirtualBox, etc.

Please find the supported builder list here – https://www.packer.io/docs/builders

- virtualbox-iso
- vmware-iso
- hyperv-iso
- docker
  • Provisioner Block – The provisioner {} block defines how Packer should configure the machine image or Docker container after it launches.

Provisioners use builtin and third-party software to install and configure the machine image after booting. Provisioners prepare the system for use, so common use cases for provisioners include:

  • installing packages
  • patching the kernel
  • creating users
  • downloading application code
  • Post-Processors – Post-processors run after the image is built by the builder and provisioned by the provisioner(s). They are optional, and can be used to upload artifacts, re-package the image, and more.
  • Communicators – Communicators are the mechanism Packer uses to upload files, execute scripts, etc., on the machine being created.

Communicators are configured within the builder section. Packer currently supports three kinds of communicators:

  • none – No communicator will be used. If this is set, most provisioners also can’t be used.
  • ssh – An SSH connection will be established to the machine. This is usually the default.
  • winrm – A WinRM connection will be established.
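Putting these blocks together, a minimal template might look like the sketch below. This is an illustration only: the plugin version, AMI IDs, region and package names are placeholders, not values from this post.

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

# Source block: which builder to use and how to reach the machine
source "amazon-ebs" "example" {
  ami_name      = "my-app-ami"              # placeholder image name
  instance_type = "t2.micro"
  region        = "us-east-1"               # placeholder region
  source_ami    = "ami-0123456789abcdef0"   # placeholder base AMI
  ssh_username  = "ubuntu"                  # the ssh communicator is the default
}

# Build block: provisioners run after the machine launches
build {
  sources = ["source.amazon-ebs.example"]

  provisioner "shell" {
    inline = ["sudo apt-get update", "sudo apt-get install -y nginx"]
  }
}
```

Running packer init, packer validate and packer build against a file like this exercises all of the blocks described above.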

Packer Workflow –

The Packer workflow takes a defined source (usually operating system installation media in ISO format or an existing machine), creates a machine on the builder, executes the tasks defined by the provisioners, and generates an output. The same workflow applies whether the builder is Hyper-V, VMware Workstation, or Docker.

How to use Packer –

Initialize your Packer configuration, then format and validate the template…

$ packer init .
$ packer fmt .
$ packer validate .

Build the image with the packer build command.

$ packer build new-os-image-template.hcl

Reference –

Packer software – https://www.packer.io

Thank you,

Arun Bagul

(Personal blog; this is based on my knowledge.)

IaC ~ Terraform and Pulumi

Terraform and Pulumi

Before reading this post, I would recommend reading this one first – https://www.indiangnu.org/2017/top-5-infrastructure-as-code-iac-software/

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Key features of Terraform and Pulumi are…

  1. IaC (Infrastructure as Code) for better infrastructure management
  2. Infrastructure and state management
  3. Support for all major cloud providers (Azure, GCP, AWS) and virtualization infrastructure such as VMware and Hyper-V
  4. Change automation
  5. Help in adopting IaaS, SaaS and PaaS

How to use Terraform –

Terraform has three steps. The first is “terraform init”, the initialization step, in which Terraform checks for and downloads the required providers and plugins. The second is “terraform plan”, in which Terraform creates a plan of the changes to your infrastructure; the user can verify and confirm the plan before starting the third step, “terraform apply”, in which Terraform creates the resources in the provider’s infrastructure (Azure, AWS or GCP). Terraform also saves all created objects in the Terraform state (“.tfstate”) file, and the same configuration can be used to add, delete or update resources in the same workflow.
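As a sketch, a minimal configuration that exercises these three steps might look like the following. The provider, region and AMI ID are illustrative placeholders, not values from this post.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # placeholder region
}

# A single example resource: "terraform plan" shows it will be created,
# "terraform apply" creates it, and its ID is recorded in the .tfstate file.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"
}
```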

To debug Terraform errors, we can enable debug logging with the environment variables mentioned below –

export TF_LOG_PROVIDER=TRACE
export TF_LOG_CORE=TRACE
export TF_LOG_PATH=logs.txt
terraform init
terraform plan
terraform apply

For formatting and configuration validation, use the commands below –

terraform fmt
terraform validate
terraform refresh

HashiCorp Configuration Language (HCL)

In Terraform we can define infrastructure required to deploy as code using HCL.

HCL is a toolkit for creating structured configuration languages that are both human- and machine-friendly, for use with command-line tools. Although intended to be generally useful, it is primarily targeted towards devops tools, servers, etc. HCL has both a native syntax, intended to be pleasant to read and write for humans, and a JSON-based variant that is easier for machines to generate and parse. HCL syntax is designed to be easily read and written by humans, and allows declarative logic to permit its use in more complex applications.
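As a small illustration of the native syntax, an HCL document is built from blocks (a type plus optional labels) and attributes (key = value pairs). The "service" block type and its attributes below are made-up names, used only to show the syntax:

```hcl
# "service", "listen_addr" and "process" are hypothetical names,
# chosen just to demonstrate blocks, labels and attributes.
service "http" "web_proxy" {
  listen_addr = "127.0.0.1:8080"

  process {
    command = ["/usr/local/bin/awesome-app", "server"]
  }
}
```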

What is Pulumi?

Pulumi is a modern infrastructure as code platform that allows you to use familiar programming languages and tools to build, deploy, and manage cloud infrastructure.

Pulumi vs. Terraform

Here is a summary of the key differences between Pulumi and Terraform:

Component        | Pulumi                                                                    | Terraform
Language Support | Python, TypeScript, JavaScript, Go, C#, F#                                | HashiCorp Configuration Language (HCL)
State Management | Managed through Pulumi Service by default; self-managed options available | Self-managed by default; managed SaaS offering available
Provider Support | Native cloud providers with 100% same-day resource coverage, plus Terraform-based providers for additional coverage | Support across multiple IaaS, SaaS and PaaS providers
OSS License      | Apache License 2.0                                                        | Mozilla Public License 2.0

Ref –

Pulumi  – https://www.pulumi.com

Terraform – https://www.terraform.io

HCL –  https://github.com/hashicorp/hcl

Terraform Error – https://learn.hashicorp.com/tutorials/terraform/troubleshooting-workflow

Thank you,

Arun Bagul

(Personal blog; this is based on my knowledge.)

Openstack and networking Options

Introduction –

Before we start talking about OpenStack and networking options, let’s compare a few clouds and their terms…

 

Concept                       | OpenStack           | AWS                                 | Azure
Cloud virtual networking      | Project             | VPC (Virtual Private Cloud)         | Azure VNet (Virtual Network)
Identity Mgmt                 | Keystone            | AWS Key Management Service          | Azure KeyVault
Block Storage (Virtual Disk)  | Cinder              | Amazon Elastic Block Storage (EBS)  | Azure Page Blobs / Premium Storage
Object Storage                | Swift               | Amazon Simple Storage Service (S3)  | Azure Blob Storage
Shared File System            | Manila              | Amazon Elastic File System (EFS)    | Azure File Storage
DNS                           | Designate           | Route 53                            | Azure DNS
Private IP Address + DHCP/DNS | Private IP          | Private IP                          | DIP (Dynamic IP address)
External (NATed IP Address)   | Floating IP (NATed) | Elastic IP (NATed)                  | VIP (Virtual IP address, NATed) and PIP (instance-level Public IP address, directly attached to the VM)

 

This document will help you understand the different types of network options available in OpenStack and how to use them.

 

LXC – Linux Container

Introduction-

What are the different Container technology?

Container technology has taken off since 2013, and it is easy to get confused among the available container types, such as Docker, LXC/LXD and CoreOS rkt (Rocket).

What’s LXC?
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Benefits of Linux Containers:
1 – Lightweight built-in virtualization
2 – Application/server isolation
3 – Easy deployment and management
4 – No additional licensing

Weaknesses of Linux Containers:
1 – Locked into the host kernel
2 – Supported only on Linux

Current LXC uses the following kernel features to contain processes:
– Kernel namespaces (ipc, uts, mount, pid, network and user)
– AppArmor and SELinux profiles
  (AppArmor is a Linux kernel security module that allows the system administrator to restrict programs’ capabilities with per-program profiles. Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies.)
– Seccomp policies
– Chroots (using pivot_root)
– Kernel capabilities
– CGroups (control groups)

LXC is currently made of a few separate components:
– The liblxc library
– A set of standard tools to control the containers
– Distribution container templates
– Several language bindings for the API: python3, Go, Ruby and Haskell

The Linux kernel provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and also namespace isolation functionality that allows complete isolation of an applications’ view of the operating environment, including process trees, networking, user IDs and mounted file systems.

LXC containers are often considered as something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.
LXC combines the kernel’s cgroups and support for isolated namespaces to provide an isolated environment for applications. Docker can also use LXC as one of its execution drivers, enabling image management and providing deployment services.

What’s LXD?
LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. LXD isn’t a rewrite of LXC, in fact it’s building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.

What is the difference between LXD and Docker?
– Docker focuses on application delivery from development to production, while LXD’s focus is system containers.
– LXC has been in the market since 2008, compared to Docker’s 2013.
– Docker was originally based on LXC; later, Docker replaced it with libcontainer.
– Docker specializes in deploying apps.
– LXD specializes in deploying full (Linux) system containers that behave like lightweight virtual machines.

Applications built using LXC:
Anbox – Android in a Box
Anbox is a container-based approach to boot a full Android system on a regular GNU/Linux system like Ubuntu. In other words: Anbox will let you run Android on your Linux system without the slowness of virtualization.

Reference –
WebSite: https://linuxcontainers.org
Version: LXC 2.1.x
https://linuxcontainers.org/lxd/getting-started-cli
http://www.tothenew.com/blog/lxc-linux-containers

Thank you,
Arun Bagul

How to login to Windows and Run command from Linux

Introduction –
In many cloud applications you need to log in to Windows from a Linux server and run native Windows commands or PowerShell commands to perform certain tasks.
There are two options to do this.

1) Winexe (outdated) –
NOTE – You can use it if it works for you!
Winexe remotely executes commands on Windows NT/2000/XP/2003 systems from GNU/Linux.

    eg - winexe --user='<USERNAME>' //<HOSTNAME or IPADDRESS> "ipconfig /all"

2) Ruby or Python and WinRM –

* What is Windows Remote Management (WinRM)?
Windows Remote Management (WinRM) is the Microsoft implementation of WS-Management Protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that allows hardware and operating systems from different vendors to interoperate. Refer to https://msdn.microsoft.com/en-us/library/aa384426.aspx for more information.

* What is PowerShell Remoting Protocol?
PowerShell Remoting allows you to run commands on a remote system as if you were sitting in front of it. It provides a consistent framework for managing computers across a network.
The Windows PowerShell Remoting Protocol encodes messages before sending them over the Web Services Management Protocol Extensions for Windows.

NOTE – PowerShell Remoting must be enabled on the Windows server. (Command: Enable-PSRemoting -Force)

* WinRM and the Python library pywinrm (https://github.com/diyan/pywinrm)
   * WinRM and the Ruby library winrm

In this blog, we will use the Ruby winrm library and see how we can monitor a Windows service. Installation steps:

# yum install ruby-devel
# gem install -r winrm

Scripts-

#!/usr/bin/env /usr/bin/ruby

require 'rubygems'
require 'fileutils'
require 'highline/import'
require 'optparse'
require 'winrm'

############
options = {}
args = OptionParser.new do |opts|
  opts.on('-h', '--help', 'Show help') do
     puts opts
     exit
  end

  opts.on('-', /\A-\Z/) do
    puts opts
    exit(1)
  end

  opts.on('-H', '--hostname HOSTNAME', 'Hostname') do |hostname|
    options[:hostname] = hostname
  end

  opts.on('-u', '--username USERNAME', 'Username') do |username|
    options[:username] = username
  end

  opts.on('-s', '--service SERVICE', 'Window Service') do |winsrv|
    options[:winsrv] = winsrv
  end

  opts.on('-f', '--passfile FILE', 'File with Password') do |v|
    options[:passfile] = v
  end
end
args.banner = "\nUsage: #{$PROGRAM_NAME} <options>\n\n"
args.parse!
#puts options

if !options[:hostname].nil? && !options[:username].nil? && !options[:winsrv].nil?
   my_service = options[:winsrv].to_s
   my_user = options[:username].to_s
   my_pass = nil
   if File.exist?(options[:passfile].to_s)
     windata = File.read(options[:passfile].to_s)
     my_pass = windata.chomp.strip
   else
     print 'Enter Password: '
     pass = ask('Password: ') { |q| q.echo = '*' }
     my_pass = pass.chomp
   end

   ## windows code ##
   # build the WinRM endpoint URL directly (File.join would insert a stray "/")
   win_host = "http://#{options[:hostname]}:5985/wsman"
   opts = {
     endpoint: win_host.to_s, user: my_user.to_s, password: my_pass.to_s
   }
   conn = WinRM::Connection.new(opts)

   # powershell
   # ps_cmd = "Get-WMIObject Win32_Service -filter \"name = '#{my_service}'\" "
   #shell = conn.shell(:powershell)
   #output = shell.run(ps_cmd)
   ##puts output.stdout.chomp
   #data = output.stdout.gsub(/\s+|\n+/, ' ')
   #data = data.gsub(/\s+:/, ':')
   #if ( data =~ /ExitCode\s+:\s+0/ ) || ( data =~ /ExitCode:\s+0/ )
   # puts "OK - #{data}"
   # exit 0
   #else
   # puts "CRITICAL - #{data}"
   # exit 2
   #end

  # normal shell
   my_cmd = "sc query #{my_service}"
   shell_cmd = conn.shell(:cmd)
   output1 = shell_cmd.run(my_cmd)
   data1 = output1.stdout.gsub(/\s+|\n+/, ' ')
   data1 = data1.gsub(/\s+:/, ':')
   #puts data1
   if ( data1 =~ /.*STATE\s+:.*RUNNING/ ) || ( data1 =~ /.*STATE:.*RUNNING/ )
     puts "OK - #{data1}"
     exit 0
   else
     puts "CRITICAL - #{data1}"
     exit 2
   end
else
 STDERR.puts args.banner
 STDERR.puts args.summarize
 exit(1)
end

#eof

* How to use the script

root@localhost# ./winservice-monitoring.rb -u 'XXXXX' -H <MYHOST> -f /tmp/password.txt -s xagt
OK - SERVICE_NAME: xagt TYPE: 10 WIN32_OWN_PROCESS STATE: 4 RUNNING (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN) WIN32_EXIT_CODE: 0 (0x0) SERVICE_EXIT_CODE: 0 (0x0) CHECKPOINT: 0x0 WAIT_HINT: 0x0
root@localhost #

root@localhost # ./winservice-monitoring.rb -u 'XXXXX' -H <MYHOST> -f /tmp/password.txt -s xxx
CRITICAL - [SC] EnumQueryServicesStatus:OpenService FAILED 1060: The specified service does not exist as an installed service.
root@localhost #

Reference:
https://github.com/WinRb/WinRM
https://sourceforge.net/projects/winexe
http://blogs.msdn.com/PowerShell

Thank you,
Arun Bagul

Top 5 Infrastructure as Code (IaC) software

Introduction

The world is moving toward hybrid/multi-cloud solutions, and it is important for every enterprise/organization to use different cloud providers effectively. A multi-cloud strategy helps companies save cost, make infrastructure highly available, and build a business continuity plan (disaster recovery).

Infrastructure as Code (IaC) is a type of IT infrastructure that operations teams can automatically manage and provision through code, rather than through a manual process. Infrastructure as Code is sometimes referred to as programmable infrastructure. IaC is useful because it makes provisioning, deployment and maintenance of IT infrastructure easy and simple in a multi-cloud scenario.

Why IaC?

* Manage infrastructure via source control, thus providing a detailed audit trail for changes.
* Apply testing to infrastructure in the form of unit testing, functional testing, and integration testing.
* Automate Your Deployment and Recovery Processes
* Rollback With the Same Tested Processes
* Don’t Repair, Redeploy
* Focus on Mean Time to Recovery
* Use Testing Tools to Verify Your Infrastructure and Hook Your Tests Into Your Monitoring System
* Documentation, since the code itself will document the state of the machine. This is particularly powerful because it means, for the first time, that infrastructure documentation is always up to date
* Enable collaboration around infrastructure configuration and provisioning, most notably between dev and ops.

Top 5 Infrastructure as Code (IaC) software –

1) Terraform (https://www.terraform.io)
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform provides a flexible abstraction of resources and providers. Terraform is used to create, manage, and manipulate infrastructure resources. Providers generally are an IaaS (e.g. AWS, Google Cloud, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
NOTE – Vagrant is another tool from HashiCorp. Refer article for more information – https://www.vagrantup.com/intro/vs/terraform.html

2) Spinnaker (https://www.spinnaker.io)
Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Deploy across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and Openstack.

3) AWS CloudFormation (https://aws.amazon.com/cloudformation)
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application.

4) Google’s Cloud Deployment Manager (https://cloud.google.com/deployment-manager)
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using yaml. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments.

5) Azure Automation and Azure Resource Manager (ARM)
Microsoft Azure Automation provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. It saves time and increases the reliability of regular administrative tasks and even schedules them to be automatically performed at regular intervals. You can automate processes using runbooks or automate configuration management using Desired State Configuration. ARM Templates provides an easy way to create and manage one or more Azure resources consistently and repeatedly in an orderly and predictable manner in a resource group.

 

* Docker Compose (https://docs.docker.com/compose/overview)
NOTE – Docker Compose is mainly for container technology and is different from the above tools.

* Orchestrate containers with docker-compose
The powerful concept of microservices is gradually changing the industry. Large monolithic services are slowly giving way to swarms of small and autonomous microservices that work together. The process is accompanied by another market trend: containerization. Together, they help us build systems of unprecedented resilience. Containerization changes not only the architecture of services, but also the structure of environments used to create them. Now, when software is distributed in containers, developers have full freedom to decide what applications they need.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. Compose preserves all volumes used by your services. Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
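As a sketch, a compose file for a small two-service application might look like the following. The service names, ports and images are illustrative, not from this post.

```yaml
# docker-compose.yml: a hypothetical web app backed by Redis
version: "3"
services:
  web:
    build: .              # build the app image from the local Dockerfile
    ports:
      - "8000:8000"       # host:container port mapping
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
    volumes:
      - redis-data:/data  # named volume preserved across restarts
volumes:
  redis-data:
```

A single docker-compose up command then creates and starts both services, and docker-compose down stops them while keeping the named volume.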

* IaC Tools and DevOps –

When we speak of the DevOps and continuous delivery/integration (CI/CD) toolchain, we’re referring to a superset of tools, many with overlapping capabilities, for helping organizations achieve faster and safer deployment velocity. This encompasses a broad range of solutions: provisioning tools, orchestration tools, testing frameworks, configuration management (CM) and automation platforms, and more. Please refer to DevOps – Comparison of different Configuration Management Software for comparisons between CM tools. Here we’ll compare different orchestration and management tools for provisioning infrastructure: Terraform and Spinnaker/CloudFormation.

  • CloudFormation is specific to AWS cloud resources, while Terraform and Spinnaker support multiple cloud vendors.
  • Terraform allows you to define and manage your infrastructure, while Spinnaker allows you to manage your infrastructure from the perspective of code releases and deployment workflows.
  • Infrastructure lifecycle management is easier with visualizations such as the Terraform graph, which gives developers and operators an easy way to comprehend dependency ordering.
  • Docker Compose is mainly for container technology like Docker (https://www.docker.com).
  • Azure Automation is for the Azure cloud, using PowerShell scripting.

Thank you,
Arun Bagul

DevOps – Comparison of different Configuration Management Software

Introduction-

I have been working in DevOps since 2010. Many colleagues and friends have asked me for a comparison of different configuration management software like Chef, Puppet, or Ansible/Salt.

It is a very important and difficult task to choose the right CM (configuration management) software for managing infrastructure and application deployments.

I’m attaching a PDF file with a comparison of different CM tools; I hope it will help you.

Document – Devops-Comparison-v3

NOTE: This comparison is purely based on my knowledge and experience. Please feel free to share your updates.

Thank You,

Arun Bagul

Augmented Reality (AR)

I’m sure you might have played “pokemon-go” game!

Augmented Reality (AR) is changing the way we view the world or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head.

What is Augmented reality?
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
Various technologies are used in augmented reality rendering, including optical projection systems, monitors, handheld devices, and display systems worn on the human body. A head-up display (HUD), a transparent display that presents data without requiring users to look away from their usual viewpoints, is a precursor technology to augmented reality.

Where Augmented reality can be used?
Augmented reality has many applications. First used for military, industrial, and medical applications, by 2012 its use had expanded into entertainment and other commercial industries. Many startups are also using AR in navigation, education, maintenance and repair, gaming, interior design, and advertising and promotion.

What is Virtual reality (VR)?
Virtual reality (VR) is an artificial recreation of the real world that immerses the user in a computer-generated simulated reality. The technology requires expensive enablers such as headsets, which completely block out the users’ surroundings. However, because of the nature of the technology, the user is left blind to the outside world, leaving the technology at a disadvantage when it comes to everyday use.

What is different between Virtual Reality vs. Augmented Reality?

One of the biggest confusions in the world of augmented reality is the difference between augmented reality and virtual reality.  Both are earning a lot of media attention and are promising tremendous growth.

1) Augmented reality and virtual reality are inverse reflections of one another in what each technology seeks to accomplish and deliver for the user. Virtual reality offers a digital recreation of a real-life setting, while augmented reality delivers virtual elements as an overlay on the real world.

2) VR and AR are not really competing technologies, but rather complementary technologies.
3) VR is more immersive, while AR provides more freedom for the user, and more possibilities for marketers, because it does not require a head-mounted display.

What is the Market Size of Augmented reality?
Worldwide revenues for the augmented reality and virtual reality market are projected to approach $14 billion in 2017, according to IDC, the market research firm. But that’s forecast to explode to $143 billion by 2020.
Ref:

http://www.idc.com/getdoc.jsp?containerId=prUS42331217
http://www.marketsandmarkets.com/PressReleases/augmented-reality-virtual-reality.asp

Startup working on Augmented reality?

  • http://www.imaginate.in
    Hyderabad-based Imaginate, a virtual reality (VR) and augmented reality (AR) company, is the developer of NuSpace, a hardware-agnostic collaboration platform that enables people to communicate in an interactive realistic virtual world.
  • http://www.vizexperts.com
    Founded in 2004, VizExperts’ clientele includes Border Security Force (BSF), Defence Research and Development Organisations (DRDO), Indian National Centre for Ocean Information Services (INCOIS) and international conglomerates such as Halliburton, AMD and SGI. The company’s solutions offer improved situational awareness to security forces in a 3D format.
  • http://www.houssup.com

    Thank you,
    Arun Bagul

Top 5 configuration management software

Why Configuration Management?

DevOps and CM(Configuration Management) are different. DevOps is about collaboration between people, while CM tools are just that: tools for automating the application of configuration states. Like any other tools, they are designed to solve certain problems in certain ways.
Using CM you can make changes very quickly, but you need to validate those changes. In considering which configuration management tool to select, you should also think about which complementary tool(s) you will use to avoid the costly effects of automating the deployment of bugs in your infrastructure-as-code.

The advantages of software configuration management (SCM) are:

   –  It reduces redundant work
   –  It effectively manages simultaneous updates
   –  It avoids configuration related problems
   –  It simplifies coordination between team members
   –  It is helpful in tracking defects

Top five(5) tools for configuration management
    
1) Chef –

Like Puppet, Chef is also written in Ruby, and it uses a Ruby-based DSL. Chef utilizes a master-agent model, in addition to a standalone mode called chef-solo.
Chef is one of the most popular SCM tools. It is basically a framework for infrastructure development. It provides support and packages for framing one’s infrastructure as code. It offers libraries for building up an infrastructure that can be deployed easily. It produces consistent, shareable and reusable components, known as recipes, which are used to automate infrastructure. It comprises the Chef server, workstation, repository and the Chef client.

2) Puppet –
Another commonly used SCM tool is Puppet. It was first introduced in 2005 as an open source configuration management tool, and is written in Ruby. This CM system allows defining the state of the IT infrastructure, and then automatically enforces the correct state. The user describes the system’s resources and their state, either by using Puppet’s declarative language or a Ruby DSL. This information is stored in files known as Puppet manifests. Puppet discovers system information through a utility called Facter and compiles it into a system-specific catalogue containing resources and their dependencies, which is applied against the target systems.

It is frequently stated that Puppet is a tool that was built with sysadmins in mind. The learning curve is less imposing due to Puppet being primarily model driven. Getting your head around JSON data structures in Puppet manifests is far less daunting to a sysadmin who has spent their life at the command line than Ruby syntax is.

3) Ansible –

A newer offering on the market, Ansible has nonetheless gained a solid footing in the industry.
Ansible is an open source platform for CM, orchestration and deployment of compute resources. It manages resources using SSH (Paramiko, a Python SSH2 implementation, or standard SSH). Currently their solution consists of two offerings: Ansible and Ansible Tower, the latter featuring the platform’s UI and dashboard. Despite being a relatively new player compared to competitors like Chef or Puppet, it has gained quite a favorable reputation among DevOps professionals for its straightforward operations and simple management capabilities.

4) SaltStack –

Salt is an open source multitasking CM and remote execution tool. It takes a Python-based approach to the infrastructure-as-code philosophy. The remote execution engine is the heart of Salt: it creates a high-speed, bi-directional communication network for a group of resources. A Salt state is a fast and flexible CM system built on top of the communication system provided by the remote execution engine. It is a CLI-based tool.
It was also developed in response to dissatisfaction with the Puppet/Chef hegemony, especially their slow speed of deployment and their restriction of users to Ruby. Salt sits roughly halfway between Puppet and Ansible: it supports Python, but also requires users to write CLI commands in either Python or its custom DSL, PyDSL. Like Puppet and Chef, it uses a master server and deployed agents, called minions, to control and communicate with the target servers, but this is implemented using the ZeroMQ messaging library at the transport layer, which makes it considerably faster than Puppet or Chef.
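A Salt state is typically expressed in YAML. A minimal sketch, with a hypothetical state and package name, looks like this:

```yaml
# Hypothetical state file, e.g. /srv/salt/vim.sls:
# ensure the vim package is installed on targeted minions
vim:
  pkg.installed: []
```

States are pushed through the remote execution engine, e.g. `salt '*' state.apply vim` (the target glob and state name here are examples, not from this post).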

5) Juju –

Juju is an open source configuration management and orchestration tool. It enables applications to be deployed, integrated and scaled on various types of cloud platforms faster and more efficiently. It allows users to export and import application architectures and reproduce the same environment at different phases on cloud platforms such as Joyent, Amazon Web Services, Windows Azure, HP Cloud and IBM.

The main mechanism behind Juju is the Charm: a collection of YAML configuration files together with hooks, which can be written in any programming language that can be executed from the command line.
Clients are available for Ubuntu, Windows and Mac operating systems. Once you install the client, environments can be bootstrapped on various cloud platforms such as Windows Azure, HP Cloud, Joyent, Amazon Web Services and IBM.
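For orientation, a typical Juju session from the client looks roughly like the following shell sketch (the charm names are examples, and exact subcommand syntax varies between Juju versions):

```shell
# Bootstrap a controller on a configured cloud, deploy two charms,
# relate them, and expose the front end (illustrative only)
juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress
```

The charm encapsulates the installation and integration logic, so the operator works at the level of applications and relations rather than individual configuration files.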

Thank you,
Arun Bagul

Launching AWS instance using Chef server

Launching AWS instance using Chef server

Overview: 

Chef enables you to automate your infrastructure. It provides a command-line tool called knife to help you manage your configurations, and with the knife-ec2 plugin you can manage your Amazon EC2 instances with Chef. knife-ec2 makes it possible to create and bootstrap Amazon EC2 instances in just one line, once you go through a few setup steps. The following sections set up your Chef installation and AWS configuration so that you can easily bootstrap new Amazon EC2 instances with Chef's knife.

The following steps are needed to launch an AWS instance.

A. Installation and configuration of the knife-ec2 plugin

  1.  Installing the knife-ec2 plugin:

a. If you're using ChefDK, simply install the gem:
$ chef gem install knife-ec2

b. If you're using Bundler, add knife-ec2 to your Gemfile (this is a Gemfile entry, not a shell command):
gem 'knife-ec2'

c. If you are not using Bundler, you can install the gem manually from RubyGems:
$ gem install knife-ec2

In my setup I used ChefDK.

2.  Add ruby’s gem path to PATH variable to work knife-ec2 with AWS

$  export PATH=/root/.chefdk/gem/ruby/2.1.0/bin:$PATH

 3. Add the AWS credentials of the knife user to the knife configuration file, i.e. ~/.chef/knife.rb:

——————————————————————————–

knife[:aws_access_key_id] = "user_key_ID"
knife[:aws_secret_access_key] = "User_secret_key"

———————————————————————————

 B. Prepare SSH access to Amazon EC2 Instance.
 1. Configure Amazon Security Group
Amazon blocks all incoming traffic to EC2 instances by default, so we need to open the SSH port (22) for knife to access a newly created instance, and also the HTTPS port (443) so that the launched instance's chef-client can communicate with the Chef server. Log in to the AWS management console, navigate to EC2 Services > Compute > Security Groups > default group, then add rules for Type SSH and HTTPS with Source Anywhere and save the new inbound rules.
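If you prefer the command line to the console, the same inbound rules can be added with the AWS CLI. This is an alternative sketch, assuming the aws tool is installed and configured with suitable credentials; "default" is the security group used above:

```shell
# Open SSH (22) and HTTPS (443) to the world on the "default" security group
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
```

Note that 0.0.0.0/0 matches the console's "Anywhere" source; in production you would normally restrict this to a known CIDR range.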

2. Generate Key Pair in AWS Console
To enable SSH access to Amazon EC2 instances you need to create a key pair. Amazon will install the public key of that key pair on every EC2 instance, and knife will use the private key to connect to your Amazon EC2 instances. Store the downloaded private key knife.pem in ~/.ssh/knife.pem of the ec2-user.
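The key pair can also be generated from the command line instead of the console. A sketch with the AWS CLI, assuming it is configured and using the key name "knife" from this walkthrough:

```shell
# Create the key pair and save the private key where knife expects it
aws ec2 create-key-pair --key-name knife \
    --query 'KeyMaterial' --output text > ~/.ssh/knife.pem
chmod 600 ~/.ssh/knife.pem
```

The chmod step matters: SSH refuses to use a private key file that is readable by other users.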

3. Prepare SSH configuration to avoid host key mismatch errors:
Create "/home/ec2-user/.ssh/config" and add the content below:
_________________________________________________________
Host ec2*compute-1.amazonaws.com
StrictHostKeyChecking no
User ec2-user
IdentityFile /home/ec2-user/.ssh/knife.pem
_________________________________________________________

 C. Choose an AMI for your Amazon EC2 instances
We need to choose the right AMI for region, architecture and root storage. Note down the AMI ID (ami-XXXXXXXX) to use it with knife.

D. Create an EC2 instance using Chef knife:
Now, it’s time to use knife to fire up and configure a new Amazon EC2 instance. Execute below command to create instance.
$ sudo knife ec2 server create -r "recipe[dir]" -I ami-0396cd69 -f m3.large -S knife -i /home/ec2-user/.ssh/knife.pem --ssh-user ec2-user --region us-east-1 -Z us-east-1b

Options:
-r is the run_list to associate with the newly created node; you can put any roles and recipes you like here
-I is the AMI ID
-f is the Amazon EC2 instance type
-S is the name you gave to the SSH key pair generated in the AWS management console
-i points to the private key file of that SSH key pair, as downloaded when the key pair was created in the AWS management console
--ssh-user is the login user; the official EC2 AMIs use ec2-user as the default user
--region us-east-1 deploys the instance in a specific Amazon AWS region
-Z us-east-1b is the availability zone within your region

NOTE:
If you do not give -r, i.e. the run list, with the above command, it throws the exception below:

     “EXCEPTIONS : NoMethodError Undefined method ‘empty?’ for nil:NilClass”


E.   Terminate instance and delete the corresponding Chef node
$ knife ec2 server delete i-XXXXXXXX --region us-east-1
$ knife node delete i-XXXXXXXX

(i-XXXXXXXX is the ID of the instance as found in the AWS management console)