Category: Virtualization


LXC – Linux Container



What are the different container technologies?

Container technology took off after 2013, and there is a high potential for confusion among the available container types like Docker, LXC/LXD and CoreOS rkt (Rocket).

What’s LXC?
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Benefits of Linux Containers:
1 – Lightweight built-in virtualization
2 – Application/server isolation
3 – Easy deployment and management
4 – No additional licensing

Weaknesses of Linux Containers:
1 – Locked into the host kernel
2 – Supported only on Linux

Current LXC uses the following kernel features to contain processes:
– Kernel namespaces (ipc, uts, mount, pid, network and user)
– AppArmor and SELinux profiles
(AppArmor is a Linux kernel security module that allows the system administrator to restrict programs’ capabilities with per-program profiles. Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies.)
– Seccomp policies
– Chroots (using pivot_root)
– Kernel capabilities
– CGroups (control groups)

LXC is currently made of a few separate components:
– The liblxc library
– A set of standard tools to control the containers
– Distribution container templates
– Several language bindings for the API:
– python3
– Go
– ruby
– Haskell

The Linux kernel provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and also namespace isolation functionality that allows complete isolation of an application’s view of the operating environment, including process trees, networking, user IDs and mounted file systems.
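On a Linux host you can inspect these building blocks directly. A minimal sketch (assumes a Linux system with /proc mounted; output varies by kernel):

```shell
# Each entry under /proc/self/ns is a namespace the current process belongs to
# (ipc, uts, mnt, pid, net, user, ...).
ls /proc/self/ns

# /proc/cgroups lists the cgroup controllers the kernel exposes (cpu, memory, ...).
cat /proc/cgroups
```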

LXC containers are often considered something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel.
LXC combines the kernel’s cgroups and support for isolated namespaces to provide an isolated environment for applications. Docker can also use LXC as one of its execution drivers, enabling image management and providing deployment services.

What’s LXD?
LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. LXD isn’t a rewrite of LXC, in fact it’s building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.

What is the difference between LXD and Docker?
– Docker focuses on application delivery from development to production, while LXD’s focus is system containers.
– LXC has been on the market since 2008, compared to Docker’s 2013 debut.
– Early Docker was based on LXC; later, Docker replaced it with libcontainer.
– Docker specializes in deploying apps.
– LXD specializes in deploying full (Linux) systems, giving a virtual-machine-like experience.

Applications built using LXC –
Anbox – Android in a Box
Anbox is a container-based approach to boot a full Android system on a regular GNU/Linux system like Ubuntu. In other words: Anbox will let you run Android on your Linux system without the slowness of virtualization.

Reference –
Version: LXC 2.1.x

Thank you,
Arun Bagul

Top 5 Infrastructure as Code (IaC) software



The world is moving toward hybrid/multi-cloud solutions, and it is important for every enterprise/organization to use different cloud providers effectively. A multi-cloud strategy helps companies save cost, make infrastructure highly available, and support business continuity planning (disaster recovery).

Infrastructure as Code (IaC) is a type of IT infrastructure that operations teams can automatically manage and provision through code, rather than using a manual process. Infrastructure as Code is sometimes referred to as programmable infrastructure. IaC is useful because it makes provisioning, deployment and maintenance of IT infrastructure easy and simple in a multi-cloud scenario!

Why IaC?

* Manage infrastructure via source control, thus providing a detailed audit trail for changes.
* Apply testing to infrastructure in the form of unit testing, functional testing, and integration testing.
* Automate Your Deployment and Recovery Processes
* Rollback With the Same Tested Processes
* Don’t Repair, Redeploy
* Focus on Mean Time to Recovery
* Use Testing Tools to Verify Your Infrastructure and Hook Your Tests Into Your Monitoring System
* Documentation, since the code itself will document the state of the machine. This is particularly powerful because it means, for the first time, that infrastructure documentation is always up to date
* Enable collaboration around infrastructure configuration and provisioning, most notably between dev and ops.

Top 5 Infrastructure as Code (IaC) Software –

1) Terraform (
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform provides a flexible abstraction of resources and providers. Terraform is used to create, manage, and manipulate infrastructure resources. Providers generally are an IaaS (e.g. AWS, Google Cloud, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
NOTE – Vagrant is another tool from HashiCorp. Refer article for more information –
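As a rough sketch of what a Terraform configuration looks like (the provider, AMI id and resource names below are purely illustrative; the file is only written here, not applied):

```shell
# Write a minimal, hypothetical Terraform configuration (HCL) to main.tf.
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"   # illustrative AMI id
  instance_type = "t2.micro"
}
EOF

# Typical workflow (not run here; needs Terraform and AWS credentials):
#   terraform init && terraform plan && terraform apply
grep -c resource main.tf
```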

2) Spinnaker (
Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It deploys across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and OpenStack.

3) AWS CloudFormation (
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application.
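A minimal, hypothetical CloudFormation template sketch (a single S3 bucket; the stack name is illustrative and the deploy command is shown but not run):

```shell
# Write a tiny CloudFormation template to template.yaml.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative stack with a single S3 bucket.
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

# Typical deployment (not run here; needs the AWS CLI and credentials):
#   aws cloudformation create-stack --stack-name demo --template-body file://template.yaml
```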

4) Google’s Cloud Deployment Manager (
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using YAML. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms, such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments.

5) Azure Automation and Azure Resource Manager(ARM)
Microsoft Azure Automation provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. It saves time and increases the reliability of regular administrative tasks, and can even schedule them to be performed automatically at regular intervals. You can automate processes using runbooks or automate configuration management using Desired State Configuration. ARM templates provide an easy way to create and manage one or more Azure resources consistently and repeatedly, in an orderly and predictable manner, in a resource group.


* Docker Compose (
NOTE – Docker Compose is mainly for container technology and is different from the above tools.

* Orchestrate containers with docker-compose
The powerful concept of microservices is gradually changing the industry. Large monolithic services are slowly giving way to swarms of small and autonomous microservices that work together. The process is accompanied by another market trend: containerization. Together, they help us build systems of unprecedented resilience. Containerization changes not only the architecture of services, but also the structure of environments used to create them. Now, when software is distributed in containers, developers have full freedom to decide what applications they need.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. Compose preserves all volumes used by your services. Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
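For example, a two-service stack can be sketched like this (service names and images are illustrative; actually bringing it up requires Docker and docker-compose):

```shell
# Write a minimal docker-compose.yml describing a web server plus a cache.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine      # illustrative image
    ports:
      - "8080:80"
  cache:
    image: redis:alpine      # illustrative image
EOF

# Start / stop the whole stack (not run here):
#   docker-compose up -d
#   docker-compose down
```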

* IaC Tools and DevOps –

When we speak of the DevOps and continuous delivery/integration (CI/CD) toolchain, we’re referring to a superset of tools—many with overlapping capabilities—for helping organizations achieve faster and safer deployment velocity. This encompasses a broad range of solutions: provisioning tools, orchestration tools, testing frameworks, configuration management (CM) and automation platforms, and more. Please refer to DevOps – Comparison of different Configuration Management Software for comparisons between CM tools. Here we’ll compare different orchestration and management tools for provisioning infrastructure: Terraform, Spinnaker and CloudFormation.

  • CloudFormation is specific to AWS cloud resources, while Terraform and Spinnaker support multiple cloud vendors.
  • Terraform allows you to define and manage your infrastructure, while Spinnaker allows you to manage your infrastructure from the perspective of code releases and deployment workflows.
  • Infrastructure lifecycle management is easier with visualizations such as Terraform graph, which gives developers and operators an easy way to comprehend dependency ordering.
  • Docker Compose is mainly for container technology like Docker (
  • Azure Automation is for the Azure cloud, using PowerShell scripting

Thank you,
Arun Bagul

Selecting virtual SCSI Controllers for Disks (VMware VM)


To access virtual disks, a virtual machine uses virtual SCSI controllers. Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides in the VMFS datastore, NFS-based datastore, or on a raw disk. The choice of SCSI controller does not affect whether your virtual disk is an IDE or SCSI disk.

The following virtual SCSI controllers are commonly used…

A) BusLogic
– This was one of the first emulated vSCSI controllers available in the VMware platform.
– No longer updated; considered legacy or for backward compatibility…

B) LSI Logic Parallel
– This was the other emulated vSCSI controller available originally in the VMware platform.
– Most operating systems had a driver that supported a queue depth of 32 and it became a very common choice, if not the default
– Default for Windows 2003/Vista and Linux

C) LSI Logic SAS
– This is an evolution of the parallel driver to support a newer, future-facing standard.
– It began to grow in popularity when Microsoft required its use for MSCS (Microsoft Cluster Service) within Windows 2008 or newer.
– Default for Windows 2008 or newer
– Linux guests SCSI disk hotplug works better with LSI Logic SAS
– Personally, I use this one.

D) VMware Paravirtual (aka PVSCSI)
– This vSCSI controller is virtualization-aware and was designed to support very high throughput with minimal processing cost; it is therefore the most efficient driver.
– In the past, there were issues if it was used with virtual machines that didn’t do a lot of IOPS, but that was resolved in vSphere 4.1.

* PVSCSI and LSI Logic Parallel/SAS are essentially the same when it comes to overall performance capability.
* A total of 4 vSCSI adapters are supported per virtual machine. To get the best performance, one should also distribute virtual disks across as many vSCSI adapters as possible.
* Why not IDE? – An IDE adapter completes one command at a time, while SCSI can queue commands, so the SCSI adapter is better optimized for parallel performance. Also, a maximum of 4 IDE devices per VM is allowed (including the CD-ROM), while SCSI allows 60 devices (15 per adapter across 4 adapters).

Thank You,

Choosing a NIC (Network Adapter) for VM in Vmware ESXi environment



The NIC types available for a VM depend on the VM hardware version and the guest OS (operating system). When you configure a virtual machine, you can add network adapters (NICs) and specify the adapter type…

The following NIC types widely used:

E1000 –
Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

E1000e – This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the “e1000e” vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere.

VMXNET2 (Enhanced) –
Like the original VMXNET, this adapter is optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
VMXNET 2 is based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.


VMXNET3 –
Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
– VMXNET 3 is supported only for virtual machines version 7 and later.
– Supports 10 Gbps, i.e. 10-gigabit networking
– Jumbo frames supported

I would suggest using “VMXNET3”.
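The chosen adapter type ends up as a line in the VM’s .vmx file. A toy fragment for illustration (the file name and network name are hypothetical):

```shell
# Write an illustrative .vmx fragment showing where the NIC type is recorded.
cat > demo.vmx <<'EOF'
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
EOF
grep virtualDev demo.vmx
```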

Thank you,

VMware Virtual Machine(VM) Configuration File types



When you create a VM (virtual machine) on a VMware-based virtualization platform, VMware creates a few VM configuration files in a folder named after the VM in the datastore (local storage or NFS/SAN). Please find below the table which describes the file types in VMware…


File         Name                     Description                                                      Format
.vmx         vmname.vmx               Virtual machine configuration file.                              ASCII
.vmxf        vmname.vmxf              Additional virtual machine configuration files, available,
                                      for example, with teamed virtual machines.                       ASCII
.vmdk        vmname.vmdk              Virtual disk descriptor file.                                    ASCII
-flat.vmdk   vmname-flat.vmdk         Preallocated virtual disk in binary format.                      Binary
.vswp        vmname.vswp              Swap file.
.nvram       vmname.nvram or nvram    Non-volatile RAM. Stores virtual machine BIOS information.
.vmss        vmname.vmss              Virtual machine suspend file.
.log         vmware.log               Current virtual machine log file.                                ASCII
-#.log       vmware-#.log             Old virtual machine log files. # is a number starting with 1.    ASCII


Thank you,
Arun Bagul

VMware ESXi Commands



Sometimes we need to log in to the ESXi server to check hardware/networking and performance/stats. Sharing a few important ESXi commands…

a)  ESXi NIC List

~ # esxcfg-nics  --list
Name    PCI           Driver      Link Speed    Duplex MAC Address       MTU    Description
vmnic0  0000:01:00.00 tg3   Up   1000Mbps  Full  XX:10:55:DD:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic1  0000:01:00.01 tg3   Up   1000Mbps  Full  XX:10:55:67:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic2  0000:02:00.00 tg3   Up   1000Mbps  Full  XX:10:55:65:CC:YY 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic3  0000:02:00.01 tg3   Up   1000Mbps  Full  XX:10:55:23:CC:00 1500   Broadcom BCM5720 Gigabit Ethernet
~ #
~ # esxcli network ip interface  list
Name: vmk0
MAC Address: 24:b6:fd:XX:XX:YY
Enabled: true
Portset: vSwitch0
Portgroup: Management Network
VDS Name: N/A
VDS Port: N/A
VDS Connection: -1
MTU: 1500
TSO MSS: 65535
Port ID: 33554438

b)  ESXi Storage/iSCSI stats

~# esxcli storage san iscsi stats get
Adapter: vmhba34
Total Number of Sessions: 20
Total Number of Connections: 20
IO Data Sent: 2647449088
IO Data Received: 107921345640
Command PDUs: 15509582
Read Command PDUs: 12353055
Write Command PDUs: 3156497
Bidirectional Command PDUs: 0
No-data Command PDUs: 30
Response PDUs: 15509582
R2T PDUs: 0
Data-in PDUs: 0
Data-out PDUs: 0
Task Mgmt Request PDUs: 0
Task Mgmt Response PDUs: 0
Login Request PDUs: 20
Login Response PDUs: 20
Text Request PDUs: 0
Text Response PDUs: 0
Logout Request PDUs: 0
Logout Response PDUs: 0
NOP-Out PDUs: 1767885
NOP-In PDUs: 1767885
Async Event PDUs: 0
Reject PDUs: 0
Digest Errors: 0
Timeouts: 0
No Tx Buf Count: 0
No Rx Data Count: 232170
~ #


c)  ESXi  ping-

Check connectivity to storage, jumbo frames, etc.

~ # vmkping  -c 5 -s 8972
PING ( 8972 data bytes
8980 bytes from icmp_seq=0 ttl=64 time=2.104 ms
8980 bytes from icmp_seq=1 ttl=64 time=0.693 ms
8980 bytes from icmp_seq=2 ttl=64 time=0.541 ms

d) VMKernel  VMNIC and Check connectivity with VMKernel Port

~ # esxcfg-vmknic  --list
Interface  Port Group/DVPort   IP Family IP Address     Netmask       Broadcast       MAC Address     MTU   TSO MSS Enabled Type
vmk0       Management Network  IPv4  XX:10:55:23:CC:00 1500  65535  true  STATIC
vmk1       iSCSI Kernel 1      IPv4  XX:10:XX:23:CC:YY 1500  65535  true  STATIC
vmk2       iSCSI Kernel 2      IPv4  00:50:56:XX:65:ZZ 1500  65535  true  STATIC     

~ # vmkping  -c 5 -s 8972 -I vmk1
PING ( 8972 data bytes
8980 bytes from icmp_seq=0 ttl=64 time=0.747 ms
8980 bytes from icmp_seq=1 ttl=64 time=0.481 ms
8980 bytes from icmp_seq=2 ttl=64 time=0.523 ms
8980 bytes from icmp_seq=3 ttl=64 time=0.615 ms
8980 bytes from icmp_seq=4 ttl=64 time=0.504 ms

--- ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.481/0.574/0.747 ms
~ #

e) vSwitch list

~ # esxcfg-vswitch --list
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         47          128               1500    vmnic0,vmnic1
PortGroup Name        VLAN ID  Used Ports  Uplinks
NFS                   188      0           vmnic0,vmnic1
DMZ 192.168.X.0/24    1103     13          vmnic0,vmnic1
DMZ 192.168.Y.0/22    1102     22          vmnic0,vmnic1
DMZ 192.168.X.0/24    1101     8           vmnic0,vmnic1
Management Network    1102     1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         128         3           128               1500    vmnic2
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 1        0        1           vmnic2

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch2         128         3           128               1500    vmnic3
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 2        0        1           vmnic3
~ #

Thank You,

Log rotation in VMWare ESXi



Last month, while working on an ESXi 5.1 disconnect issue, we analyzed ESXi logs for the past 3-4 months. Just sharing information related to the ESXi log rotation policy…

/var/log # esxcli system syslog config get
Default Rotation Size: 1024
Default Rotations: 8
Log Output: /scratch/log
Log To Unique Subdirectory: false
Remote Host: <none>
/var/log # cd /scratch/log
/vmfs/volumes/507a011b-acd45a80-9aed-e0db5501b632/log #
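If the defaults need changing, the same esxcli namespace can set them. A sketch (the values are illustrative; guarded so it only runs on a host that actually has esxcli, i.e. ESXi):

```shell
# Bump rotation count/size, then reload syslog; no-op where esxcli is absent.
if command -v esxcli >/dev/null 2>&1; then
    esxcli system syslog config set --default-rotate=20 --default-size=2048
    esxcli system syslog reload
fi
msg="syslog sketch done"
echo "$msg"
```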


Thank you,
Arun Bagul

What is Virtualization and Types of Virtualization


What is Virtualization and Type of Virtualization?

In general there are different types of virtualization, like memory, CPU, storage, hardware and network virtualization. However, here we are going to talk about OS virtualization only.

1] What is Hypervisor –

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources.
The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine.
The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

2] Type of Virtualization –

a) Para-virtualization –
-Guest OS has to be modified
-VM does not simulate hardware
-Uses a special API that a modified guest OS must use
-Hypercalls are trapped by the hypervisor and serviced
-e.g. Xen, VMware ESX Server

b) Full-virtualization (Native) –
VM simulates “enough” hardware to allow an unmodified guest OS to be run in isolation, on the same hardware type and CPU/memory as the host. e.g. VMware, IBM VM family, Parallels.
* Full virtualization with Xen Hypervisor requires:
i) Intel processor with the Intel VT extensions, or
ii) AMD processor with the AMD-V extensions, or
iii) an Intel Itanium processor
* Full virtualization with KVM hypervisor requires:
i) Intel processor with the Intel VT and the Intel 64 extensions, or
ii) AMD processor with the AMD-V and the AMD64 extensions
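Whether the CPU offers these extensions can be checked from a Linux shell; the Intel VT flag shows up as `vmx` and AMD-V as `svm` in /proc/cpuinfo (a sketch; assumes a Linux host):

```shell
# Hardware virtualization support appears as CPU flags in /proc/cpuinfo.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    verdict="hardware virtualization supported"
else
    verdict="no hardware virtualization (flag not exposed)"
fi
echo "$verdict"
```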

c) Emulation –
-VM emulates/simulates complete hardware
-Unmodified guest OS for a different PC can be run
-VirtualPC for Mac, QEMU

d) OS-level virtualization –
-OS allows multiple secure virtual servers to be run
-Guest OS is the same as the host OS, but appears isolated; apps see an isolated OS. e.g. Solaris Containers, BSD Jails, Linux-VServer, OpenVZ and LXC (LinuX Containers)

e) Application-level virtualization –
-The application is given its own copy of components that are not shared (e.g. its own registry files, global objects); the virtual environment (VE) prevents conflicts. e.g. JVM

Thank you,
Arun Bagul

Karesansui – Xen and kernel-based Virtual Machine (KVM) Manager


Introduction –

Karesansui is a web-based kernel-based Virtual Machine (KVM) and Xen manager, and one of the leading Japanese open source projects.

Karesansui has a simple, easy web-based interface and easy installation. It saves initial cost and is free for all.
It supports the Xen and Kernel-based Virtual Machine (KVM) hypervisors; support for other hypervisors/virtualization technologies is planned for the future.

Please refer the project URL for more information-

* How to install –

Please go through the steps as mentioned here-

Thank You,
Arun Bagul

Xen virtualization on CentOS linux


Introduction ~

What is Virtualization? ~ Virtualization is a technique for running multiple operating systems (OS) on the same physical hardware at the same time.
There are three types of virtualization technologies:

1) Full virtualization –
a) Hardware emulation – KQEMU
b) Binary translation – VirtualBox
c) Classic (trap-and-emulate) virtualization – VMware
2) Para-virtualization
3) OS-level virtualization – Linux-VServer and OpenVZ

** Xen is an open-source para-virtualizing virtual machine monitor (VMM), or “hypervisor”, for a variety of processors. Xen can securely execute multiple virtual machines on a single physical system with near-native performance.

** Xen Prerequisites –

1) iproute2 package
2) Linux bridge-utils (/sbin/brctl)
3) Linux hotplug system (/sbin/hotplug and related scripts)

Step 1) How to install Xen on Centos ~

[root@arun ~]# yum install xen.i386 xen-devel.i386   xen-libs.i386 libvirt.i386  libvirt-devel.i386  libvirt-python.i386 virt-manager.i386 virt-clone.i386

Step 2) How to install Xen Kernel for Centos ~

[root@arun ~]# yum install kernel-xen.i686  kernel-xen-devel.i686

* Once installation is complete, please check the CentOS boot loader configuration file, i.e. “/boot/grub/grub.conf”, and make sure that the first boot entry looks like this…

title CentOS (2.6.18-164.15.1.el5xen)
root (hd0,4)
kernel /boot/xen.gz-2.6.18-164.15.1.el5
module /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=LABEL=/ rhgb quiet
module /boot/initrd-2.6.18-164.15.1.el5xen.img

Step 3) Reboot the system so that system will boot with Xen Kernel….

That’s it! The Xen infrastructure is now installed on CentOS.

[root@arun ~]# rpm -qa | egrep “xen|virt” | sort
[root@arun ~]#

Step 4) Test Xen setup – make sure that the “libvirtd” service is running

Step 5) Install first Guest CentOS –

* Create Disk as file as shown below….

[root@arun ~]# dd if=/dev/zero  of=/var/xen-disk/centOS.hdd bs=4k seek=2048k count=0
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000191 seconds, 0.0 kB/s
[root@arun ~]# mke2fs -j /var/xen-disk/centOS.hdd
mke2fs 1.39 (29-May-2006)
/var/xen-disk/centOS.hdd is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1048576 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@arun ~]# mount -o loop /var/xen-disk/centOS.hdd  /mnt/
[root@arun ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5              55G   12G   41G  22% /
tmpfs                 829M   12K  829M   1% /dev/shm
/dev/sda2              23G   15G  7.8G  65% /mydata
none                  829M  104K  829M   1% /var/lib/xenstored
7.9G  147M  7.4G   2% /mnt
[root@arun ~]#

* We are going to install the guest OS from a CD/DVD image, so we will export this image via FTP; let us
configure the FTP server….

* We have copied the CentOS CD/DVD to the “/home/CentOS5.0/” location….

[root@arun ~]# ls /home/CentOS5.0/
CentOS            RELEASE-NOTES-cz.html  RELEASE-NOTES-fr       RELEASE-NOTES-nl.html     repodata
EULA              RELEASE-NOTES-de       RELEASE-NOTES-fr.html  RELEASE-NOTES-pt          RPM-GPG-KEY-beta
GPL               RELEASE-NOTES-de.html  RELEASE-NOTES-it       RELEASE-NOTES-pt_BR       RPM-GPG-KEY-CentOS-5
images            RELEASE-NOTES-en       RELEASE-NOTES-it.html  RELEASE-NOTES-pt_BR.html  TRANS.TBL
isolinux          RELEASE-NOTES-en.html  RELEASE-NOTES-ja       RELEASE-NOTES-pt.html
[root@arun ~]#

* I have changed the anonymous FTP home from the default to “/home/CentOS5.0/”. Please see the details below….

[root@arun ~]# grep ftp /etc/passwd
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
[root@arun ~]#  vi /etc/passwd
[root@arun ~]# grep ftp /etc/passwd
ftp:x:14:50:FTP User:/home/CentOS5.0:/sbin/nologin
[root@arun ~]#

* Now, restart the FTP server and try to access FTP with the IP address assigned to bridge “virbr0”. In my case it is “”

* Start installation now using “virt-install”

[root@arun ~]# virt-install --name arunOS --os-type=linux --ram=300 --file /var/xen-disk/centOS.hdd --location --nographics --bridge=virbr0

Starting install…

* Welcome to CentOS

+————–+ Manual TCP/IP Configuration +—————+
|                                                            |
| Enter the IPv4 and/or the IPv6 address and prefix          |
| (address / prefix).  For IPv4, the dotted-quad netmask     |
| or the CIDR-style prefix are acceptable. The gateway and   |
| name server fields must be valid IPv4 or IPv6 addresses.   |
|                                                            |
| IPv4 address: /          |
| Gateway:    |
| Name Server:  _________________________________________    |
|                                                            |
|            +—-+                      +——+            |
|            | OK |                      | Back |            |
|            +—-+                      +——+            |
|                                                            |
|                                                            |

<Tab>/<Alt-Tab> between elements  | <Space> selects | <F12> next screen

* Welcome to CentOS

+—————————–+ Warning +——————————+
|                                                                      |
| /dev/xvda currently has a loop partition layout.  To use this disk   |
| for the installation of CentOS, it must be re-initialized, causing   |
| the loss of ALL DATA on this drive.                                  |
|                                                                      |
| Would you like to format this drive?                                 |
|                                                                      |
|         +————–+                  +————–+           |
|         | Ignore drive |                  | Format drive |           |
|         +————–+                  +————–+           |
|                                                                      |
|                                                                      |

<Tab>/<Alt-Tab> between elements   |  <Space> selects   |  <F12> next screen

* Welcome to CentOS

+————————-+ Partitioning Type +————————-+
|                                                                       |
|    Installation requires partitioning of your hard drive.  The        |
|    default layout is reasonable for most users.  You can either       |
|    choose to use this or create your own.                             |
|                                                                       |
| Remove all partitions on selected drives and create default layout.   |
| Remove linux partitions on selected drives and create default layout. |
| Use free space on selected drives and create default layout.          |
| Create custom layout.                                                 |
|                                                                       |
|       Which drive(s) do you want to use for this installation?        |
|                              [*] xvda ^                               |
|                                       #                               |
|                                                                       |
|                          +—-+   +——+                            |
|                          | OK |   | Back |                            |
|                          +—-+   +——+                            |
|                                                                       |
|                                                                       |

<Space>,<+>,<-> selection   |   <F2> Add drive   |   <F12> next screen

* Welcome to CentOS

+—————————-+ Partitioning +—————————-+
|                                                                        |
|      Device        Start    End     Size       Type     Mount Point    |
| /dev/xvda                                                            ^ |
|   Free space            1    1045    8192M  Free space               # |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      v |
|                                                                        |
|    +—–+   +——+   +——–+   +——+   +—-+   +——+      |
|    | New |   | Edit |   | Delete |   | RAID |   | OK |   | Back |      |
|    +—–+   +——+   +——–+   +——+   +—-+   +——+      |
|                                                                        |
|                                                                        |

F1-Help     F2-New      F3-Edit   F4-Delete    F5-Reset    F12-OK

* Welcome to CentOS

+—————————-+ Partitioning +—————————-+
|                                                                        |
|      Device        Start    End     Size       Type     Mount Point    |
| /dev/xvda                                                            ^ |
|   xvda1                 1     829    6502M  ext3        /            # |
|   xvda2               830     893     502M  swap                     : |
|   Free space          894    1044    1184M  Free space               : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      : |
|                                                                      v |
|                                                                        |
|    +—–+   +——+   +——–+   +——+   +—-+   +——+      |
|    | New |   | Edit |   | Delete |   | RAID |   | OK |   | Back |      |
|    +—–+   +——+   +——–+   +——+   +—-+   +——+      |
|                                                                        |
|                                                                        |

F1-Help     F2-New      F3-Edit   F4-Delete    F5-Reset    F12-OK

* Same way configure TZ,root password,packages,boot loader options etc…

* Welcome to CentOS

+———————+ Formatting +———————-+
|                                                         |
| Formatting / file system…                             |
|                                                         |
|                           70%                           |
|                                                         |

<Tab>/<Alt-Tab> between elements   |  <Space> selects   |  <F12> next screen

That’s it!

Thank you,
Arun Bagul