Category: Virtualization

Template and Powered OFF VM’s Report from vCenter server

This PowerShell script will help you pull a report of powered-off VMs and old templates. You can run the script directly from your desktop or from your vCenter server. You need to provide the vCenter server list as input; the script will ask for credentials at runtime, and the output will be stored in CSV files.

# List of vCenter servers, one per line
$VCServers = Get-Content "C:\vclist.txt"

# Prompt for vCenter credentials at runtime
$vcUsername = Read-Host 'Enter user name'
$vcPassword = Read-Host 'Enter password' -AsSecureString
$logincred = New-Object System.Management.Automation.PSCredential ($vcUsername, $vcPassword)

$date = Get-Date -Format "yyyy_MM_dd_hh_mm"

#---------------------------- VC Report generation-------------------------------------------#

foreach ($VCServer in $VCServers) {

    Write-Host -ForegroundColor DarkYellow "Working on $VCServer"
    Connect-VIServer -Server $VCServer -Credential $logincred # -ErrorAction SilentlyContinue -WarningAction 0 | Out-Null

    #----------------- Template Report -----------------#

    $csvfile_template = "C:\VC_reports\$($VCServer + "_" + $date + "_" + "template.csv")"
    Write-Host -ForegroundColor Yellow "File name - $csvfile_template"

    # Adjust the -Name pattern below to match your own template naming convention
    Get-Template -Name spwdfvm* | Sort-Object Name |
        Select-Object Name,
        @{N="Datastore";E={[string]::Join(',',(Get-Datastore -Id $_.DatastoreIdList | Select-Object -ExpandProperty Name))}} |
        Export-Csv $csvfile_template -NoTypeInformation -UseCulture

    #------------------- OLD VM Report -------------------#

    $csvfile_vm = "C:\VC_reports\$($VCServer + "_" + $date + "_" + "OLD_VM.csv")"
    Write-Host -ForegroundColor Yellow "File name - $csvfile_vm"

    # Powered-off VMs across all hosts in this vCenter
    Get-VMHost | ForEach-Object { Get-VM -Location $_.Name } | Where-Object { $_.PowerState -eq "PoweredOff" } |
        Select-Object Name, PowerState,
        @{N="Datastore";E={[string]::Join(',',(Get-Datastore -Id $_.DatastoreIdList | Select-Object -ExpandProperty Name))}},
        @{N="UsedSpaceGB";E={[math]::Round($_.UsedSpaceGB,1)}},
        @{N="ProvisionedSpaceGB";E={[math]::Round($_.ProvisionedSpaceGB,1)}},
        @{N="Folder";E={$_.Folder.Name}} |
        Export-Csv $csvfile_vm -NoTypeInformation -UseCulture

    Disconnect-VIServer -Server $VCServer -Confirm:$false
}
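
For reference, a minimal run might look like this (the script file name is a placeholder; C:\vclist.txt and C:\VC_reports come from the script above, and the reports folder must exist before the run):

# C:\vclist.txt - one vCenter server per line (example names)
vcenter01.example.com
vcenter02.example.com

PS C:\> New-Item -ItemType Directory -Path C:\VC_reports -Force
PS C:\> .\vc_report.ps1   # requires VMware PowerCLI
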
LXC – Linux Container

Introduction-

What are the different container technologies?

Container technology has taken off since 2013, and it is easy to get confused between the available container types such as Docker, LXC/LXD and CoreOS rkt (Rocket).

What’s LXC?
LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.

Benefits of Linux Containers:
1 – Lightweight built-in virtualization
2 – Application/server isolation
3 – Easy deployment and management
4 – No additional licensing

Weaknesses of Linux Containers:
1 – Locked into the host kernel
2 – Supported only on Linux

Current LXC uses the following kernel features to contain processes:
– Kernel namespaces (ipc, uts, mount, pid, network and user)
– AppArmor and SELinux profiles: AppArmor is a Linux kernel security module that allows the system administrator to restrict programs’ capabilities with per-program profiles; Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies
– Seccomp policies
– Chroots (using pivot_root)
– Kernel capabilities
– CGroups (control groups)

LXC is currently made of a few separate components:
– The liblxc library
– A set of standard tools to control the containers
– Distribution container templates
– Several language bindings for the API: Python 3, Go, Ruby and Haskell

The Linux kernel provides the cgroups functionality, which allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need to start any virtual machines, as well as namespace isolation, which allows complete isolation of an application’s view of the operating environment, including process trees, networking, user IDs and mounted file systems.

LXC containers are often considered as something in the middle between a chroot and a full-fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.
LXC combines the kernel’s cgroups and support for isolated namespaces to provide an isolated environment for applications. Docker can also use LXC as one of its execution drivers, enabling image management and providing deployment services.
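
As a concrete illustration, a basic system-container workflow with the standard LXC tools looks roughly like this (the container name, distribution, release and architecture below are examples only):

# Create a container from the "download" template (fetches a prebuilt rootfs)
lxc-create -t download -n web01 -- -d ubuntu -r xenial -a amd64
# Start it in the background and list running containers
lxc-start -n web01 -d
lxc-ls -f
# Get a shell inside the container
lxc-attach -n web01
# Stop and remove it when done
lxc-stop -n web01
lxc-destroy -n web01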

What’s LXD?
LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead. LXD isn’t a rewrite of LXC, in fact it’s building on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.
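
For comparison, the equivalent workflow with LXD’s client (image alias and container name are examples; LXD must be installed and initialized first):

# One-time setup of storage and networking for the LXD daemon
lxd init
# Launch a container from the public Ubuntu image server, then list and enter it
lxc launch ubuntu:16.04 first
lxc list
lxc exec first -- /bin/bash
# Stop and delete it
lxc stop first
lxc delete first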

What is the difference between LXD and Docker?
– Docker focuses on application delivery from development to production, while LXD’s focus is system containers.
– LXC has been on the market since 2008, compared to Docker’s 2013.
– Earlier versions of Docker were based on LXC; Docker later replaced it with libcontainer.
– Docker specializes in deploying applications.
– LXD specializes in deploying full (Linux) system containers that behave much like virtual machines.

Applications built using LXC
Anbox – Android in a Box
Anbox is a container-based approach to booting a full Android system on a regular GNU/Linux system like Ubuntu. In other words: Anbox lets you run Android on your Linux system without the slowness of virtualization.

Reference –
WebSite: https://linuxcontainers.org
Version: LXC 2.1.x
https://linuxcontainers.org/lxd/getting-started-cli
http://www.tothenew.com/blog/lxc-linux-containers

Thank you,
Arun Bagul

Top 5 Infrastructure as Code (IaC) software

Introduction

The world is moving toward hybrid/multi-cloud solutions, and it is important for every enterprise/organization to use different cloud providers effectively. A multi-cloud strategy helps companies save cost, make infrastructure highly available, and support business continuity planning (disaster recovery).

Infrastructure as Code (IaC) is a type of IT infrastructure that operations teams can automatically manage and provision through code, rather than using a manual process. Infrastructure as Code is sometimes referred to as programmable infrastructure. IaC is useful because it makes provisioning, deployment and maintenance of IT infrastructure easy and simple in a multi-cloud scenario.

Why IaC?

* Manage infrastructure via source control, thus providing a detailed audit trail for changes.
* Apply testing to infrastructure in the form of unit testing, functional testing, and integration testing.
* Automate Your Deployment and Recovery Processes
* Rollback With the Same Tested Processes
* Don’t Repair, Redeploy
* Focus on Mean Time to Recovery
* Use Testing Tools to Verify Your Infrastructure and Hook Your Tests Into Your Monitoring System
* Documentation, since the code itself will document the state of the machine. This is particularly powerful because it means, for the first time, that infrastructure documentation is always up to date
* Enable collaboration around infrastructure configuration and provisioning, most notably between dev and ops.

Top 5 Infrastructure as Code (IaC) software –

1) Terraform (https://www.terraform.io)
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Terraform provides a flexible abstraction of resources and providers. Terraform is used to create, manage, and manipulate infrastructure resources. Providers generally are an IaaS (e.g. AWS, Google Cloud, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare).
NOTE – Vagrant is another tool from HashiCorp. Refer article for more information – https://www.vagrantup.com/intro/vs/terraform.html
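
Whatever the provider, the day-to-day Terraform workflow is a handful of commands run in a directory of *.tf files (a minimal sketch, assuming provider credentials are already configured):

terraform init      # download provider plugins, initialize the working directory
terraform plan      # preview the changes Terraform would make
terraform apply     # create/update the real infrastructure
terraform destroy   # tear everything down again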

2) Spinnaker (https://www.spinnaker.io)
Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. Deploy across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and OpenStack.

3) AWS CloudFormation (https://aws.amazon.com/cloudformation)
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application.
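
For illustration, a stack can be driven from the AWS CLI roughly like this (stack and template file names are placeholders):

# Create a stack from a local template, check it, then remove it
aws cloudformation create-stack --stack-name my-stack --template-body file://template.yaml
aws cloudformation describe-stacks --stack-name my-stack
aws cloudformation delete-stack --stack-name my-stack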

4) Google’s Cloud Deployment Manager (https://cloud.google.com/deployment-manager)
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using yaml. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments.
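
A minimal Deployment Manager session from the gcloud CLI might look like this (deployment and config file names are placeholders):

# Create a deployment from a YAML config, update it later from the edited config
gcloud deployment-manager deployments create my-deployment --config config.yaml
gcloud deployment-manager deployments update my-deployment --config config.yaml
# Describe and finally delete it
gcloud deployment-manager deployments describe my-deployment
gcloud deployment-manager deployments delete my-deployment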

5) Azure Automation and Azure Resource Manager (ARM)
Microsoft Azure Automation provides a way for users to automate the manual, long-running, error-prone, and frequently repeated tasks that are commonly performed in a cloud and enterprise environment. It saves time and increases the reliability of regular administrative tasks and even schedules them to be automatically performed at regular intervals. You can automate processes using runbooks or automate configuration management using Desired State Configuration. ARM Templates provides an easy way to create and manage one or more Azure resources consistently and repeatedly in an orderly and predictable manner in a resource group.
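
As a sketch with the Azure CLI of that era (resource names are placeholders; newer CLI releases call this "az deployment group create"):

# Create a resource group and deploy an ARM template into it
az group create --name my-rg --location eastus
az group deployment create --resource-group my-rg --template-file azuredeploy.json
# Review past deployments for the group
az group deployment list --resource-group my-rg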


* Docker Compose (https://docs.docker.com/compose/overview)
NOTE- Docker Compose is mainly for Container technology and is different from above tools.

* Orchestrate containers with docker-compose
The powerful concept of microservices is gradually changing the industry. Large monolithic services are slowly giving way to swarms of small and autonomous microservices that work together. The process is accompanied by another market trend: containerization. Together, they help us build systems of unprecedented resilience. Containerization changes not only the architecture of services, but also the structure of environments used to create them. Now, when software is distributed in containers, developers have full freedom to decide what applications they need.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. Compose preserves all volumes used by your services. Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
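
The day-to-day Compose workflow is also just a few commands, run in a directory containing a docker-compose.yml:

docker-compose up -d     # create and start all services in the background
docker-compose ps        # list the service containers
docker-compose logs -f   # follow their logs
docker-compose down      # stop and remove the containers (named volumes are kept)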

* IaC Tools and DevOps –

When we speak of the DevOps and continuous delivery/integration (CI/CD) toolchain, we’re referring to a superset of tools—many with overlapping capabilities—for helping organizations achieve faster and safer deployment velocity. This encompasses a broad range of solutions: provisioning tools, orchestration tools, testing frameworks, configuration management (CM) and automation platforms, and more. Please refer to “DevOps – Comparison of different Configuration Management Software” for a comparison of CM tools. Here we’ll compare different orchestration and management tools for provisioning infrastructure: Terraform and Spinnaker/CloudFormation.

  • CloudFormation is specific to AWS cloud resources, while Terraform and Spinnaker support multiple cloud vendors.
  • Terraform allows you to define and manage your infrastructure, while Spinnaker allows you to manage it from the perspective of code releases and deployment workflows.
  • Infrastructure lifecycle management is easier with visualizations such as the Terraform graph, which gives developers and operators an easy way to comprehend dependency ordering.
  • Docker Compose is mainly for container technology such as Docker (https://www.docker.com).
  • Azure Automation is for the Azure cloud, using PowerShell scripting.

Thank you,
Arun Bagul

Selecting virtual SCSI Controllers for Disks (VMware VM)

Introduction-
To access virtual disks, a virtual machine uses virtual SCSI controllers. Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides in the VMFS datastore, NFS-based datastore, or on a raw disk. The choice of SCSI controller does not affect whether your virtual disk is an IDE or SCSI disk.

The following virtual SCSI controllers are commonly used…

A) BusLogic
– This was one of the first emulated vSCSI controllers available on the VMware platform.
– It no longer receives updates and is considered legacy, kept for backward compatibility.

B) LSI Logic Parallel
– This was the other emulated vSCSI controller available originally in the VMware platform.
– Most operating systems had a driver that supported a queue depth of 32, and it became a very common choice, if not the default.
– Default for Windows 2003/Vista and Linux

C) LSI Logic SAS
– This is an evolution of the parallel driver to support a newer, future-facing standard.
– It began to grow in popularity when Microsoft required its use for MSCS (Microsoft Cluster Service) on Windows 2008 or newer.
– Default for Windows 2008 or newer
– Linux guest SCSI disk hotplug works better with LSI Logic SAS
– Personally, I use this one.

D) VMware Paravirtual (aka PVSCSI)
– This vSCSI controller is virtualization-aware and was designed to support very high throughput with minimal processing cost; it is therefore the most efficient driver.
– In the past, there were issues when it was used with virtual machines that didn’t do a lot of IOPS, but that was resolved in vSphere 4.1.

* PVSCSI and LSI Logic Parallel/SAS are essentially the same when it comes to overall performance capability.
* A total of 4 vSCSI adapters are supported per virtual machine. For the best performance, distribute virtual disks across as many vSCSI adapters as possible.
* Why not IDE? – An IDE adapter completes one command at a time, while SCSI can queue commands, so the SCSI adapter is better optimized for parallel performance. Also, a VM supports a maximum of 4 IDE devices (including the CD-ROM), while SCSI allows 60 devices.
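
If you want to check or change a VM’s controller type from PowerCLI, a sketch along these lines works (the VM name is a placeholder; changing the type normally requires the VM to be powered off, and the guest needs the PVSCSI driver, available via VMware Tools):

# List the current vSCSI controller(s) of a VM
Get-VM -Name "myvm01" | Get-ScsiController
# Switch the controller to VMware Paravirtual (PVSCSI)
Get-VM -Name "myvm01" | Get-ScsiController | Set-ScsiController -Type ParaVirtual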

Thank You,
Arun

Choosing a NIC (Network Adapter) for VM in VMware ESXi environment

Introduction-

The NIC types available for a VM depend on the VM hardware version and the guest OS (operating system). When you configure a virtual machine, you can add network adapters (NICs) and specify the adapter type…

The following NIC types widely used:

E1000 –
Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

E1000e – This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the “e1000e” vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere.

VMXNET2 (Enhanced)

Based on the original VMXNET adapter, which is optimized for performance in a virtual machine and has no physical counterpart, but adds high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.

VMXNET3

Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
– VMXNET 3 is supported only for virtual machines version 7 and later.
– Supports 10Gbps, i.e. 10-Gigabit networking
– Jumbo frames supported

I would suggest using “VMXNET3”.
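
To see or change the adapter type with PowerCLI, a sketch (VM and adapter names are placeholders; the VM usually has to be powered off to change the type):

# Show a VM's network adapters and their types
Get-VM -Name "myvm01" | Get-NetworkAdapter
# Change an adapter to VMXNET3
Get-VM -Name "myvm01" | Get-NetworkAdapter -Name "Network adapter 1" | Set-NetworkAdapter -Type Vmxnet3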

Thank you,
Arun

VMware Virtual Machine (VM) Configuration File types

Introduction

When you create a VM (Virtual Machine) on a VMware-based virtualization platform, VMware creates a few VM configuration files in a folder named after the VM in the datastore (local storage or NFS/SAN). The table below describes the file types in VMware…


File         Usage                   Description                                                  Format
.vmx         vmname.vmx              Virtual machine configuration file.                          ASCII
.vmxf        vmname.vmxf             Additional VM configuration files (e.g. teamed VMs).         ASCII
.vmdk        vmname.vmdk             Virtual disk descriptor file.                                ASCII
-flat.vmdk   vmname-flat.vmdk        Preallocated virtual disk in binary format.                  Binary
.vswp        vmname.vswp             Swap file.                                                   Binary
.nvram       vmname.nvram or nvram   Non-volatile RAM; stores virtual machine BIOS information.   Binary
.vmss        vmname.vmss             Virtual machine suspend file.                                Binary
.log         vmware.log              Current virtual machine log file.                            ASCII
-#.log       vmware-#.log            Old virtual machine log files; # is a number starting at 1.  ASCII
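
You can see these files directly from the ESXi shell; a listing for a VM named “vmname” on a datastore called “datastore1” might look like this (names and layout are illustrative only):

~ # ls /vmfs/volumes/datastore1/vmname/
vmname.vmx        vmname.vmxf      vmname.vmdk
vmname-flat.vmdk  vmname.vswp      vmname.nvram
vmware.log        vmware-1.log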


Thank you,
Arun Bagul

VMware ESXi Commands

Introduction

Sometimes we need to log in to an ESXi server to check hardware, networking, and performance stats. Sharing a few important ESXi commands…

a)  ESXi NIC List

~ # esxcfg-nics  --list
Name    PCI           Driver      Link Speed    Duplex MAC Address       MTU    Description
vmnic0  0000:01:00.00 tg3   Up   1000Mbps  Full  XX:10:55:DD:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic1  0000:01:00.01 tg3   Up   1000Mbps  Full  XX:10:55:67:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic2  0000:02:00.00 tg3   Up   1000Mbps  Full  XX:10:55:65:CC:YY 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic3  0000:02:00.01 tg3   Up   1000Mbps  Full  XX:10:55:23:CC:00 1500   Broadcom BCM5720 Gigabit Ethernet
~ #
~ # esxcli network ip interface  list
vmk0
Name: vmk0
MAC Address: 24:b6:fd:XX:XX:YY
Enabled: true
Portset: vSwitch0
Portgroup: Management Network
VDS Name: N/A
VDS UUID: N/A
VDS Port: N/A
VDS Connection: -1
MTU: 1500
TSO MSS: 65535
Port ID: 33554438

b)  ESXi Storage/iSCSI stats

~# esxcli storage san iscsi stats get
Adapter: vmhba34
Total Number of Sessions: 20
Total Number of Connections: 20
IO Data Sent: 2647449088
IO Data Received: 107921345640
Command PDUs: 15509582
Read Command PDUs: 12353055
Write Command PDUs: 3156497
Bidirectional Command PDUs: 0
No-data Command PDUs: 30
Response PDUs: 15509582
R2T PDUs: 0
Data-in PDUs: 0
Data-out PDUs: 0
Task Mgmt Request PDUs: 0
Task Mgmt Response PDUs: 0
Login Request PDUs: 20
Login Response PDUs: 20
Text Request PDUs: 0
Text Response PDUs: 0
Logout Request PDUs: 0
Logout Response PDUs: 0
NOP-Out PDUs: 1767885
NOP-In PDUs: 1767885
Async Event PDUs: 0
SNACK PDUs: 0
Reject PDUs: 0
Digest Errors: 0
Timeouts: 0
No Tx Buf Count: 0
No Rx Data Count: 232170
~ #


c)  ESXi ping –

Check connectivity to storage, jumbo frames, etc.

~ # vmkping  -c 5 -s 8972 192.168.7.243
PING 192.168.7.243 (192.168.7.243): 8972 data bytes
8980 bytes from 192.168.7.243: icmp_seq=0 ttl=64 time=2.104 ms
8980 bytes from 192.168.7.243: icmp_seq=1 ttl=64 time=0.693 ms
8980 bytes from 192.168.7.243: icmp_seq=2 ttl=64 time=0.541 ms

d) VMkernel NICs and checking connectivity through a VMkernel port

~ # esxcfg-vmknic  --list
Interface  Port Group/DVPort   IP Family IP Address     Netmask       Broadcast       MAC Address     MTU   TSO MSS Enabled Type
vmk0       Management Network  IPv4      192.168.7.5    255.255.252.0  192.168.7.255  XX:10:55:23:CC:00 1500  65535  true  STATIC
vmk1       iSCSI Kernel 1      IPv4      192.168.7.55   255.255.252.0  192.168.7.255  XX:10:XX:23:CC:YY 1500  65535  true  STATIC
vmk2       iSCSI Kernel 2      IPv4      192.168.7.155  255.255.252.0  192.168.7.255  00:50:56:XX:65:ZZ 1500  65535  true  STATIC     

~ # vmkping  -c 5 -s 8972 -I vmk1 192.168.7.243
PING 192.168.7.243 (192.168.7.243): 8972 data bytes
8980 bytes from 192.168.7.243: icmp_seq=0 ttl=64 time=0.747 ms
8980 bytes from 192.168.7.243: icmp_seq=1 ttl=64 time=0.481 ms
8980 bytes from 192.168.7.243: icmp_seq=2 ttl=64 time=0.523 ms
8980 bytes from 192.168.7.243: icmp_seq=3 ttl=64 time=0.615 ms
8980 bytes from 192.168.7.243: icmp_seq=4 ttl=64 time=0.504 ms

--- 192.168.7.243 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.481/0.574/0.747 ms
~ #

e) vSwitch list

~ # esxcfg-vswitch --list
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         47          128               1500    vmnic0,vmnic1
PortGroup Name        VLAN ID  Used Ports  Uplinks
NFS                   188      0           vmnic0,vmnic1
DMZ 192.168.X.0/24    1103     13          vmnic0,vmnic1
DMZ 192.168.Y.0/22    1102     22          vmnic0,vmnic1
DMZ 192.168.X.0/24    1101     8           vmnic0,vmnic1
Management Network    1102     1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         128         3           128               1500    vmnic2
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 1        0        1           vmnic2

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch2         128         3           128               1500    vmnic3
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 2        0        1           vmnic3
~ #

Thank You,
Arun

Log rotation in VMware ESXi

Introduction

Last month, while working on an ESXi 5.1 disconnect issue, we analyzed ESXi logs for the past 3-4 months. Just sharing some information related to the ESXi log rotation policy…

/var/log # esxcli system syslog config get
Default Rotation Size: 1024
Default Rotations: 8
Log Output: /scratch/log
Log To Unique Subdirectory: false
Remote Host: <none>
/var/log # cd /scratch/log
/vmfs/volumes/507a011b-acd45a80-9aed-e0db5501b632/log #
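
To change the policy, the same esxcli namespace has a "set" command; for example, to keep more and larger log rotations (the values here are just examples):

/var/log # esxcli system syslog config set --default-rotate=16 --default-size=2048
/var/log # esxcli system syslog reload   # apply the new configuration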


Thank you,
Arun Bagul

What is Virtualization and Types of Virtualization

What is Virtualization, and what are the types of Virtualization?

In general, there are different types of virtualization, such as memory, CPU, storage, hardware and network virtualization. However, here we are going to talk about OS virtualization only.

1] What is a Hypervisor? –

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources.
The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine.
The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

2] Type of Virtualization –

a) Para-virtualization –
-Guest OS has to be modified
-VM does not simulate hardware
-Use special API that a modified guest OS must use
-Hypercalls trapped by the Hypervisor and serviced
-Xen, VMware ESX Server

b) Full-virtualization (Native) –
VM simulates “enough” hardware to allow an unmodified guest OS to be run in isolation, on the same hardware and CPU/memory. e.g. VMware, IBM VM family, Parallels, Xen.
* Full virtualization with Xen Hypervisor requires:
i) Intel processor with the Intel VT extensions, or
ii) AMD processor with the AMD-V extensions, or
iii) an Intel Itanium processor
* Full virtualization with KVM hypervisor requires:
i) Intel processor with the Intel VT and the Intel 64 extensions, or
ii) AMD processor with the AMD-V and the AMD64 extensions
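
A quick way to check whether a host CPU exposes these extensions is to look at the CPU flags (vmx = Intel VT, svm = AMD-V):

# A non-zero count means the CPU advertises hardware virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo
# On most modern distributions, lscpu reports it too
lscpu | grep -i virtualization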

c) Emulation –
-VM emulates/simulates complete hardware
-Unmodified guest OS for a different PC can be run
-VirtualPC for Mac, QEMU

d) OS-level virtualization –
-OS allows multiple secure virtual servers to be run
-Guest OS is the same as the host OS but appears isolated; apps see an isolated OS. eg: Solaris Containers, BSD Jails, Linux-VServer, OpenVZ and LXC (LinuX Containers)

e) Application-level virtualization –
-Application is given its own copy of components that are not shared (eg: own registry files, global objects); the virtual environment (VE) prevents conflicts. eg: JVM

Thank you,
Arun Bagul

Karesansui – Xen and kernel-based Virtual Machine (KVM) Manager

Introduction –

Karesansui is a web-based manager for Xen and Kernel-based Virtual Machine (KVM), and one of the leading Japanese open source projects.

Karesansui has a simple, easy web-based interface and is easy to install; it saves initial cost and is free for all.
It supports the Xen and KVM hypervisors; support for other hypervisors/virtualization is planned for the future.


Please refer to the project URL for more information – http://karesansui-project.info/

* How to install –

Please go through the steps as mentioned here- http://karesansui-project.info/wiki/1/En_tutorial

Thank You,
Arun Bagul