When you receive such alerts from vCenter, first check the management network connectivity.

If connectivity looks fine, restart the management services on the affected host.
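A quick way to restart the management agents is from the ESXi host shell (via SSH or the local console); this is a general sketch, not specific to any one alert:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart

or restart all management agents at once:

services.sh restart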

Posted on 23-09-2014
Filed Under (VMware/ESXi) by Arun Bagal


Introduction:

To get started with your installation of ESXi 5, insert the ESXi 5 disc into your server and start it up.

In Figure 1 below, you’ll see the first screen that greets you when you start your server. From this menu, choose the first option to start the ESXi 5 installer.

Figure 1: ESXi 5 boot menu

Once you choose the installation option, the installer provides you with a window that details the status of each file that needs to be loaded. Figure 2 shows you this screen. After that, you’re greeted with a familiar screen that shows you some information about your server, including the processor type and system RAM. The target machine for my sample installation is a virtual machine running on my laptop, hence the relatively minimal hardware configuration. You can see this screen in Figure 3.


Figure 2: Installer load status

Figure 3: Yet another boot screen!

With the preliminaries out of the way, the ESXi 5 installer truly kicks off with a welcome screen containing information regarding VMware’s Compatibility Guide. To continue with the installation process, press Enter.


Figure 4: Kick off the ESXi installation.

Of course, no installation would be complete without having to accept an end user license agreement. To accept the agreement as a part of the installer, press F11. If you don’t accept the agreement, press Escape to abort the installation. You can see this screen in Figure 5.

Figure 5: ESXi 5 end user license agreement

The first technical decision you have to make is where to install ESXi 5. In Figure 6 below, you can see that I have a single 40 GB volume to choose as the install location on my machine.

  

Figure 6: Choose an installation location for ESXi 5

Next up, choose your keyboard layout; I kept the US Default.

The root password on your ESXi 5 system is the key to your virtual kingdom, so choose it with care. Make sure you provide a strong password. As you can see in Figure 7, you have to provide the password twice to make sure you don’t include any typos.

Figure 7: Provide a password for the root user account

The ESXi installer now scans your system to get additional information.

Once that’s complete, you’re asked to confirm the installation by pressing F11.

Figure 8: Confirm the installation

Once you initiate the installation, your selected disk will be repartitioned. Throughout the process, the installer provides you with an installation status like the one shown in Figure 9.

Figure 9: Installation status

When the installation process has finished, you’ll get a message indicating as much, as shown in Figure 10.

Figure 10: Installation is complete

The last screen you’ll see is a yellow and gray one like the one shown below. Take note of the IP address on the screen.

Figure 11: ESXi 5 server display
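As a quick post-install check, you can verify the host answers on that management IP and then point a browser at it to download the vSphere Client (the address below is only an example):

ping 192.168.1.100

then open https://192.168.1.100/ in your browser.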

 

Thank You,

Arun Bagal.

Posted on 22-09-2014
Filed Under (NETAPP-Storage, Storage & Backup) by Arun Bagul

Introduction-

We can access a NetApp volume over CIFS/SMB just like a Windows share. This is very useful for using NetApp storage in a mixed Linux/Windows environment or with Windows-based products.

Step 1) Creating NetApp volume or use Qtree

First we need to create the "/vol/mycifs_share" NetApp volume; you can use a qtree as well.
Please refer to this article about creating a NetApp volume – http://www.indiangnu.org/2014/how-to-create-volume-in-netapp-and-how-to-nfs-export/
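For reference, a minimal volume-creation sketch in 7-Mode syntax (the aggregate name and size below are assumptions, adjust them for your filer):

my-netapp1> vol create mycifs_share -s volume aggr1 100g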

Step 2) Change security style to NTFS (e.g. the mycifs_share volume)

my-netapp1> qtree security /vol/mycifs_share ntfs
Sun Jun 10 06:19:08 EDT [my-netapp1: wafl.quota.sec.change:notice]: security style for /vol/mycifs_share/ changed from unix to ntfs
my-netapp1>

Step 3) Creating CIFS/Windows Share

I assume that the NetApp filer has been joined to AD (Active Directory/LDAP), the CIFS license is installed/configured, and the CIFS service is running on the NetApp. Now we will create a CIFS share and give permissions to users/groups…

my-netapp1> cifs shares -add MyShare /vol/mycifs_share -comment "My Test Windows CIFS Share"
The share name 'MyShare' will not be accessible by some MS-DOS workstations
my-netapp1>

Step 4) Give CIFS Share Access

my-netapp1> cifs access MyShare "MYDOMAIN\USER_OR_GROUP" "Full Control"
1 share(s) have been successfully modified
my-netapp1>

NOTE- We can give full permission, i.e. "Full Control", or read-only permission, i.e. "Read".
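For example, a read-only grant would look like this (the group name here is hypothetical):

my-netapp1> cifs access MyShare "MYDOMAIN\ReadOnly_Group" "Read"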

Step 5) List CIFS-

Filer: 192.168.10.50

my-netapp1> cifs shares
Name         Mount Point                       Description
----         -----------                       -----------
MyShare       /vol/mycifs_share                My Test Windows CIFS Share

Step 6) Access CIFS/Share on Windows or Linux-

\\192.168.10.50\MyShare
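On Windows you can open or map that UNC path directly. On Linux, a minimal mount sketch (assuming the cifs-utils package is installed; mount point and user are examples):

# mkdir -p /mnt/myshare
# mount -t cifs //192.168.10.50/MyShare /mnt/myshare -o username=MYUSER,domain=MYDOMAIN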

Thank you,
Arun Bagul


Introduction-
To access virtual disks, a virtual machine uses virtual SCSI controllers. Each virtual disk that a virtual machine can access through one of the virtual SCSI controllers resides in the VMFS datastore, NFS-based datastore, or on a raw disk. The choice of SCSI controller does not affect whether your virtual disk is an IDE or SCSI disk.

The following virtual SCSI controllers are commonly used…

A) BusLogic
– This was one of the first emulated vSCSI controllers available in the VMware platform.
– No longer updated; considered legacy and kept for backward compatibility.

B) LSI Logic Parallel
– This was the other emulated vSCSI controller available originally in the VMware platform.
– Most operating systems had a driver that supported a queue depth of 32, and it became a very common choice, if not the default.
– Default for Windows 2003/Vista and Linux

C) LSI Logic SAS
– This is an evolution of the parallel driver to support a new future facing standard.
– It began to grow in popularity when Microsoft required its use for MSCS (Microsoft Cluster Service) on Windows 2008 or newer.
– Default for Windows 2008 or newer
– Linux guests SCSI disk hotplug works better with LSI Logic SAS
– Personally, I use this one.
D) VMware Paravirtual (aka PVSCSI)
– This vSCSI controller is virtualization-aware and was designed to support very high throughput with minimal processing cost, making it the most efficient driver.
– In the past, there were issues if it was used with virtual machines that didn’t do a lot of IOPS, but that was resolved in vSphere 4.1.

* PVSCSI and LSI Logic Parallel/SAS are essentially the same when it comes to overall performance capability.
* A total of 4 vSCSI adapters are supported per virtual machine. To get the best performance, you should also distribute virtual disks across as many vSCSI adapters as possible (see the .vmx sketch below).
* Why not IDE? – An IDE adapter completes one command at a time, while SCSI can queue commands, so a SCSI adapter is better optimized for parallel performance. Also, a maximum of 4 IDE devices are allowed per VM (including the CD-ROM), while SCSI allows 60 devices.
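For reference, a minimal sketch of the .vmx entries that select the controller type (the disk file name is an example):

scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "myvm.vmdk"

Use "lsisas1068" for LSI Logic SAS, "lsilogic" for LSI Logic Parallel, or "buslogic" for BusLogic in place of "pvscsi".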

Thank You,
Arun

Posted on 17-09-2014
Filed Under (NETAPP-Storage, Storage & Backup) by Arun Bagul

Introduction –

NetApp is a leading provider of high-speed, high-performance SAN/NAS storage. I have been working with various NetApp products as a storage admin for the past 5+ years. This article gives an overview of NetApp storage products and technologies…

* Data ONTAP – NetApp Data ONTAP is the operating system (OS) running on NetApp storage. "Data ONTAP" supports both Cluster-Mode and 7-Mode operation.

* NetApp SANtricity Storage OS offers a powerful, easy-to-use interface for administering E-series NetApp Storage.

    • Dynamic Disk Pools (DDPs) – greatly simplify traditional storage management with no idle spares to manage or reconfigure when drives are added or fail, thus providing the ability to automatically configure, expand  & scale storage. DDPs enable dynamic rebalancing of drive count changes.

    • Dynamic RAID-level migration changes the RAID level of a volume group on the existing drives without requiring the relocation of data. The software supports DDPs and RAID levels   0, 1, 3, 5, 6, and 10.

    • Dynamic volume expansion (DVE) – allows administrators to expand the capacity of an existing volume by using the free capacity on an existing volume group. DVE combines the new capacity with the original capacity for maximum performance and utilization.

    • Streamlined Performance Efficiency – Intelligent cache tiering, which uses the SANtricity SSD Cache feature, enhances performance and application response time. The SSD Cache feature provides intelligent read caching capability to identify and host the most frequently accessed blocks of data and leverages the superior performance and lower latency of solid-state drives (SSDs). This caching approach works in real time and in a data-driven fashion, and is always on, with no complicated policies to define the trigger for data movement between tiers.

• Efficient Storage Provisioning – Thin provisioning delivers significant savings by separating the internal allocation of storage from the external allocation reported to hosts.

* NetApp OnCommand System Manager – OnCommand System Manager is a simple yet powerful browser-based management tool that enables administrators to easily configure and manage individual NetApp storage systems or clusters of systems. OnCommand Unified Manager monitors and alerts on the health of your NetApp storage running clustered Data ONTAP.

* WAFL and RAID-DP – NetApp introduced double-parity RAID, named RAID-DP, in 2003, starting with Data ONTAP 6. Since then it has become the default RAID group type used on NetApp storage. At the most basic layer, RAID-DP adds a second parity disk to each RAID group in an aggregate or traditional volume. A RAID group is the underlying construct that aggregates and traditional volumes are built upon. Each traditional NetApp RAID 4 group has some number of data disks and one parity disk, with aggregates and volumes containing one or more RAID 4 groups. Whereas the parity disk in a RAID 4 volume stores row parity across the disks in a RAID 4 group, the additional RAID-DP parity disk stores diagonal parity across the disks in a RAID-DP group.
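As a rough illustration in 7-Mode syntax (the aggregate name, RAID group size, and disk count here are assumptions), creating an aggregate with RAID-DP looks like this:

my-netapp1> aggr create aggr_new -t raid_dp -r 16 24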

A) Unified Storage Data ONTAP –

Common FAS series NetApp storage Array/filers with NetApp Data ONTAP OS.

B) High Performance SAN Storage E-Series –

The NetApp E5500 data storage system sets new standards for performance efficiency in application-driven environments. The E5500 is equally adept at supporting high-IOPS mixed workloads and databases, high-performance file systems, and bandwidth-intensive streaming applications. NetApp’s patent-pending Dynamic Disk Pools (DDP) simplifies traditional RAID management by distributing data parity information and spare capacity across a pool of drives. The modular flexibility of the E-Series—with three disk drive/controller shelves, multiple drive types, and a complete selection of interfaces—enables custom configurations optimized and able to scale as needed. The maximum storage density of the E5500 reduces rack space by up to 60%, power use by up to 40%, and cooling requirements by up to 39%.

– Maximum RAW Capacity – 28TB, 48TB, 240TB to 1.54PB

– Rack Unit 2U (12 or 24 drives) and 4U (60 drives)

– Maximum 16 shelves or 384 total disks

– 2/3/4TB SAS Disk Drives and 400/800GB SSD Disks (Mixed)

– 24GB ECC RAM (ECC stands for Error-Correcting Code)

– 8 x 10Gbps iSCSI IO ports, 8 x 8Gb SAS IO ports, 8 x 16Gb FC IO ports

– SANtricity OS 11.10

 

C) NetApp disk shelves

NetApp offers a full range of high-capacity, high-performance, and self-encrypting disk drives plus ultra-high-performance solid-state drives (SSDs). Disk shelf options let you optimize for capacity, performance, or versatility. NetApp Optical SAS interconnects simplify infrastructure while providing industry-leading performance.

– Nondisruptive controller upgrades

– Self-managing Virtual Storage Tier technologies, including Flash Pool, optimize data placement on flash for maximum performance

– Supported disk types – SATA, SAS, SSD and flash disks

a) DS2246 – 2U rack units, 24 drives per enclosure, 12 drives per rack unit, Optical SAS support

b) DS4246 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, Optical SAS support, 2TB/3TB/4TB disk drives

c) DS4486 – 4U rack units, 24 drives per enclosure, 12 drives per rack unit, Optical SAS support, 4TB disk drives, tandem (dual) drives

d) DS4243 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, Optical SAS support; the DS4243 disk shelf is no longer available in new system shipments.

For details please visit NetApp page- http://www.netapp.com/us/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs.aspx

 

D) NetApp Software –

    * NetApp FilerView Administration Tool – a GUI tool that was used to manage NetApp filers. However, if you plan to run Data ONTAP 8.1 or later software, you need to use the OnCommand System Manager software instead.

* NetApp FlexArray Virtualization Software – FlexArray enables you to connect your existing storage arrays to FAS8000 controllers using your Fibre Channel SAN fabric. Array LUNs are provisioned to the FAS8000 and collected into a storage pool from which NetApp volumes are created and shared out to SAN hosts and NAS clients. The new volumes are managed by the FAS8000. FlexArray has the flexibility to serve both SAN and NAS protocols at the same time without any complex add-on components, making the FAS8000 the ideal storage virtualization platform.

* NetApp DataMotion – DataMotion data migration software lets you move data from one logical or physical storage device to another, without disrupting operations. You can keep your shared storage infrastructure running as you add capacity, refresh infrastructure, and balance performance. You can use DataMotion for vFiler and Volumes.

* NetApp Deduplication and Compression – NetApp data compression is a new feature that compresses data as it is written to NetApp FAS and V-Series storage systems. Like deduplication, NetApp data compression works in both SAN and NAS environments. NetApp data deduplication combines the benefits of granularity, performance, and resiliency to give you a significant advantage in the race to meet ever-increasing storage capacity demands.

* NetApp Flash Pool – is a NetApp Data ONTAP feature (introduced in version 8.1.1) that enables mixing regular HDDs with SSDs at an aggregate level. NetApp Flash Pool, an integral component of the NetApp Virtual Storage Tier, enables automated storage tiering. NetApp Flash Pool lets you mix solid state disk (SSD) technology and hard disk (HDD) technology at the aggregate level, to achieve SSD-like performance at HDD-like prices.

E)  NetApp Protection Software –

* SnapMirror – data replication technology that provides disaster recovery protection and simplifies the management of data replication.

* MetroCluster – high-availability and disaster recovery software delivers continuous availability, transparent failover protection, and zero data loss.

* SnapVault – software speeds and simplifies backup and data recovery, protecting data at the block level.

* Open Systems SnapVault (OSSV) – software that leverages block-level incremental backup technology to protect Windows, Linux/UNIX, SQL Server and VMware systems running on mixed storage.

* SnapRestore – data recovery software uses stored Data ONTAP Snapshot copies to recover anything from a single file to multi-terabyte volumes, in seconds.

F) NetApp StorageGRID – NetApp StorageGRID object storage software enables secure management of petabyte-scale distributed content repositories. Eliminating the typical constraints of data containers in blocks and files, the StorageGRID application offers secure, intelligent, and scalable data storage and management in a single global namespace. It optimizes metadata management and content placement through a global policy engine with built-in security. StorageGRID software automates the lifecycle of stored content by managing how files and objects are stored, placed, protected, and retrieved.

 

Thank You,
Arun Bagul

Posted on 16-09-2014

Introduction

NetApp storage supports multiple protocols to access data, such as NFS, CIFS (SMB), FTP, and WebDAV. This article explains how to create a NetApp volume and export it using NFS.

Step 1) Check aggregate space and ping storage connectivity from the server (where you will be mounting the volume) –

# ping -c5 -M do -s 8972 192.168.0.10

netapp-filer1> df -hA
Aggregate                total       used      avail capacity
aggr2                     16TB       14TB     2320GB      86%
aggr3                     16TB       14TB     1681GB      90%
aggr0                   1490GB     1251GB      239GB      84%
aggr1                     16TB       15TB     1511GB      91%
aggr4                     12TB     5835GB     7044GB      45%
netapp-filer1>

Step 2) Create Volume –

netapp-filer1> vol create myvolume_bkup -l en_US -s volume aggr1 500g
Creation of volume 'myvolume_bkup' with size 500g on containing aggregate
'aggr1' has completed.

Step 3) Disable or change snapshots and the snap reserve –

netapp-filer1> vol options myvolume_bkup
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=73400, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off

netapp-filer1> vol options myvolume_bkup  nosnap on
netapp-filer1> snap reserve myvolume_bkup  0

netapp-filer1> df -h  myvolume_bkup
Filesystem               total       used      avail capacity  Mounted on
/vol/myvolume_bkup/      500GB      176KB      499GB       0%  /vol/myvolume_bkup/
/vol/myvolume_bkup/.snapshot        0TB        0TB        0TB     —%  /vol/myvolume_bkup/.snapshot

Step 4) Export over NFS –

netapp-filer1> exportfs -p sec=sys,rw=192.168.0.25,root=192.168.0.25,nosuid  /vol/myvolume_bkup
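You can sanity-check the export from the client before mounting (output will vary with your filer):

# showmount -e 192.168.0.10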

Step 5) /etc/fstab entry on the server –
192.168.0.10:/vol/myvolume_bkup     /backup nfs     defaults,hard,rw,rsize=65536,wsize=65536,proto=tcp 0 0
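A minimal sketch of mounting and verifying it on the client, using the /backup mount point from the fstab entry above:

# mkdir -p /backup
# mount /backup
# df -h /backup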

Thank you,
Arun Bagul

Posted on 15-09-2014
Filed Under (General information) by Arun Bagul

How to Import/Export GPG Keys-

Step 1) List GPG Keys –

[root@test-host ~]# gpg -kv
/root/.gnupg/pubring.gpg
------------------------
pub  1024D/F9F17DC2 2012-09-27 Test GPG key (Created by Arun) <arun@my.com>
sub  2048g/F173E2CC 2012-09-27

pub  1024D/5A6C12B1 2013-02-25 Test2 <abagul@my.com>
sub  1024g/CA7BF220 2013-02-25

Step 2) How to Export GPG Key –

[root@test-host ~]# gpg --armor --export --output /tmp/mykey.pub -r '5A6C12B1'
[root@test-host ~]# cat /tmp/mykey.pub
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)

[root@test-host ~]# gpg --armor --export-secret-key -r 5A6C12B1 --output /tmp/mykey.pri
[root@test-host ~]# cat /tmp/mykey.pri
-----BEGIN PGP PRIVATE KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)

Step 3) How to Import GPG Keys –

[arunb@test-host2 ~]$ gpg --import /tmp/mykey.pri
gpg: keyring `/saba/arunb/.gnupg/secring.gpg’ created
gpg: key 5A6C12B1: secret key imported
gpg: key 5A6C12B1: public key Test2 <abagul@my.com> imported

[arunb@test-host2 ~]$ gpg --import /tmp/mykey.pub
gpg: key 5A6C12B1: key Test2 <abagul@my.com> 2 new signatures imported
[arunb@test-host2 ~]$

Step 4) Now Test GPG Encryption/Decryption –

[arunb@test-host2 ~]$ echo "arunb" | gpg -v --no-tty --passphrase-fd 0 --output /tmp/output.csv --decrypt /tmp/mytest.csv.pgp
gpg: public key is CA7BF220
gpg: using secondary key CA7BF220 instead of primary key 5A6C12B1
gpg: using secondary key CA7BF220 instead of primary key 5A6C12B1
gpg: encrypted with 1024-bit ELG-E key, ID CA7BF220, created 2013-02-25
“Test2 <abagul@my.com>”
gpg: AES256 encrypted data
gpg: original file name='mytest_1_1.csv'
[arunb@test-host2 ~]$
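The step above shows decryption; for completeness, a minimal encryption sketch using the same key (file names are examples):

[arunb@test-host2 ~]$ gpg --encrypt --recipient 5A6C12B1 --output /tmp/mytest.csv.pgp /tmp/mytest.csv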

Thank you,
Arun Bagul


Introduction-

The NIC types available for a VM depend on the VM hardware version and the guest OS (operating system). When you configure a virtual machine, you can add network adapters (NICs) and specify the adapter type…

The following NIC types are widely used:

E1000 –
Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

E1000e – This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the “e1000e” vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere.

VMXNET2 (Enhanced)

Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.

VMXNET3

Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
– VMXNET 3 is supported only for virtual machines version 7 and later.
– Supports 10Gbps, i.e. 10-Gig networking
– Jumbo frames supported

I would suggest using "VMXNET3".
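For reference, a minimal .vmx sketch for a single VMXNET3 adapter (the port group name is an example):

ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"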

Thank you,
Arun

Posted on 01-09-2014
Filed Under (VMware/ESXi) by Shanino Rodrigues

Unable to extend a VM disk from the vCenter console… the option is greyed out.

Reason: Whenever you want to extend a VM disk, make sure all snapshots are deleted for that particular VM.

Posted on 01-09-2014
Filed Under (Virtualization, VMware/ESXi) by Arun Bagul

Introduction

When you create a VM (virtual machine) on a VMware-based virtualization platform, VMware creates a few VM configuration files in a folder named after the VM in the datastore (local storage or NFS/SAN). Please find the table below describing the file types in VMware…

 

File          Usage                    Description                                                      Format
.vmx          vmname.vmx               Virtual machine configuration file.                              ASCII
.vmxf         vmname.vmxf              Additional VM configuration files, e.g. with teamed VMs.         ASCII
.vmdk         vmname.vmdk              Virtual disk file.                                               ASCII
-flat.vmdk    vmname-flat.vmdk         Preallocated virtual disk in binary format.                      Binary
.vswp         vmname.vswp              Swap file.
.nvram        vmname.nvram or nvram    Non-volatile RAM; stores virtual machine BIOS information.
.vmss         vmname.vmss              Virtual machine suspend file.
.log          vmware.log               Current virtual machine log file.                                ASCII
-#.log        vmware-#.log             Old virtual machine log files; # is a number starting with 1.    ASCII
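For example, listing a VM's folder on a datastore from the ESXi shell shows these files side by side (the datastore and VM names below are just examples):

~ # ls /vmfs/volumes/datastore1/myvm/
myvm.vmx  myvm.vmxf  myvm.vmdk  myvm-flat.vmdk  myvm.nvram  myvm.vswp  vmware.log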

 

Thank you,
Arun Bagul

