Posted on 17-09-2014
Filed Under (NETAPP-Storage, Storage & Backup) by Arun Bagul

Introduction -

NetApp is a leading provider of high-performance SAN/NAS storage. I have been working with various NetApp products as a storage admin for the past 5+ years. This article gives an overview of NetApp storage products and technologies…

* Data ONTAP - NetApp Data ONTAP is the operating system (OS) running on NetApp storage. Data ONTAP supports both Cluster-Mode and 7-Mode operation.

* NetApp SANtricity Storage OS - offers a powerful, easy-to-use interface for administering E-series NetApp Storage.

    • Dynamic Disk Pools (DDPs) – greatly simplify traditional storage management, with no idle spares to manage or reconfigure when drives are added or fail, providing the ability to automatically configure, expand, and scale storage. DDPs enable dynamic rebalancing when the drive count changes.

    • Dynamic RAID-level migration – changes the RAID level of a volume group on the existing drives without requiring the relocation of data. The software supports DDPs and RAID levels 0, 1, 3, 5, 6, and 10.

    • Dynamic volume expansion (DVE) – allows administrators to expand the capacity of an existing volume by using the free capacity on an existing volume group. DVE combines the new capacity with the original capacity for maximum performance and utilization.

    • Streamlined Performance Efficiency – Intelligent cache tiering, which uses the SANtricity SSD Cache feature, enhances performance and application response time. The SSD Cache feature provides intelligent read caching capability to identify and host the most frequently accessed blocks of data, and leverages the superior performance and lower latency of solid-state drives (SSDs). This caching approach works in real time and in a data-driven fashion, and is always on, with no complicated policies to define the trigger for data movement between tiers.
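
The "most frequently accessed blocks" idea behind SSD read caching can be sketched as a tiny frequency-based cache. This is purely illustrative: the promotion threshold, capacity, and all names below are invented, not SANtricity internals.

```python
from collections import Counter

class SSDReadCache:
    """Toy model of a frequency-based read cache (not actual SANtricity code)."""
    def __init__(self, capacity_blocks, promote_after=3):
        self.capacity = capacity_blocks
        self.promote_after = promote_after   # reads before a block is promoted
        self.hits = Counter()                # access counts per block
        self.cache = {}                      # block_id -> data held on "SSD"

    def read(self, block_id, hdd_read):
        self.hits[block_id] += 1
        if block_id in self.cache:
            return self.cache[block_id]      # fast path: served from SSD cache
        data = hdd_read(block_id)            # slow path: fetched from HDD
        if self.hits[block_id] >= self.promote_after and len(self.cache) < self.capacity:
            self.cache[block_id] = data      # block is now "hot": promote it
        return data

cache = SSDReadCache(capacity_blocks=2)
for _ in range(4):
    cache.read(7, lambda b: f"data-{b}")
print(7 in cache.cache)  # True: block 7 became hot and was promoted
```

A real implementation works at the block-device layer and evicts cold blocks; the point here is only that caching decisions are driven by observed access frequency, not by administrator-defined policies.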

    • Efficient Storage Provisioning – Thin provisioning delivers significant savings by separating the internal allocation of storage from the external allocation reported to hosts.
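
The split between the logical capacity reported to hosts and the physical capacity actually consumed can be shown with a toy model (the class, sizes, and names are invented for illustration, not a real storage API):

```python
class ThinVolume:
    """Toy thin-provisioned volume: the host sees logical_size_gb,
    but the array only consumes space for blocks actually written."""
    def __init__(self, logical_size_gb):
        self.logical_size_gb = logical_size_gb  # external allocation (reported)
        self.written_blocks = set()             # internal allocation (real)

    def write(self, block_id):
        self.written_blocks.add(block_id)       # space consumed only on write

    @property
    def allocated_gb(self):
        return len(self.written_blocks)         # 1GB per block in this toy model

vol = ThinVolume(logical_size_gb=500)
vol.write(0)
vol.write(1)
print(vol.logical_size_gb, vol.allocated_gb)  # 500 2
```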

* NetApp OnCommand System Manager - OnCommand System Manager is a simple yet powerful browser-based management tool that enables administrators to easily configure and manage individual NetApp storage systems or clusters of systems. OnCommand Unified Manager monitors and alerts on the health of your NetApp storage running clustered Data ONTAP.

* WAFL and RAID-DP - NetApp introduced double-parity RAID, named RAID-DP, in 2003, starting with Data ONTAP 6. Since then it has become the default RAID group type used on NetApp storage. At the most basic level, RAID-DP adds a second parity disk to each RAID group in an aggregate or traditional volume. A RAID group is the underlying construct upon which aggregates and traditional volumes are built. Each traditional NetApp RAID 4 group has some number of data disks and one parity disk, with aggregates and volumes containing one or more RAID 4 groups. Whereas the parity disk in a RAID 4 volume stores row parity across the disks in a RAID 4 group, the additional RAID-DP parity disk stores diagonal parity across the disks in a RAID-DP group.
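
Both parities are XOR computations, which is easy to show in a sketch. Note this is simplified: RAID-DP's real diagonal construction is more involved (the diagonals include the row-parity disk and one diagonal is deliberately omitted), so treat this only as the underlying XOR principle.

```python
from functools import reduce

# 4 data disks, each holding 4 blocks (integers stand in for block contents)
disks = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]

# Row parity (RAID 4 style): XOR across all disks for each stripe row
row_parity = [reduce(lambda a, b: a ^ b, [d[r] for d in disks]) for r in range(4)]

# Diagonal parity (simplified): XOR along (disk + row) mod 4 diagonals
diag_parity = [0, 0, 0, 0]
for i, disk in enumerate(disks):
    for r, block in enumerate(disk):
        diag_parity[(i + r) % 4] ^= block

# A single failed disk can be rebuilt from row parity alone
rebuilt = [row_parity[r] ^ reduce(lambda a, b: a ^ b,
                                  [d[r] for j, d in enumerate(disks) if j != 2])
           for r in range(4)]
print(rebuilt == disks[2])  # True: disk 2 reconstructed
```

The extra diagonal parity is what lets RAID-DP solve for two unknowns per stripe, i.e. survive a double disk failure.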

A) Unified Storage Data ONTAP -

The familiar FAS-series NetApp storage arrays/filers running the NetApp Data ONTAP OS.

B) High Performance SAN Storage E-Series –

The NetApp E5500 data storage system sets new standards for performance efficiency in application-driven environments. The E5500 is equally adept at supporting high-IOPS mixed workloads and databases, high-performance file systems, and bandwidth-intensive streaming applications. NetApp’s patent-pending Dynamic Disk Pools (DDP) simplifies traditional RAID management by distributing data parity information and spare capacity across a pool of drives. The modular flexibility of the E-Series—with three disk drive/controller shelves, multiple drive types, and a complete selection of interfaces—enables custom configurations optimized and able to scale as needed. The maximum storage density of the E5500 reduces rack space by up to 60%, power use by up to 40%, and cooling requirements by up to 39%.

- Maximum raw capacity: 28TB, 48TB, 240TB, up to 1.54PB

- Rack Unit 2U (12 or 24 drives) and 4U (60 drives)

- Maximum 16 shelves or 384 disks total

- 2/3/4TB SAS Disk Drives and 400/800GB SSD Disks (Mixed)

- 24GB ECC RAM (ECC stands for Error Correction Code)

- 8 x 10Gbps iSCSI IO ports, 8 x 8Gb SAS IO ports, 8 x 16Gb FC IO ports

- SANtricity OS 11.10

 

C) NetApp disk shelves

NetApp offers a full range of high-capacity, high-performance, and self-encrypting disk drives plus ultra-high-performance solid-state drives (SSDs). Disk shelf options let you optimize for capacity, performance, or versatility. NetApp Optical SAS interconnects simplify infrastructure while providing industry-leading performance.

- Nondisruptive controller upgrades

- Self-managing Virtual Storage Tier technologies, including Flash Pool, optimize data placement on flash for maximum performance

- Supported disk types: SATA, SAS, SSD, and flash disks

a) DS2246 – 2U rack units, 24 drives per enclosure, 12 drives per rack unit, optical SAS support

b) DS4246 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, optical SAS support, 2TB/3TB/4TB disk drives

c) DS4486 – 4U rack units, 24 drives per enclosure, 12 drives per rack unit, optical SAS support, 4TB disk drives, tandem (dual) drives

d) DS4243 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, optical SAS support; the DS4243 disk shelf is no longer available in new system shipments.

For details please visit NetApp page- http://www.netapp.com/us/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs.aspx

 

D) NetApp Software -

    * NetApp FilerView Administration Tool - a GUI tool that was used to manage NetApp filers. However, if you plan to run Data ONTAP 8.1 or later software, you need to use the OnCommand System Manager software instead.

* NetApp FlexArray Virtualization Software – FlexArray enables you to connect your existing storage arrays to FAS8000 controllers using your Fibre Channel SAN fabric. Array LUNs are provisioned to the FAS8000 and collected into a storage pool from which NetApp volumes are created and shared out to SAN hosts and NAS clients; the new volumes are managed by the FAS8000. FlexArray has the flexibility to serve both SAN and NAS protocols at the same time without any complex add-on components, making the FAS8000 an ideal storage virtualization platform.

* NetApp DataMotion – DataMotion data migration software lets you move data from one logical or physical storage device to another, without disrupting operations. You can keep your shared storage infrastructure running as you add capacity, refresh infrastructure, and balance performance. You can use DataMotion for vFiler and Volumes.

* NetApp Deduplication and Compression – NetApp data compression is a new feature that compresses data as it is written to NetApp FAS and V-Series storage systems. Like deduplication, NetApp data compression works in both SAN and NAS environments. NetApp data deduplication combines the benefits of granularity, performance, and resiliency to give you a significant advantage in the race to meet ever-increasing storage capacity demands.
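
The core idea of block-level deduplication combined with compression can be sketched in a few lines: hash each block, store unique blocks only once (compressed), and keep references for duplicates. This is a conceptual illustration, not ONTAP's actual engine; the block size and names are arbitrary.

```python
import hashlib
import zlib

def store_blocks(data, block_size=4096):
    """Toy block-level dedup + compression (illustrative only)."""
    store = {}   # digest -> compressed unique block
    refs = []    # logical view: one digest per block, duplicates included
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:                  # first occurrence: store it
            store[digest] = zlib.compress(block)
        refs.append(digest)                      # duplicates cost one reference
    return refs, store

data = b"A" * 8192 + b"B" * 4096   # two identical 4K blocks plus one unique
refs, store = store_blocks(data)
print(len(refs), len(store))       # 3 logical blocks, only 2 stored
```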

* NetApp Flash Pool – is a NetApp Data ONTAP feature (introduced in version 8.1.1) that enables mixing regular HDDs with SSDs at an aggregate level. NetApp Flash Pool, an integral component of the NetApp Virtual Storage Tier, enables automated storage tiering. NetApp Flash Pool lets you mix solid state disk (SSD) technology and hard disk (HDD) technology at the aggregate level, to achieve SSD-like performance at HDD-like prices.

E)  NetApp Protection Software –

* Snapmirror – data replication technology provides disaster recovery protection and simplifies the management of data replication.

* MetroCluster – high-availability and disaster recovery software delivers continuous availability, transparent failover protection, and zero data loss.

* SnapVault – software speeds and simplifies backup and data recovery, protecting data at the block level.

* Open Systems SnapVault (OSSV) – software leverages block-level incremental backup technology to protect Windows, Linux/UNIX, SQL Server, and VMware systems running on mixed storage.

* SnapRestore – data recovery software uses stored Data ONTAP Snapshot copies to recover anything from a single file to multi-terabyte volumes, in seconds.

F) NetApp StorageGRID - NetApp StorageGRID object storage software enables secure management of petabyte-scale distributed content repositories. Eliminating the typical constraints of data containers in blocks and files, the StorageGRID application offers secure, intelligent, and scalable data storage and management in a single global namespace. It optimizes metadata management and content placement through a global policy engine with built-in security. StorageGRID software automates the lifecycle of stored content by managing how files and objects are stored, placed, protected, and retrieved.

 

Thank You,
Arun Bagul

Sep 16

Introduction-

NetApp storage supports multiple protocols to access data, such as NFS, CIFS (SMB), FTP, and WebDAV. This article explains how to create a NetApp volume and export it using NFS.

Step 1) Check aggregate space, and ping the storage from the server (where you will be mounting the volume) to verify connectivity -

# ping -c5 -M do -s 8972 192.168.0.10

netapp-filer1> df -hA
Aggregate                total       used      avail capacity
aggr2                     16TB       14TB     2320GB      86%
aggr3                     16TB       14TB     1681GB      90%
aggr0                   1490GB     1251GB      239GB      84%
aggr1                     16TB       15TB     1511GB      91%
aggr4                     12TB     5835GB     7044GB      45%
netapp-filer1>

Step 2) Create Volume -

netapp-filer1> vol create myvolume_bkup  -l en_US -s volume  aggr1 500g
Creation of volume 'myvolume_bkup' with size 500g on containing aggregate
'aggr1' has completed.

Step 3) Disable or Change snapshot and Reserve -

netapp-filer1> vol options myvolume_bkup
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=73400, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off

netapp-filer1> vol options myvolume_bkup  nosnap on
netapp-filer1> snap reserve myvolume_bkup  0

netapp-filer1> df -h  myvolume_bkup
Filesystem               total       used      avail capacity  Mounted on
/vol/myvolume_bkup/      500GB      176KB      499GB       0%  /vol/myvolume_bkup/
/vol/myvolume_bkup/.snapshot        0TB        0TB        0TB     —%  /vol/myvolume_bkup/.snapshot

Step 4) Export the volume via NFS -

netapp-filer1> exportfs -p sec=sys,rw=192.168.0.25,root=192.168.0.25,nosuid  /vol/myvolume_bkup

Step 5) /etc/fstab entry on the server -
192.168.0.10:/vol/myvolume_bkup     /backup nfs     defaults,hard,rw,rsize=65536,wsize=65536,proto=tcp 0 0

Thank you,
Arun Bagul

Posted on 15-09-2014
Filed Under (General information) by Arun Bagul

How to Import/Export GPG Keys-

Step 1) List GPG Keys -

[root@test-host ~]# gpg -kv
/root/.gnupg/pubring.gpg
------------------------
pub  1024D/F9F17DC2 2012-09-27 Test GPG key (Created by Arun) <arun@my.com>
sub  2048g/F173E2CC 2012-09-27

pub  1024D/5A6C12B1 2013-02-25 Test2 <abagul@my.com>
sub  1024g/CA7BF220 2013-02-25

Step 2) How to Export GPG Key -

[root@test-host ~]# gpg --armor --export --output /tmp/mykey.pub -r '5A6C12B1'
[root@test-host ~]# cat /tmp/mykey.pub
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)

[root@test-host ~]# gpg --armor --export-secret-key -r 5A6C12B1 --output /tmp/mykey.pri
[root@test-host ~]# cat /tmp/mykey.pri
-----BEGIN PGP PRIVATE KEY BLOCK-----
Version: GnuPG v1.2.6 (GNU/Linux)

Step 3) How to Import GPG Keys -

[arunb@test-host2 ~]$ gpg --import /tmp/mykey.pri
gpg: keyring `/saba/arunb/.gnupg/secring.gpg' created
gpg: key 5A6C12B1: secret key imported
gpg: key 5A6C12B1: public key Test2 <abagul@my.com> imported

[arunb@test-host2 ~]$ gpg --import /tmp/mykey.pub
gpg: key 5A6C12B1: key Test2 <abagul@my.com> 2 new signatures imported
[arunb@test-host2 ~]$

Step 4) Now Test GPG Encryption/Decryption -

[arunb@test-host2 ~]$ echo "arunb" | gpg -v --no-tty --passphrase-fd 0 --output /tmp/output.csv --decrypt /tmp/mytest.csv.pgp
gpg: public key is CA7BF220
gpg: using secondary key CA7BF220 instead of primary key 5A6C12B1
gpg: using secondary key CA7BF220 instead of primary key 5A6C12B1
gpg: encrypted with 1024-bit ELG-E key, ID CA7BF220, created 2013-02-25
“Test2 <abagul@my.com>”
gpg: AES256 encrypted data
gpg: original file name='mytest_1_1.csv'
[arunb@test-host2 ~]$

Thank you,
Arun Bagul


Introduction-

The NIC types available for a VM depend on the VM hardware version and the guest OS (operating system). When you configure a virtual machine, you can add network adapters (NICs) and specify the adapter type…

The following NIC types are widely used:

E1000 -
Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.

E1000e – This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the “e1000e” vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere.

VMXNET2 (Enhanced) -

Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. VMXNET 2 (Enhanced) is available only for some guest operating systems on ESX/ESXi 3.5 and later.

VMXNET3 -

Next generation of a paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. VMXNET 3 is not related to VMXNET or VMXNET 2.
- VMXNET 3 is supported only for virtual machines version 7 and later.
- Supports 10Gbps (10 Gigabit) networking
- Jumbo frames supported

I would suggest using VMXNET3.

Thank you,
Arun

Posted on 01-09-2014
Filed Under (VMware/ESXi) by Shanino Rodrigues

Unable to extend a VM disk from the vCenter console… option greyed out

Reason: Whenever you want to extend a VM disk, make sure all snapshots are deleted for that particular VM first.

Posted on 01-09-2014
Filed Under (Virtualization, VMware/ESXi) by Arun Bagul

Introduction-

When you create a VM (Virtual Machine) on a VMware-based virtualization platform, VMware creates a few VM configuration files in a folder with the VM's name in the datastore (local storage or NFS/SAN). Please find below the table which describes the file types in VMware…

 

Extension    File Name          Description                                                     Format
.vmx         vmname.vmx         Virtual machine configuration file.                             ASCII
.vmxf        vmname.vmxf        Additional virtual machine configuration file, available,
                                for example, with teamed virtual machines.                      ASCII
.vmdk        vmname.vmdk        Virtual disk file.                                              ASCII
-flat.vmdk   vmname-flat.vmdk   Preallocated virtual disk in binary format.                     Binary
.vswp        vmname.vswp        Swap file.
.nvram       vmname.nvram       Non-volatile RAM. Stores virtual machine BIOS information.
             or nvram
.vmss        vmname.vmss        Virtual machine suspend file.
.log         vmware.log         Virtual machine log file.                                       ASCII
-#.log       vmware-#.log       Old virtual machine log files. # is a number starting with 1.   ASCII

 

Thank you,
Arun Bagul

Posted on 31-08-2014
Filed Under (Virtualization, VMware/ESXi) by Arun Bagul

Introduction-

Sometimes we need to log in to an ESXi server to check hardware, networking, and performance stats. Sharing a few important ESXi commands…

a)  ESXi NIC List

~ # esxcfg-nics  --list
Name    PCI           Driver      Link Speed    Duplex MAC Address       MTU    Description
vmnic0  0000:01:00.00 tg3   Up   1000Mbps  Full  XX:10:55:DD:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic1  0000:01:00.01 tg3   Up   1000Mbps  Full  XX:10:55:67:CC:XX 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic2  0000:02:00.00 tg3   Up   1000Mbps  Full  XX:10:55:65:CC:YY 1500   Broadcom BCM5720 Gigabit Ethernet
vmnic3  0000:02:00.01 tg3   Up   1000Mbps  Full  XX:10:55:23:CC:00 1500   Broadcom BCM5720 Gigabit Ethernet
~ #
~ # esxcli network ip interface  list
vmk0
Name: vmk0
MAC Address: 24:b6:fd:XX:XX:YY
Enabled: true
Portset: vSwitch0
Portgroup: Management Network
VDS Name: N/A
VDS UUID: N/A
VDS Port: N/A
VDS Connection: -1
MTU: 1500
TSO MSS: 65535
Port ID: 33554438

b)  ESXi Storage/iSCSI stats

~# esxcli storage san iscsi stats get
Adapter: vmhba34
Total Number of Sessions: 20
Total Number of Connections: 20
IO Data Sent: 2647449088
IO Data Received: 107921345640
Command PDUs: 15509582
Read Command PDUs: 12353055
Write Command PDUs: 3156497
Bidirectional Command PDUs: 0
No-data Command PDUs: 30
Response PDUs: 15509582
R2T PDUs: 0
Data-in PDUs: 0
Data-out PDUs: 0
Task Mgmt Request PDUs: 0
Task Mgmt Response PDUs: 0
Login Request PDUs: 20
Login Response PDUs: 20
Text Request PDUs: 0
Text Response PDUs: 0
Logout Request PDUs: 0
Logout Response PDUs: 0
NOP-Out PDUs: 1767885
NOP-In PDUs: 1767885
Async Event PDUs: 0
SNACK PDUs: 0
Reject PDUs: 0
Digest Errors: 0
Timeouts: 0
No Tx Buf Count: 0
No Rx Data Count: 232170
~ #

 

c)  ESXi  ping-

Check connectivity to storage, jumbo frame etc

~ # vmkping  -c 5 -s 8972 192.168.7.243
PING 192.168.7.243 (192.168.7.243): 8972 data bytes
8980 bytes from 192.168.7.243: icmp_seq=0 ttl=64 time=2.104 ms
8980 bytes from 192.168.7.243: icmp_seq=1 ttl=64 time=0.693 ms
8980 bytes from 192.168.7.243: icmp_seq=2 ttl=64 time=0.541 ms

d) VMKernel  VMNIC and Check connectivity with VMKernel Port

~ # esxcfg-vmknic  --list
Interface  Port Group/DVPort   IP Family IP Address     Netmask       Broadcast       MAC Address     MTU   TSO MSS Enabled Type
vmk0       Management Network  IPv4      192.168.7.5    255.255.252.0  192.168.7.255  XX:10:55:23:CC:00 1500  65535  true  STATIC
vmk1       iSCSI Kernel 1      IPv4      192.168.7.55   255.255.252.0  192.168.7.255  XX:10:XX:23:CC:YY 1500  65535  true  STATIC
vmk2       iSCSI Kernel 2      IPv4      192.168.7.155  255.255.252.0  192.168.7.255  00:50:56:XX:65:ZZ 1500  65535  true  STATIC     

~ # vmkping  -c 5 -s 8972 -I vmk1 192.168.7.243
PING 192.168.7.243 (192.168.7.243): 8972 data bytes
8980 bytes from 192.168.7.243: icmp_seq=0 ttl=64 time=0.747 ms
8980 bytes from 192.168.7.243: icmp_seq=1 ttl=64 time=0.481 ms
8980 bytes from 192.168.7.243: icmp_seq=2 ttl=64 time=0.523 ms
8980 bytes from 192.168.7.243: icmp_seq=3 ttl=64 time=0.615 ms
8980 bytes from 192.168.7.243: icmp_seq=4 ttl=64 time=0.504 ms

--- 192.168.7.243 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.481/0.574/0.747 ms
~ #

e) vSwitch list

~ # esxcfg-vswitch --list
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         47          128               1500    vmnic0,vmnic1
PortGroup Name        VLAN ID  Used Ports  Uplinks
NFS                   188      0           vmnic0,vmnic1
DMZ 192.168.X.0/24    1103     13          vmnic0,vmnic1
DMZ 192.168.Y.0/22    1102     22          vmnic0,vmnic1
DMZ 192.168.X.0/24    1101     8           vmnic0,vmnic1
Management Network    1102     1           vmnic0,vmnic1

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         128         3           128               1500    vmnic2
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 1        0        1           vmnic2

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch2         128         3           128               1500    vmnic3
PortGroup Name        VLAN ID  Used Ports  Uplinks
iSCSI Kernel 2        0        1           vmnic3
~ #

Thank You,
Arun

Posted on 30-08-2014
Filed Under (Virtualization, VMware/ESXi) by Arun Bagul

Introduction-

Last month, while working on an ESXi 5.1 disconnect issue, we analyzed ESXi logs for the past 3-4 months. Just sharing information related to the ESXi log rotation policy…

/var/log # esxcli system syslog config get
Default Rotation Size: 1024
Default Rotations: 8
Log Output: /scratch/log
Log To Unique Subdirectory: false
Remote Host: <none>
/var/log # cd /scratch/log
/vmfs/volumes/507a011b-acd45a80-9aed-e0db5501b632/log #

 

Thank you,
Arun Bagul


Difference between (Extended) ext2/3 and ext4 File System

* Ext2
-It was introduced in 1993. Developed by Remy Card.
-ext2 stands for second extended file system.
-This was developed to overcome the limitation of the original ext file system.
-ext2 does not have journaling feature.
-ext2 is recommended for flash drives, USB drives, etc.
-Maximum individual file size can be from 16GB to 2TB (depends on block size)
-Overall ext2 FS size can be from 2TB to 32TB

* Ext3
-It was introduced in 2001. Developed by Stephen Tweedie.
-ext3 stands for third extended file system.
-The main benefit of ext3 is that it allows journaling.
-Journaling has a dedicated area in the file system, where all changes are tracked. When the system crashes, the chances of file system corruption are lower because of journaling.
-Maximum individual file size can be from 16GB to 2TB
-Overall ext3 FS size can be from 2TB to 32TB
-There are three types of journaling available in ext3 file system.
1) Journal – both Metadata and Content are saved in the journal.
2) Ordered – Only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default.
3) Writeback – Only metadata is saved in the journal. Metadata might be journaled either before or after the
content is written to the disk.
-You can convert an ext2 file system to an ext3 file system directly (without backup/restore).
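
The crash-recovery benefit of journaling can be sketched with a toy journal that replays committed-but-unapplied changes at mount time. This is a conceptual model with invented names, not ext3's actual implementation:

```python
class JournaledFS:
    """Toy journal: a change hits the journal first, then the main FS.
    On mount, surviving journal entries are replayed (crash recovery)."""
    def __init__(self):
        self.disk = {}       # the "main" file system
        self.journal = []    # committed (key, value) entries not yet retired

    def write(self, key, value, crash_before_apply=False):
        self.journal.append((key, value))   # 1. journal the change
        if crash_before_apply:
            return                          # simulated crash at the worst moment
        self.disk[key] = value              # 2. apply to the main FS
        self.journal.remove((key, value))   # 3. retire the journal entry

    def mount(self):
        for key, value in self.journal:     # replay whatever the crash left
            self.disk[key] = value
        self.journal.clear()

fs = JournaledFS()
fs.write("a", 1)
fs.write("b", 2, crash_before_apply=True)  # crash: "b" exists only in journal
fs.mount()                                 # recovery replays the journal
print(fs.disk)  # {'a': 1, 'b': 2}
```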

* Ext4
-It was introduced in 2008.
-Ext4 stands for fourth extended file system.
-Starting from Linux Kernel 2.6.19 ext4 was available.
-Maximum individual file size can be from 16 GB to 16TB
-Overall maximum ext4 FS size is 1024PB (petabyte), 1PB = 1024TB (terabyte)
-Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
-You can also mount an existing ext3 FS as an ext4 FS (without having to upgrade it)
-ext4's default inode size is 256 bytes (in ext3 the inode size is 128 bytes)
-Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksumming, fast fsck, etc. All you need to know is that these new features improve the performance and reliability of the file system compared to ext3
-In ext4, you also have the option of turning the journaling feature “off”.
-Faster file system checking, as unallocated blocks are skipped during FS checks
-Improved timestamps, up to nanosecond precision, which also defers the year-2038 problem
-Online Defragmentation

What are extents?
-Ext3 uses a block mapping scheme (4KB blocks); the bigger the file, the larger the block map, which leads to slower handling.
-Ext4 introduces the concept of extents. An extent is basically a "bunch of blocks": it says "the data is in the next N contiguous blocks, i.e. one extent" instead of mapping each individual block separately.
-Ext4 supports extents of up to 128MB; this improves performance and also helps reduce fragmentation.
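
The saving in mapping overhead is easy to see: for a 100MB file, a per-block map needs one entry per 4KB block, while a single extent describes the whole contiguous run (illustrative numbers only):

```python
def block_map(start, nblocks):
    """ext3-style mapping: one entry per 4KB block."""
    return [start + i for i in range(nblocks)]

def extent_map(start, nblocks):
    """ext4-style mapping: one (first_block, length) entry per contiguous run."""
    return [(start, nblocks)]

nblocks = (100 * 1024 * 1024) // 4096   # a 100MB file in 4KB blocks
print(len(block_map(1000, nblocks)))    # 25600 mapping entries
print(len(extent_map(1000, nblocks)))   # 1 mapping entry
```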

Multiblock Allocation-
-Ext3 uses a block allocator that decides which free blocks will be used to write the data, but this allocator can only allocate one block at a time.
-Ext4 supports multiblock allocation, which allocates many blocks in a single call and avoids a lot of overhead.
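
The difference in allocator calls can be sketched as follows (purely illustrative; real allocators are far more sophisticated):

```python
def allocate_one_at_a_time(free, need):
    """ext3-style: one allocator call per block."""
    got, calls = [], 0
    while len(got) < need:
        got.append(free.pop(0))
        calls += 1
    return got, calls

def allocate_multiblock(free, need):
    """ext4-style: the whole range is requested in a single call."""
    return free[:need], 1

free_blocks = list(range(100))
_, calls = allocate_one_at_a_time(list(free_blocks), 32)
_, mcalls = allocate_multiblock(free_blocks, 32)
print(calls, mcalls)  # 32 1
```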

Thank you,
Arun Bagul

Posted on 31-01-2013
Filed Under (Perl & Python) by Arun Bagul

Introduction-

I wrote this simple LDAP caching unix daemon 2 years back, when we faced a lot of issues integrating Apache with LDAP authentication using the Apache LDAP auth module. We were able to configure it properly, but we ran into slowness issues.

Basically, we wanted to use Nagios (Check_MK Multisite) with LDAP authentication, so we wrote this unix daemon. As of now it is very simple (no threading/forking, and it is blocking); however, it has been working perfectly without any issue (for Nagios web interface authentication and a few other web-based tools, around 300+ users).
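
The caching idea behind such a daemon can be sketched as: hash the credentials, remember recent successful binds for a TTL, and contact LDAP only on a cache miss. The sketch below is a conceptual Python model (the real daemon is the Perl code linked below; all names here are invented):

```python
import hashlib
import time

class AuthCache:
    """Cache successful LDAP binds so repeated logins skip the LDAP round-trip."""
    def __init__(self, ldap_bind, ttl_seconds=300):
        self.ldap_bind = ldap_bind   # the real check: bind against LDAP
        self.ttl = ttl_seconds
        self.cache = {}              # credential digest -> expiry timestamp

    def authenticate(self, user, password):
        digest = hashlib.sha256(f"{user}:{password}".encode()).hexdigest()
        expiry = self.cache.get(digest)
        if expiry and expiry > time.time():
            return True              # cache hit: no LDAP call needed
        if self.ldap_bind(user, password):
            self.cache[digest] = time.time() + self.ttl
            return True
        return False                 # failures are never cached

ldap_calls = []
cache = AuthCache(lambda u, p: ldap_calls.append(u) or p == "secret")
print(cache.authenticate("arun", "secret"))  # True (LDAP consulted)
print(cache.authenticate("arun", "secret"))  # True (served from cache)
print(len(ldap_calls))                       # 1
```

Caching only a digest (never the plaintext password) and expiring entries keeps the window for stale credentials bounded by the TTL.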

Download Perl files-

* ldapcached.pl
http://www.indiangnu.org/wp-content/uploads/2013/ldapcached-pl.txt

* ldapcached-client.pl
http://www.indiangnu.org/wp-content/uploads/2013/ldapcached-client-pl.txt

* Custom Apache Handler – to use this daemon for basic authentication
http://www.indiangnu.org/wp-content/uploads/2013/MyHandler-pm.txt

root@arunb:~# cat /etc/init.d/ldapcached-initd.pl
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;

no warnings 'uninitialized';
if ( $ARGV[0] =~ m/start/ ) {
    Proc::Daemon::Init;
    my $continue = 1;
    $SIG{TERM} = sub { $continue = 0 };
    while ($continue) { eval { `/usr/local/ldapcached.pl --daemon`; }; }

} else { print " * Usage: $0 {start}"; }
#end
print "\n";
root@arunb:~#

Step 1] Run the daemon - (edit the LDAP-related variables)

Copy the ldapcached.pl perl file to the following location on your system: /usr/local/ldapcached.pl,
or change the path in the "ldapcached-initd.pl" startup file.

* Start process-

root@:~# /etc/init.d/ldapcached-initd.pl start
root@:~#

* Make sure only one ldapcached process is running:

root@:~# ps aux | grep ldap | grep -v grep
root 19441 0.0 0.0 6212 1408 ? S 15:03 0:00 /usr/bin/perl /etc/init.d/ldapcached-initd.pl start
root 19442 0.0 0.2 8656 4916 ? S 15:03 0:00 ldapcached
root@:~#

Step 2] How to test-

root@:~$ perl ldapcached-client.pl --client 'my-ldap-user' 'my-ldap-pass'
Failed
root@:~$

root@:~$ perl ldapcached-client.pl --client 'my-ldap-user' 'my-ldap-pass'
Pass
root@:~$

Step 3] How to use/integrate this in an application -

Say I want to use this ldapcached unix daemon for Apache basic authentication -

NOTE- Make sure to copy the MyHandler.pm module into the Perl module directory; check the Apache error log for any errors.

ScriptAlias /nagios/cgi-bin “/usr/lib64/nagios/cgi”

<Directory “/usr/lib64/nagios/cgi”>
Order allow,deny
Allow from all
AuthType Basic
AuthName “Nagios GUI”
PerlAuthenHandler Apache::MyHandler
Require valid-user
</Directory>

Thank you,
Arun
