Category: Storage & Backup


How to create CIFS/Windows share on NetApp Storage


Introduction –

A NetApp volume can be accessed over CIFS/SMB just like a Windows share. This is very useful when NetApp storage is used in a mixed Linux/Windows environment or with Windows-based products.

Step 1) Creating NetApp volume or use Qtree

First we need to create the “/vol/mycifs_share” NetApp volume; a qtree can be used as well.
Please refer to the article on creating a NetApp volume – http://www.indiangnu.org/2014/how-to-create-volume-in-netapp-and-how-to-nfs-export/
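If the volume does not exist yet, a minimal sketch of creating it in 7-Mode is shown below (the aggregate name aggr1 and the 100g size are assumptions for this example; see the linked article for details):

my-netapp1> vol create mycifs_share aggr1 100g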

Step 2) Change security style to NTFS (e.g. the mycifs_share volume)

my-netapp1> qtree security /vol/mycifs_share ntfs
Sun Jun 10 06:19:08 EDT [my-netapp1: wafl.quota.sec.change:notice]: security style for /vol/mycifs_share/ changed from unix to ntfs
my-netapp1>

Step 3) Creating CIFS/Windows Share

I assume that the NetApp filer has already been joined to AD (Active Directory/LDAP), the CIFS license is installed/configured, and the CIFS service is running on the NetApp. Now we will create the CIFS share and give permissions to users/groups…
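Before doing so, if you want to confirm these prerequisites, the following 7-Mode commands can be used as a quick check (output omitted here): license lists the installed licenses, cifs domaininfo shows the Active Directory domain the filer is joined to, and cifs sessions confirms that the CIFS service is up.

my-netapp1> license
my-netapp1> cifs domaininfo
my-netapp1> cifs sessions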

my-netapp1> cifs shares -add MyShare /vol/mycifs_share -comment "My Test Windows CIFS Share"
The share name ‘MyShare’ will not be accessible by some MS-DOS workstations
my-netapp1>

Step 4) Give CIFS Share Access

my-netapp1> cifs access MyShare "MYDOMAIN\USER_OR_GROUP" "Full Control"
1 share(s) have been successfully modified
my-netapp1>

NOTE – We can give full permission, i.e. "Full Control", or read-only permission, i.e. "Read".
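For example, to grant read-only access instead (MYDOMAIN\USER_OR_GROUP is a placeholder, as above):

my-netapp1> cifs access MyShare "MYDOMAIN\USER_OR_GROUP" "Read"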

Step 5) List CIFS Shares –

Filer: 192.168.10.50

my-netapp1> cifs shares
Name          Mount Point                       Description
----          -----------                       -----------
MyShare       /vol/mycifs_share                 My Test Windows CIFS Share

Step 6) Access CIFS/Share on Windows or Linux-

\\192.168.10.50\MyShare
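On Linux, the same share can be mounted over CIFS, for example (the mount point, user name and domain below are assumptions; adjust them to your environment):

# mkdir -p /mnt/myshare
# mount -t cifs //192.168.10.50/MyShare /mnt/myshare -o username=USER_OR_GROUP,domain=MYDOMAIN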

Thank you,
Arun Bagul

NetApp Product Overview


Introduction –

NetApp is a leading provider of high-speed, high-performance SAN/NAS storage. I have been working on various NetApp products as a Storage Admin for the past 5+ years. This article gives an overview of NetApp storage products and technologies…

* Data ONTAP – NetApp Data ONTAP is the Operating System (OS) running on NetApp storage. “Data ONTAP” supports Cluster-Mode as well as the traditional 7-Mode of operation (often referred to as ONTAP 7).

* NetApp SANtricity Storage OS offers a powerful, easy-to-use interface for administering E-series NetApp Storage.

    • Dynamic Disk Pools (DDPs) – greatly simplify traditional storage management with no idle spares to manage or reconfigure when drives are added or fail, thus providing the ability to automatically configure, expand  & scale storage. DDPs enable dynamic rebalancing of drive count changes.

    • Dynamic RAID-level migration changes the RAID level of a volume group on the existing drives without requiring the relocation of data. The software supports DDPs and RAID levels   0, 1, 3, 5, 6, and 10.

    • Dynamic volume expansion (DVE) – allows administrators to expand the capacity of an existing volume by using the free capacity on an existing volume group. DVE combines the new capacity with the original capacity for maximum performance and utilization.

    • Streamlined Performance Efficiency – Intelligent cache tiering, which uses the SANtricity SSD Cache feature, enhances performance and application response time. The SSD Cache feature provides intelligent read caching capability to identify and host the most frequently accessed blocks of data and leverages the superior performance and lower latency of solid-state drives (SSDs). This caching approach works in real time and in a data-driven fashion, and is always on, with no complicated policies to define the trigger for data movement between tiers

• Efficient Storage Provisioning – Thin provisioning delivers significant savings by separating the internal allocation of storage from the external allocation reported to hosts

* NetApp OnCommand System Manager – OnCommand System Manager is a simple yet powerful browser-based management tool that enables administrators to easily configure and manage individual NetApp storage systems or clusters of systems. OnCommand Unified Manager monitors and alerts on the health of your NetApp storage running on clustered Data ONTAP.

* WAFL and RAID-DP – NetApp introduced double-parity RAID, named RAID-DP, in 2003, starting with Data ONTAP 6. Since then it has become the default RAID group type used on NetApp storage. At the most basic layer, RAID-DP adds a second parity disk to each RAID group in an aggregate or traditional volume. A RAID group is the underlying construct that aggregates and traditional volumes are built upon. Each traditional NetApp RAID 4 group has some number of data disks and one parity disk, with aggregates and volumes containing one or more RAID 4 groups. Whereas the parity disk in a RAID 4 volume stores row parity across the disks in a RAID 4 group, the additional RAID-DP parity disk stores diagonal parity across the disks in a RAID-DP group.

A) Unified Storage Data ONTAP –

Common FAS-series NetApp storage arrays/filers running the NetApp Data ONTAP OS.

B) High Performance SAN Storage E-Series –

The NetApp E5500 data storage system sets new standards for performance efficiency in application-driven environments. The E5500 is equally adept at supporting high-IOPS mixed workloads and databases, high-performance file systems, and bandwidth-intensive streaming applications. NetApp’s patent-pending Dynamic Disk Pools (DDP) simplifies traditional RAID management by distributing data parity information and spare capacity across a pool of drives. The modular flexibility of the E-Series—with three disk drive/controller shelves, multiple drive types, and a complete selection of interfaces—enables custom configurations optimized and able to scale as needed. The maximum storage density of the E5500 reduces rack space by up to 60%, power use by up to 40%, and cooling requirements by up to 39%.

– Maximum RAW Capacity – 28TB, 48TB, 240TB up to 1.54PB

– Rack Unit 2U (12 or 24 drives) and 4U (60 drives)

– Maximum 16 shelves or 384 total disks

– 2/3/4TB SAS Disk Drives and 400/800GB SSD Disks (mixed)

– 24GB ECC (Error Correction Code) RAM

– 8 x 10Gbps iSCSI IO ports, 8 x 8Gb SAS IO ports, 8 x 16Gb FC IO ports

– SANtricity OS 11.10

 

C) NetApp disk shelves

NetApp offers a full range of high-capacity, high-performance, and self-encrypting disk drives plus ultra-high-performance solid-state drives (SSDs). Disk shelf options let you optimize for capacity, performance, or versatility. NetApp Optical SAS interconnects simplify infrastructure while providing industry-leading performance.

– Nondisruptive controller upgrades

– Self-managing Virtual Storage Tier technologies, including Flash Pool, optimize data placement on flash for maximum performance

– Supported Disk types – SATA, SAS, SSD and Flash disk

a) DS2246 – 2U Rack units, 24 Drives per enclosure, 12 Drives per rack unit, Optical SAS support

b) DS4246 – 4U Rack units, 24 Drives per enclosure, 6 Drives per rack unit, Optical SAS support, 2TB/3TB/4TB disk drives

c) DS4486 – 4U Rack units, 24 Drives per enclosure, 12 Drives per rack unit, Optical SAS support, 4TB disk drives, Tandem (dual) drives

d) DS4243 – 4U Rack units, 24 Drives per enclosure, 6 Drives per rack unit, Optical SAS support; the DS4243 disk shelf is no longer available in new system shipments.

For details please visit NetApp page- http://www.netapp.com/us/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs.aspx

 

D) NetApp Software –

* NetApp FilerView Administration Tool – a GUI tool that was used to manage the NetApp filer. However, if you plan to run Data ONTAP 8.1 or later software, you need to use the OnCommand System Manager software instead.

* NetApp FlexArray Virtualization Software – FlexArray enables you to connect your existing storage arrays to FAS8000 controllers using your Fibre Channel SAN fabric. Array LUNs are provisioned to the FAS8000 and collected into a storage pool from which NetApp volumes are created and shared out to SAN hosts and NAS clients; the new volumes are managed by the FAS8000. FlexArray has the flexibility to serve both SAN and NAS protocols at the same time without any complex add-on components, making the FAS8000 an ideal storage virtualization platform.

* NetApp DataMotion – DataMotion data migration software lets you move data from one logical or physical storage device to another, without disrupting operations. You can keep your shared storage infrastructure running as you add capacity, refresh infrastructure, and balance performance. You can use DataMotion for vFiler and Volumes.

* NetApp Deduplication and Compression – NetApp data compression is a new feature that compresses data as it is written to NetApp FAS and V-Series storage systems. Like deduplication, NetApp data compression works in both SAN and NAS environments. NetApp data deduplication combines the benefits of granularity, performance, and resiliency to give you a significant advantage in the race to meet ever-increasing storage capacity demands.

* NetApp Flash Pool – is a NetApp Data ONTAP feature (introduced in version 8.1.1) that enables mixing regular HDDs with SSDs at an aggregate level. NetApp Flash Pool, an integral component of the NetApp Virtual Storage Tier, enables automated storage tiering. NetApp Flash Pool lets you mix solid state disk (SSD) technology and hard disk (HDD) technology at the aggregate level, to achieve SSD-like performance at HDD-like prices.

E)  NetApp Protection Software –

* SnapMirror – data replication technology provides disaster recovery protection and simplifies the management of data replication.

* MetroCluster – high-availability and disaster recovery software delivers continuous availability, transparent failover protection, and zero data loss.

* SnapVault – software speeds and simplifies backup and data recovery, protecting data at the block level.

* Open Systems SnapVault (OSSV) – software leverages block-level incremental backup technology to protect Windows, Linux/UNIX, SQL Server and VMware systems running on mixed storage.

* SnapRestore – data recovery software uses stored Data ONTAP Snapshot copies to recover anything from a single file to multi-terabyte volumes, in seconds.

F) NetApp StorageGRID – NetApp StorageGRID object storage software enables secure management of petabyte-scale distributed content repositories. Eliminating the typical constraints of data containers in blocks and files, the StorageGRID application offers secure, intelligent, and scalable data storage and management in a single global namespace. It optimizes metadata management and content placement through a global policy engine with built-in security. StorageGRID software automates the lifecycle of stored content by managing how files and objects are stored, placed, protected, and retrieved.

 

Thank You,
Arun Bagul

How to create volume in NetApp and how to NFS export


Introduction

NetApp Storage supports multiple protocols to access data, such as NFS, CIFS (SMB), FTP and WebDAV. This article explains how to create a NetApp volume and export it using NFS.

Step 1) Check aggregate space and ping storage connectivity from the server where you will be mounting the volume –

# ping -c5 -M do -s 8972 192.168.0.10

(The -s 8972 payload with -M do verifies jumbo-frame connectivity at MTU 9000 without fragmentation: 8972 data bytes + 8 ICMP header bytes + 20 IP header bytes = 9000.)

netapp-filer1> df -hA
Aggregate                total       used      avail capacity
aggr2                     16TB       14TB     2320GB      86%
aggr3                     16TB       14TB     1681GB      90%
aggr0                   1490GB     1251GB      239GB      84%
aggr1                     16TB       15TB     1511GB      91%
aggr4                     12TB     5835GB     7044GB      45%
netapp-filer1>

Step 2) Create Volume –

netapp-filer1> vol create myvolume_bkup  -l en_US -s volume  aggr1 500g
Creation of volume ‘myvolume_bkup’ with size 500g on containing aggregate
‘aggr1’ has completed.

Step 3) Disable or Change snapshot and Reserve –

netapp-filer1> vol options myvolume_bkup
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=73400, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off

netapp-filer1> vol options myvolume_bkup  nosnap on
netapp-filer1> snap reserve myvolume_bkup  0

netapp-filer1> df -h  myvolume_bkup
Filesystem               total       used      avail capacity  Mounted on
/vol/myvolume_bkup/      500GB      176KB      499GB       0%  /vol/myvolume_bkup/
/vol/myvolume_bkup/.snapshot        0TB        0TB        0TB     —%  /vol/myvolume_bkup/.snapshot

Step 4) Exports NFS –

netapp-filer1> exportfs -p sec=sys,rw=192.168.0.25,root=192.168.0.25,nosuid  /vol/myvolume_bkup
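To verify the export, you can list the active exports and review the persistent export rules (a quick check in 7-Mode; output omitted):

netapp-filer1> exportfs
netapp-filer1> rdfile /etc/exports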

Step 5) /etc/fstab entry on the server –
192.168.0.10:/vol/myvolume_bkup     /backup nfs     defaults,hard,rw,rsize=65536,wsize=65536,proto=tcp 0 0
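With the fstab entry in place, the volume can be mounted on the server, for example (a minimal sketch; the /backup mount point comes from the entry above):

# mkdir -p /backup
# mount /backup
# df -h /backup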

Thank you,
Arun Bagul

Configure the Tape Library in Linux


To check whether the autoloader/library is detected or not, run the following command:

root@indiangnu.org:/home/arun# cat /proc/scsi/scsi

If it does not show the changer, do the following steps.

Check the version of Redhat Linux

If it is Redhat Linux 3

Add the following lines to /etc/modules.conf:

"options scsi_mod max_luns=255"

"options scsi_mod scsi_noreportlun=1"

If it is Redhat Linux 4

Add the following lines to /etc/modprobe.conf:

"options scsi_mod max_luns=255"

"options scsi_mod scsi_noreportlun=1"

 

Then create a new initrd file…

root@indiangnu.org:/home/arun# cd /boot

Rename the existing initrd file:

root@indiangnu.org:/home/arun# mv initrd-`uname -r`.img initrd-`uname -r`.main

e.g. mv initrd-2.6.9-42.EL.img initrd-2.6.9-42.EL.main

Then build the new initrd:

root@indiangnu.org:/home/arun# mkinitrd initrd-`uname -r`.img `uname -r`

e.g. mkinitrd initrd-2.6.9-42.EL.img 2.6.9-42.EL

At the end, REBOOT your Linux box.

Then check again using the first command: cat /proc/scsi/scsi
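After the reboot, if the changer still does not appear, on 2.6 kernels you can usually force a rescan of the SCSI bus (a sketch; host0 is an assumption, repeat for each SCSI host adapter present):

root@indiangnu.org:/home/arun# echo "- - -" > /sys/class/scsi_host/host0/scan
root@indiangnu.org:/home/arun# cat /proc/scsi/scsi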

Thank you,

Ravi Bhure

RAID 0+1 — Optimize for Performance and Redundancy


RAID (Redundant Array of Independent Disks) is a set of technology standards for teaming disk drives to improve fault tolerance and performance.

RAID Levels

Level   Name
0       Striping
1       Mirroring
2       Parallel Access with Specialized Disks
3       Synchronous Access with Dedicated Parity Disk
4       Independent Access with Dedicated Parity Disk
5       Independent Access with Distributed Parity
6       Independent Access with Double Parity

Choosing a RAID Level

Each RAID level represents a set of trade-offs between performance, redundancy, and cost.

RAID 0 — Optimized for Performance

RAID 0 uses striping to write data across multiple drives simultaneously. This means that when you write a 5GB file across 5 drives, 1GB of data is written to each drive. Parallel reading of data from multiple drives can have a significant positive impact on performance.

The trade-off with RAID 0 is that if one of those drives fails, all of your data is lost and you must restore from backup.

RAID 0 is an excellent choice for cache servers, where the actual data being stored is of little value, but performance is very important.

RAID 1 — Optimized for Redundancy

RAID 1 uses mirroring to write data to multiple drives. This means that when you write a file, the file is actually written to two disks. If one of the disks fails, you simply replace it and rebuild the mirror.

The tradeoff with RAID 1 is cost. With RAID 1, you must purchase double the amount of storage space that your data requires.

RAID 5 — A Good Compromise

RAID 5 stripes data across multiple disks. RAID 5, however, adds a parity check bit to the data. This slightly reduces available disk capacity, but it also means that the RAID array continues to function if a single disk fails. In the event of a disk failure, you simply replace the failed disk and keep going.

The tradeoffs with RAID 5 are a small performance penalty on write operations and a slight decrease in usable storage space.

RAID 0+1 — Optimize for Performance and Redundancy

RAID 0+1 combines the performance of RAID 0 with the redundancy of RAID 1.

To build a RAID 0+1 array, you first build a set of RAID 1 mirrored disks and you then combine these disk sets in a RAID 0 striped array.
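As a concrete illustration, this nesting can be sketched on Linux with mdadm software RAID (the device names below are assumptions; many distributions also offer mdadm's native --level=10 as an alternative). The first two commands build the mirrored pairs, and the third stripes across them:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1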

A RAID 0+1 array can survive the loss of one disk from each mirrored pair. RAID 0+1 cannot survive the loss of two disks in the same mirrored pair.

Thank you,

Ravi Bhure

Quota Management


Introduction-

Disk space management and disk space allocation among users and groups is one of the critical tasks of a System Administrator, and the System Admin has to consider all perspectives before finalizing any policy. Quota is a system administration tool for monitoring and limiting users' and/or groups' disk usage, per file system. Disk quotas can be configured for individual users as well as user groups. This kind of flexibility makes it possible to give each user a small quota to handle “personal” files (such as email and reports), while allowing the projects they work on to have more sizable quotas (assuming the projects are given their own groups).

Quota provides two ways to set limits…

1) The number of inodes that may be allocated to a user or a group.
2) The amount of disk space (number of disk blocks, in kilobytes) that may be allocated to a user or a group.

In addition, quotas can be set not just to control the number of disk blocks consumed but also the number of inodes. Because inodes are used to hold file-related information, this allows control over the number of files that can be created. Using quota, the system administrator prevents users from consuming unlimited disk space on a system. Quota is handled on a per user/group, per file system basis and must be set for each file system separately. The system administrator is alerted before a user consumes too much disk space or a partition becomes full.

What are the types of Quota Format?

Answer – There are four types of quota formats/protocols; you can use any of them…
1) vfsold – original quota format (version 1 quota)
2) vfsv0 – new quota format (version 2 quota)
3) rpc – use RPC calls (quota over NFS) and
4) xfs – quota on XFS filesystem

Steps to Configure Quota on File System –

Step 1] Check kernel support –

The first thing you need to do is ensure that your kernel has been built with quota support enabled. Nowadays, once you have installed the quota package, kernel quota support is generally already enabled. Still, you can confirm this as shown below…
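One quick way to check is to look at the kernel configuration (a sketch; the config file path depends on your distribution, and you should see CONFIG_QUOTA=y if quota support is built in):

root@arunbagul:~# grep -i quota /boot/config-$(uname -r)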

Step 2] Enable quotas per file system by modifying /etc/fstab –

root@arunbagul:~# cat /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
LABEL=/home /home ext3 defaults,usrquota,grpquota 1 2
/dev/hda2 swap swap defaults 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
root@arunbagul:~#

NOTE – ‘usrquota’ for user quota and ‘grpquota’ for group quota…

The /etc/fstab file contains information about the various file systems installed/mounted on your Linux server. Quota must be enabled in the /etc/fstab file before you can use it, and it must be set for each file system separately. Check /etc/fstab above, in which the /home file system has both user and group quotas enabled. Depending on your intentions and needs, you can enable quota only for users, only for groups, or for both.

Step 3] Activate/enable quota on the file system (aquota.user and aquota.group) –

After modifying the /etc/fstab file for quota, remount each file system whose fstab entry has been changed. If the file system is not in use by any process, use the umount command followed by mount to remount it; if the file system is currently in use, the easiest method is to reboot the system. In short, we need to remount the file system (or reboot) to activate/establish quota on it.
Once quota is activated on a given file system, the quota files (aquota.user and aquota.group for the version 2 format, quota.user and quota.group for version 1) will be created at the root of that file system.

command (1) quotacheck – scans a filesystem for disk usage, and creates, checks and repairs quota files (aquota.user and aquota.group). quotacheck examines each filesystem, builds a table of current disk usage, and compares this table against the one recorded in the disk quota file for the filesystem (this step is omitted if option -c is specified). If any inconsistencies are detected, both the quota file and the current system copy of the incorrect quotas are updated. By default, only user quotas are checked. quotacheck expects each filesystem to be checked to have quota files named aquota.user and aquota.group (quota.user and quota.group for the old format) located at the root of the associated filesystem. If a file is not present, quotacheck will create it.

-u, --user
Only user quotas listed in /etc/mtab or on the filesystems specified are to be checked. This is the default action.

-g, --group
Only group quotas listed in /etc/mtab or on the filesystems specified are to be checked.

-c, --create-files
Don't read existing quota files. Just perform a new scan and save it to disk. quotacheck also skips scanning of old quota files when they are not found.

-v, --verbose
quotacheck reports its operation as it progresses. Normally it operates silently. If the option is specified twice, also the current directory is printed (note that printing can slow down the scan measurably).

root@arunbagul:/home# quotacheck -ugcv /home
quotacheck: Mountpoint (or device) /home not found.
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
root@arunbagul:/home#

NOTE – Since the /home file system had not yet been remounted with the usrquota and grpquota options, the quotacheck command failed to find a quota-enabled file system. So complete step (2) and remount the file system…

root@arunbagul:# umount /home
root@arunbagul:#
root@arunbagul:/home# mount | grep /home
/dev/sda8 on /home type ext3 (rw)
root@arunbagul:/home#

root@arunbagul:/home# mount -a
root@arunbagul:/home# mount | grep /home
/dev/sda8 on /home type ext3 (rw,usrquota,grpquota)
root@arunbagul:/home#

Now run the quotacheck command….

root@arunbagul:/home# ls -F
arun/ guest/
root@arunbagul:/home#

root@arunbagul:/home# quotacheck -ugcv /home
quotacheck: Cannot remount filesystem mounted on /home read-only so counted values might not be right.
Please stop all programs writing to filesystem or use -m flag to force checking.
root@arunbagul:/home#

root@arunbagul:/home# quotacheck -ugcvm /home
quotacheck: Scanning /dev/sda8 [/home] done
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Checked 7090 directories and 64268 files
quotacheck: Old file not found.
quotacheck: Old file not found.
root@arunbagul:/home#

root@arunbagul:/home# ls -F
aquota.group aquota.user arun/ guest/
root@arunbagul:/home#

** Use the quotaon/quotaoff commands to turn quota on/off for a file system

root@arunbagul:/home# quotaon /home
root@arunbagul:/home#

** Note – please don't run the command below at this point (we are in the process of enabling quota!); it is shown here only for reference

root@arunbagul:/home# quotaoff /home
root@arunbagul:/home#

Step 4] How to report Quota –

command (2) repquota – summarizes quotas for a filesystem. repquota prints a summary of the disk usage and quotas for the specified file systems. For each user the current number of files and amount of space (in kilobytes) is printed, along with any quotas created with edquota. As repquota has to translate the IDs of all users/groups to names, it may take a while to print the report on systems with many users.

-a, --all
Report on all filesystems indicated in /etc/mtab to be read-write with quotas.

-v, --verbose
Report all quotas, even if there is no usage. Be also more verbose about quotafile information.

-g, --group
Report quotas for groups.

-u, --user
Report quotas for users. This is the default.

root@arunbagul:/home/arun# repquota /home
*** Report for user quotas on device /dev/sda8
Block grace time: 7days; Inode grace time: 7days
                         Block limits                 File limits
User             used    soft    hard  grace     used  soft  hard  grace
-------------------------------------------------------------------------
root        --  237368       0       0            3231     0     0
www-data    --  151352       0       0             359     0     0
nobody      --   21680       0       0            1875     0     0
arun        -- 3473212       0       0           44268     0     0
ftp         --       4       0       0               1     0     0

root@arunbagul:/home/arun# repquota -u /home
*** Report for user quotas on device /dev/sda8
Block grace time: 7days; Inode grace time: 7days
                         Block limits                 File limits
User             used    soft    hard  grace     used  soft  hard  grace
-------------------------------------------------------------------------
root        --  237368       0       0            3231     0     0
www-data    --  151352       0       0             359     0     0
nobody      --   21680       0       0            1875     0     0
arun        -- 3473212       0       0           44268     0     0
ftp         --       4       0       0               1     0     0

root@arunbagul:/home/arun# repquota -g /home
*** Report for group quotas on device /dev/sda8
Block grace time: 7days; Inode grace time: 7days
                         Block limits                 File limits
Group            used    soft    hard  grace     used  soft  hard  grace
-------------------------------------------------------------------------
root        --  258992       0       0            5102     0     0
www-data    --  113188       0       0             252     0     0
plugdev     --      52       0       0               3     0     0
nogroup     --       4       0       0               1     0     0
admin       --  224236       0       0           21623     0     0
arun        -- 3473212       0       0           44268     0     0
subversion  --   38164       0       0             107     0     0

root@arunbagul:/home/arun# repquota -ug /home
*** Report for user quotas on device /dev/sda8
Block grace time: 7days; Inode grace time: 7days
                         Block limits                 File limits
User             used    soft    hard  grace     used  soft  hard  grace
-------------------------------------------------------------------------
root        --  237368       0       0            3231     0     0
www-data    --  151352       0       0             359     0     0
nobody      --   21680       0       0            1875     0     0
arun        -- 3473212       0       0           44268     0     0
ftp         --       4       0       0               1     0     0

*** Report for group quotas on device /dev/sda8
Block grace time: 7days; Inode grace time: 7days
                         Block limits                 File limits
Group            used    soft    hard  grace     used  soft  hard  grace
-------------------------------------------------------------------------
root        --  258992       0       0            5102     0     0
www-data    --  113188       0       0             252     0     0
plugdev     --      52       0       0               3     0     0
nogroup     --       4       0       0               1     0     0
admin       --  224236       0       0           21623     0     0
arun        -- 3473212       0       0           44268     0     0
subversion  --   38164       0       0             107     0     0

root@arunbagul:/home/arun#

Step 5] Assigning Quotas per User/Group –

command (3) edquota – edquota is a quota editor. One or more users or groups may be specified on the command line. If a number is given in place of a user/group name it is treated as a UID/GID. Setting a quota to zero indicates that no quota should be imposed. Users are permitted to exceed their soft limits for a grace period that may be specified per filesystem. Once the grace period has expired, the soft limit is enforced as a hard limit.

-u, --user
Edit the user quota. This is the default.

-g, --group
Edit the group quota.

-f, --filesystem filesystem
Perform specified operations only for the given filesystem (default is to perform operations for all filesystems with quota).

-t, --edit-period
Edit the soft time limits for each filesystem. In the new quota format time limits must be specified (there is no default value set in the kernel). Time units of 'seconds', 'minutes', 'hours', and 'days' are understood. Time limits are printed in the greatest possible time unit such that the value is greater than or equal to one.

root@arunbagul:/home/arun# edquota -u arun
root@arunbagul:/home/arun#

===>

Disk quotas for user arun (uid 1000):
Filesystem        blocks   soft   hard  inodes   soft   hard
/dev/sda8        3473212      0      0   44268      0      0

NOTE – once you run the above command, it opens the default editor configured on your system with content like that shown above in a temporary file. Modify the values and save the file, and the quota will be applied to that user or group.

root@arunbagul:/home/arun# edquota -g www-data
root@arunbagul:/home/arun#

===>
Disk quotas for group www-data (gid 33):
Filesystem        blocks   soft   hard  inodes   soft   hard
/dev/sda8         113188      0      0     252      0      0

root@arunbagul:/home/arun# edquota -ug arun -f /home
root@arunbagul:/home/arun#

** How to change soft time limits for each filesystem…(by default it is 7 days)

root@arunbagul:/home/arun# edquota -t -f /home
OR
root@arunbagul:/home/arun# edquota -t
root@arunbagul:/home/arun#

===>
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
Filesystem          Block grace period   Inode grace period
/dev/sda8                 7days                7days

command (4) setquota – is a command line quota editor. The filesystem, user/group name and new quotas for this filesystem can be specified on the command line. Note that if a number is given in place of a user/group name it is treated as a UID/GID.

-r, --remote
Edit also the remote quota: use rpc.rquotad on the remote server to set the quota. This option is available only if the quota tools were compiled with support for setting quotas over RPC enabled.

-F, --format=quotaformat
Perform setting for the specified format (i.e. don't perform format auto-detection). Possible format names are: vfsold (version 1 quota), vfsv0 (version 2 quota), rpc (quota over NFS), xfs (quota on XFS filesystem).

-u, --user
Set user quotas for the named user. This is the default.

-g, --group
Set group quotas for the named group.

-t, --edit-period
Set grace times for users/groups. The block-grace and inode-grace times are specified in seconds.

-T, --edit-times
Alter times for an individual user/group when the softlimit is enforced. The block-grace and inode-grace times are specified in seconds or can be the string 'unset'.

-a, --all
Go through all filesystems with quota in /etc/mtab and perform the setting.

** How to use it ?

setquota [-u|-g] [-r] [-F quotaformat] <user|group> <block-softlimit> <block-hardlimit> <inode-softlimit> <inode-hardlimit> -a|<filesystem>…
setquota [-u|-g] [-r] [-F quotaformat] <-p protouser|protogroup> <user|group> -a|<filesystem>…
setquota [-u|-g] [-F quotaformat] -t <blockgrace> <inodegrace> -a|<filesystem>…
setquota [-u|-g] [-F quotaformat] <user|group> -T <blockgrace> <inodegrace> -a|<filesystem>…

root@arunbagul:~# setquota -u arun 1000 1500 0 0 /home
root@arunbagul:~# setquota -g arun 1000 1500 0 0 /home
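In the examples above, block limits are in kilobytes: user (and group) arun gets a soft block limit of 1000KB and a hard block limit of 1500KB, with the inode soft/hard limits left at 0 (no limit). To verify the result, a quick check (a sketch):

root@arunbagul:~# quota -u arun
root@arunbagul:~# repquota -ug /home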

Step 6] What are soft and hard limits –

* Soft – the limit a user/group may exceed temporarily; once the soft limit is crossed, the grace period starts counting down.
* Hard – the absolute maximum amount of space or number of files a user/group can use; it can never be exceeded.
* Grace Period – if used, users may exceed their soft limits, up to their hard limits, for the number of days specified by the grace period.
After the grace period expires, the user can no longer exceed their soft limit.

command (5) quotatool – is a tool for manipulating filesystem quotas. Depending on the command line options given, it can set hard or soft limits on block and inode usage, and set and reset grace periods, for both users and (if your system supports this) groups. The filesystem to set the quota on is given as the first (and only) non-option argument, and it is either the block special file (e.g. /dev/sda3) or the mount point (e.g. /home) of the filesystem.

Step 7] How to install Quota –

** How to install Quota on Debian/Ubuntu system…

root@arunbagul:~# apt-get install quota quotatool
Reading package lists… Done
Building dependency tree
…..
Setting up quota (3.14-8) …
Setting up quotatool (1.4.9-2) …

root@arunbagul:~#

** How to install Quota on RedHat (RHEL)/Fedora/CentOS/SuSE/Mandriva and other RPM-based distributions –

root@arunbagul:~# rpm -ivh <Quota package name>
………….
……….
root@arunbagul:~#

Step 8] Kernel parameters for Quota –

** Check status of kernel parameters values BEFORE activating Quota –

root@arunbagul:/home/arun/perl-prog# sysctl -a | grep quota
fs.quota.lookups = 0
fs.quota.drops = 0
fs.quota.reads = 0
fs.quota.writes = 0
fs.quota.cache_hits = 0
fs.quota.allocated_dquots = 0
fs.quota.free_dquots = 0
fs.quota.syncs = 16
fs.quota.warnings = 1
root@arunbagul:/home/arun/perl-prog#

** Check the Quota parameters values AFTER activating Quota –

root@arunbagul:/home/arun# sysctl -a | grep quota
fs.quota.lookups = 826
fs.quota.drops = 440
fs.quota.reads = 7
fs.quota.writes = 0
fs.quota.cache_hits = 819
fs.quota.allocated_dquots = 7
fs.quota.free_dquots = 0
fs.quota.syncs = 16
fs.quota.warnings = 1
root@arunbagul:/home/arun#

command (6) quotastats – you can use this command to query the kernel for quota statistics (the parameters shown above).

root@arunbagul:~# quotastats
Kernel quota version: 6.5.1
Number of dquot lookups: 920
Number of dquot drops: 534
Number of dquot reads: 7
Number of dquot writes: 0
Number of quotafile syncs: 16
Number of dquot cache hits: 913
Number of allocated dquots: 7
Number of free dquots: 0
Number of in use dquot entries (user/group): 7
root@arunbagul:~#

IMP NOTE :: Quotas over NFS – Since NFS maps remote users to local users, set the quotas on the local users that you plan to map the remote users to.

Thank you,
Arun Bagul