Month: September 2014

CloudStack: An Overview of the Open Source IaaS Platform.


Here’s an open source IaaS platform to set up an on-demand, elastic cloud computing service. It enables utility computing services by allowing cloud service providers to offer self-service compute instances, storage volumes and networking configurations, and to set up a private cloud for internal use.

Apache CloudStack is an open source, multi-tenanted cloud orchestration platform, which is used to build private, public and hybrid IaaS clouds by pooling computing resources. It manages computing, networking and storage resources. CloudStack is hypervisor agnostic; it supports multiple virtualisation platforms such as KVM, vSphere and XenServer. It also supports the Amazon Web Services API, apart from its own APIs.

Features and use cases:
CloudStack supports Citrix XenServer, VMware vSphere and KVM on Ubuntu or CentOS. It can manage multiple geographically distributed data centres. The CloudStack API gives programmatic access to all managed resources, which makes it easy to build command line tools. Multi-node installation support and load balancing make it highly available. In addition, MySQL replication is also useful for maintaining high availability.

Service providers and organisations use CloudStack to set up an elastic and on-demand IaaS. It can also be used to set up an on-premise private cloud behind the organisation’s firewall for internal purposes like gaining better control over infrastructure.


A host is a computer that provides the computing resources, such as CPU, memory, storage and networking, to run the virtual machines. Each host has a hypervisor installed to manage the VMs. As CloudStack is hypervisor agnostic, multiple hypervisor-enabled servers such as a Linux KVM-enabled server, a Citrix XenServer server and an ESXi server can be used.

A cluster consists of one or more hosts and one or more primary storage servers. In other words, a cluster can be considered a set of XenServer servers or a set of KVM servers.

One or more primary storage servers are coupled with a cluster; they store the disk volumes for all the VMs running on hosts in that specific cluster.

A CloudStack pod represents a single rack. A CloudStack pod consists of one or more clusters of hosts and one or more primary storage servers. Hosts in the same pod are in the same subnet.

A zone typically corresponds to a single data centre, although it is permissible to have multiple zones in a data centre. Pods are contained within zones, and each zone can contain one or more pods. Zones can be public or private.

Secondary storage is shared by all the pods in a zone; it stores templates, ISO images and disk volume snapshots.

A CloudStack installation consists of a management server and the cloud infrastructure. Hosts, storage and IP addresses are managed by the management server. The minimum installation consists of one machine running the CloudStack management server and another machine to act as the cloud infrastructure, that is, the host running the hypervisor software.

The management server manages cloud resources, and the administrator can manage and interact with the management server by using a UI and APIs. It also manages the assignment of guest VMs to particular hosts, the assignment of public and private IP addresses, templates and ISO images, as well as snapshots.

The CloudStack API:

 The CloudStack API supports three access roles: root admin, domain admin and user. The root admin can access all the features in addition to both virtual and physical resource management; the domain admin can access only the virtual resources that belong to the administrator’s domain, while the user can access the features that allow the management of the user’s virtual machines, storage and network.

To use the CloudStack API, you need to know the URL of the CloudStack management server, your API key and secret key, HTTP GET/POST and query strings, XML or JSON, and a language such as Java or PHP.
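As a rough illustration, here is a minimal shell sketch of a signed API call. The endpoint, keys and the Python one-liner used for URL-encoding are placeholders/assumptions; CloudStack signs requests by sorting the query parameters alphabetically, lower-casing them, computing an HMAC-SHA1 with the secret key and Base64-encoding the result:

#!/bin/bash
# Minimal sketch of a signed CloudStack API call (endpoint and keys are examples).
API_URL="http://localhost:8080/client/api"
API_KEY="your-api-key"
SECRET_KEY="your-secret-key"

# Query parameters, already in alphabetical order.
PARAMS="apikey=${API_KEY}&command=listVirtualMachines&response=json"

# Lower-case the sorted query string, as the signing scheme requires.
SORTED=$(echo "${PARAMS}" | tr '&' '\n' | sort | tr '\n' '&' | sed 's/&$//' | tr 'A-Z' 'a-z')

# HMAC-SHA1 with the secret key, then Base64-encode.
SIG=$(echo -n "${SORTED}" | openssl dgst -sha1 -hmac "${SECRET_KEY}" -binary | openssl base64)

# URL-encode the signature (here via a Python 2 one-liner) and send the request.
ENC=$(python -c "import urllib, sys; print urllib.quote_plus(sys.argv[1])" "${SIG}")
curl -s "${API_URL}?${PARAMS}&signature=${ENC}"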

Thanks,

Pravin Babar.


Shellshock ‘Deadly serious’ new tech bug found!


This issue affects all products which use the Bash shell and parse values of environment variables. This issue is especially dangerous as there are many possible ways Bash can be called by an application. Quite often if an application executes another binary, Bash is invoked to accomplish this. Because of the pervasive use of the Bash shell, this issue is quite serious and should be treated as such.

All versions prior to those listed as updates for this issue are vulnerable to some degree.

My affected OS is CentOS 6, with Bash version 4.1.2:

[root@host75 ~]# lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: CentOS
Description: CentOS release 6.4 (Final)
Release: 6.4
Codename: Final

[root@host75 ~]# bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Diagnostic Steps:

To test if your version of Bash is vulnerable to this issue, run the following command:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the output of the above command looks as follows:

vulnerable
this is a test

Hmm, my system is vulnerable!

[root@host75 ~]# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

You are using a vulnerable version of Bash. The patch used to fix this issue ensures that no code is allowed after the end of a Bash function. Thus, if you run the above example with the patched version of Bash, you should get an output similar to:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

If your system is vulnerable, update to the most recent version of the Bash package by running the following command:

$ yum update bash

Will this fix my Bash? Let's run the update and see:

[root@host75 ~]# yum update bash
Loaded plugins: fastestmirror, security, tmprepo
Loading mirror speeds from cached hostfile
epel/metalink | 15 kB 00:00
* base: centos.eecs.wsu.edu
* epel: mirrors.kernel.org
* extras: centos.chi.host-engine.com
* updates: mirror.raystedman.net
base | 3.7 kB 00:00
epel | 4.4 kB 00:00
epel/primary_db | 6.3 MB 00:05
extras | 3.3 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 5.3 MB 00:04
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package bash.x86_64 0:4.1.2-14.el6 will be updated
---> Package bash.x86_64 0:4.1.2-15.el6_5.1 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================
Updating:
bash x86_64 4.1.2-15.el6_5.1 updates 905 k

Transaction Summary
================================================================================================================================================
Upgrade 1 Package(s)

Total download size: 905 k
Is this ok [y/N]: y
Downloading Packages:
bash-4.1.2-15.el6_5.1.x86_64.rpm | 905 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : bash-4.1.2-15.el6_5.1.x86_64 1/2
Cleanup : bash-4.1.2-14.el6.x86_64 2/2
Verifying : bash-4.1.2-15.el6_5.1.x86_64 1/2
Verifying : bash-4.1.2-14.el6.x86_64 2/2

Updated:
bash.x86_64 0:4.1.2-15.el6_5.1

Complete!

Test whether the update patched your Bash:

[root@host75 ~]# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test


See the appropriate remediation article for specifics.

Functions written in Bash itself do not need to be changed, even if they are exported with “export -f”. Bash will transparently apply the appropriate naming when exporting, and reverse the process when importing function definitions.
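For example, function export/import works like this (a trivial illustration; myfunc is just a made-up name):

$ myfunc() { echo "hello from myfunc"; }
$ export -f myfunc
$ bash -c 'myfunc'    # the child shell imports and runs the function
hello from myfunc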

Ref:
http://www.bbc.com/news/technology-29361794
https://www.us-cert.gov/ncas/current-activity/2014/09/24/Bourne-Again-Shell-Bash-Remote-Code-Execution-Vulnerability
https://access.redhat.com/articles/1200223
https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
http://lists.gnu.org/archive/html/bug-bash/2014-09/threads.html
https://rhn.redhat.com/errata/RHSA-2014-1306.html

Mangalyaan – India’s race for space success


Introduction-

Many, many congratulations to the ISRO (Indian Space Research Organisation) team for making Mangalyaan a success! We are proud of you all!


The short story of Mangalyaan-

India’s maiden mission to Mars, the Mangalyaan, has arrived in orbit after a 300-day marathon covering over 670 million kilometres.

“India will become the first Asian country to have achieved this and if it happens in the maiden attempt itself, India could become the first country in the world to have reached distant Mars on its own steam in the first attempt,” said ISRO chairman K Radhakrishnan as the spacecraft approached Mars.

“We have gone beyond the boundaries of human enterprise” – Narendra Modi (PM, India)

Ref Links-  http://www.isro.org/mars/home.aspx

http://en.wikipedia.org/wiki/Mars_Orbiter_Mission

Thank you,
Arun Bagul

Installation of VMware ESXi 5.0 Server



Introduction:

To get started with your installation of ESXi 5, insert the ESXi 5 disc into your server and start it up.

In Figure 1 below, you’ll see the first screen that greets you when you start your server. From this menu, choose the first option to start the ESXi 5 installer.

Figure 1: ESXi 5 boot menu

Once you choose the installation option, the installer provides you with a window that details the status of each file that needs to be loaded. Figure 2 shows you this screen. After that, you’re greeted with a familiar screen that shows you some information about your server, including the processor type and system RAM. The target machine for my sample installation is a virtual machine running on my laptop, hence the relatively minimal hardware configuration. You can see this screen in Figure 3.


Figure 2: Installer load status

Figure 3: Yet another boot screen!

With the preliminaries out of the way, the ESXi 5 installer truly kicks off with a welcome screen containing information regarding VMware’s Compatibility Guide. To continue with the installation process, press Enter.


Figure 4: Kick off the ESXi installation.

Of course, no installation would be complete without having to accept an end user license agreement. To accept the agreement as a part of the installer, press F11. If you don’t accept the agreement, press Escape to abort the installation. You can see this screen in Figure 5.

Figure 5: ESXi 5 end user license agreement

A location to which to install ESXi 5 is the first technical decision you have to make. In Figure 6 below, you can see that I have a single 40 GB volume from which to choose as an install location on my machine.


Figure 6: Choose an installation location for ESXi 5

Next up, choose your keyboard layout as US Default.

The root password on your ESXi 5 system is the key to your virtual kingdom, so choose with care. Make sure you provide a strong password. As you can see in figure 7, you have to provide the password twice to make sure you don’t include any typos.

Figure 7: Provide a password for the root user account

The ESXi installer now scans your system to get additional information.

Once that’s complete, you’re asked to confirm the installation by pressing F11.

Figure 8: Confirm the installation

Once you initiate the installation, your selected disk will be repartitioned. Throughout the process, the installer provides you with an installation status like the one shown in Figure 9.

Figure 9: Installation status

When the installation process has finished, you’ll get a message indicating success, as shown in Figure 10.

Figure 10: Installation is complete

The last screen you’ll see is a yellow and gray one like the one shown below. Take note of the IP address on the screen.

Figure 11: ESXi 5 server display
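At this point you can check the host from your workstation. Assuming the console shows, say, 192.168.1.100 (an example address; yours will differ), a quick sanity check is:

# Verify the new ESXi host responds on the network
ping -c 3 192.168.1.100

You can then browse to https://192.168.1.100/ to download the vSphere Client and connect to the new host.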


Thank You,

Arun Bagul.

How to create CIFS/Windows share on NetApp Storage


Introduction-

We can access a NetApp volume using CIFS/SMB, just like a Windows share. This is very useful when using NetApp storage in a mixed Linux/Windows environment or with Windows-based products.

Step 1) Create a NetApp volume or use a qtree

First we need to create the “/vol/mycifs_share” NetApp volume, or you can use a qtree as well.
Please refer article about creating NetApp volume – http://www.indiangnu.org/2014/how-to-create-volume-in-netapp-and-how-to-nfs-export/

Step 2) Change security style to NTFS (eg- mycifs_share volume)

my-netapp1> qtree security /vol/mycifs_share ntfs
Sun Jun 10 06:19:08 EDT [my-netapp1: wafl.quota.sec.change:notice]: security style for /vol/mycifs_share/ changed from unix to ntfs
my-netapp1>

Step 3) Creating CIFS/Windows Share

I assume that the NetApp filer has been joined to AD (Active Directory, LDAP), the CIFS license is installed/configured and the CIFS service is running on the NetApp. Now we will create the CIFS share and give permissions to users/groups…

my-netapp1> cifs shares -add MyShare /vol/mycifs_share -comment "My Test Windows CIFS Share"
The share name ‘MyShare’ will not be accessible by some MS-DOS workstations
my-netapp1>

Step 4) Give CIFS Share Access

my-netapp1> cifs access MyShare "MYDOMAIN\USER_OR_GROUP" "Full Control"
1 share(s) have been successfully modified
my-netapp1>

NOTE- We can give full permission, i.e. "Full Control", or read permission, i.e. "Read".

Step 5) List CIFS-

Filer: 192.168.10.50

my-netapp1> cifs shares
Name         Mount Point                       Description
----         -----------                       -----------
MyShare       /vol/mycifs_share                My Test Windows CIFS Share

Step 6) Access CIFS/Share on Windows or Linux-

\\192.168.10.50\MyShare
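For example, on Windows you can map the share from a command prompt, and on Linux mount it with cifs-utils (the drive letter, mount point and credentials below are placeholders):

# On Windows:
net use Z: \\192.168.10.50\MyShare

# On Linux (requires the cifs-utils package):
mkdir -p /mnt/myshare
mount -t cifs //192.168.10.50/MyShare /mnt/myshare -o username=myuser,domain=MYDOMAIN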

Thank you,
Arun Bagul

Selecting virtual SCSI Controllers for Disks (VMware VM)


Introduction-
To access virtual disks, a virtual machine uses virtual SCSI controllers. Each virtual disk that a virtual machine accesses through one of the virtual SCSI controllers resides in a VMFS datastore, an NFS-based datastore, or on a raw disk. The choice of SCSI controller does not affect whether your virtual disk is an IDE or SCSI disk.

The following virtual SCSI controllers are commonly used…

A) BusLogic
– This was one of the first emulated vSCSI controllers available in the VMware platform.
– It receives no updates and is considered legacy, kept for backward compatibility.

B) LSI Logic Parallel
– This was the other emulated vSCSI controller available originally in the VMware platform.
– Most operating systems had a driver that supported a queue depth of 32, and it became a very common choice, if not the default.
– Default for Windows 2003/Vista and Linux

C) LSI Logic SAS
– This is an evolution of the parallel driver to support a new, future-facing standard.
– It began to grow in popularity when Microsoft required its use for MSCS (Microsoft Cluster Service) within Windows 2008 or newer.
– Default for Windows 2008 or newer
– Linux guests SCSI disk hotplug works better with LSI Logic SAS
– Personally, I use this one.

D) VMware Paravirtual (aka PVSCSI)
– This vSCSI controller is virtualization aware and was designed to support very high throughput with minimal processing cost; it is therefore the most efficient driver.
– In the past, there were issues if it was used with virtual machines that didn’t do a lot of IOPS, but that was resolved in vSphere 4.1.

* PVSCSI and LSI Logic Parallel/SAS are essentially the same when it comes to overall performance capability.
* A total of four vSCSI adapters is supported per virtual machine. To get the best performance, distribute virtual disks across as many vSCSI adapters as possible.
* Why not IDE? – The IDE adapter completes one command at a time, while SCSI can queue commands, so the SCSI adapter is better optimized for parallel performance. Also, a maximum of four IDE devices is allowed per VM (including the CD-ROM), while SCSI allows 60 devices.
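To see which vSCSI controller a Linux guest is actually using, you can inspect the PCI bus and the SCSI host driver from inside the VM (a rough sketch; host0 may differ on your system):

lspci | grep -i -E 'scsi|lsi'
cat /sys/class/scsi_host/host0/proc_name   # mptspi = LSI parallel, mptsas = LSI SAS, vmw_pvscsi = PVSCSI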

Thank You,
Arun

NetApp Product Overview


Introduction –

NetApp is a leading provider of high-speed, high-performance SAN/NAS storage. I have been working on various NetApp products as a storage admin for the past five-plus years. This article gives an overview of NetApp storage products and technologies…

* Data ONTAP – NetApp Data ONTAP is the operating system (OS) running on NetApp storage. “Data ONTAP” supports both Cluster-Mode and the traditional 7-Mode.

* NetApp SANtricity Storage OS offers a powerful, easy-to-use interface for administering E-series NetApp Storage.

    • Dynamic Disk Pools (DDPs) – greatly simplify traditional storage management with no idle spares to manage or reconfigure when drives are added or fail, thus providing the ability to automatically configure, expand  & scale storage. DDPs enable dynamic rebalancing of drive count changes.

    • Dynamic RAID-level migration changes the RAID level of a volume group on the existing drives without requiring the relocation of data. The software supports DDPs and RAID levels   0, 1, 3, 5, 6, and 10.

    • Dynamic volume expansion (DVE) – allows administrators to expand the capacity of an existing volume by using the free capacity on an existing volume group. DVE combines the new capacity with the original capacity for maximum performance and utilization.

    • Streamlined Performance Efficiency – Intelligent cache tiering, which uses the SANtricity SSD Cache feature, enhances performance and application response time. The SSD Cache feature provides intelligent read caching capability to identify and host the most frequently accessed blocks of data and leverages the superior performance and lower latency of solid-state drives (SSDs). This caching approach works in real time and in a data-driven fashion, and is always on, with no complicated policies to define the trigger for data movement between tiers

    • Efficient Storage Provisioning – Thin provisioning delivers significant savings by separating the internal allocation of storage from the external allocation reported to hosts.

* NetApp OnCommand System Manager – OnCommand System Manager is a simple yet powerful browser-based management tool that enables administrators to easily configure and manage individual NetApp storage systems or clusters of systems. OnCommand Unified Manager monitors and alerts on the health of your NetApp storage running clustered Data ONTAP.

* WAFL and RAID-DP – NetApp introduced double-parity RAID, named RAID-DP, in 2003, starting with Data ONTAP 6.5. Since then it has become the default RAID group type used on NetApp storage. At the most basic layer, RAID-DP adds a second parity disk to each RAID group in an aggregate or traditional volume. A RAID group is an underlying construct that aggregates and traditional volumes are built upon. Each traditional NetApp RAID 4 group has some number of data disks and one parity disk, with aggregates and volumes containing one or more RAID 4 groups. Whereas the parity disk in a RAID 4 volume stores row parity across the disks in a RAID 4 group, the additional RAID-DP parity disk stores diagonal parity across the disks in a RAID-DP group.

A) Unified Storage Data ONTAP –

Common FAS-series NetApp storage arrays/filers running the NetApp Data ONTAP OS.

B) High Performance SAN Storage E-Series –

The NetApp E5500 data storage system sets new standards for performance efficiency in application-driven environments. The E5500 is equally adept at supporting high-IOPS mixed workloads and databases, high-performance file systems, and bandwidth-intensive streaming applications. NetApp’s patent-pending Dynamic Disk Pools (DDP) simplify traditional RAID management by distributing data parity information and spare capacity across a pool of drives. The modular flexibility of the E-Series—with three disk drive/controller shelves, multiple drive types, and a complete selection of interfaces—enables custom configurations that are optimized and able to scale as needed. The maximum storage density of the E5500 reduces rack space by up to 60%, power use by up to 40%, and cooling requirements by up to 39%.

- Maximum raw capacity- 28TB, 48TB, 240TB to 1.54PB

– Rack Unit 2U (12 or 24 drives) and 4U (60 drives)

- Maximum 16 shelves or 384 total disks

– 2/3/4TB SAS Disk Drives and 400/800GB SSD Disks (Mixed)

– 24GB ECC RAM (Error Correction Code RAM)

– 8 x 10Gbps iSCSI IO ports, 8 x 8Gb SAS IO ports, 8 x 16Gb FC IO ports

– SANtricity OS 11.10


C) NetApp disk shelves

NetApp offers a full range of high-capacity, high-performance, and self-encrypting disk drives plus ultra-high-performance solid-state drives (SSDs). Disk shelf options let you optimize for capacity, performance, or versatility. NetApp Optical SAS interconnects simplify infrastructure while providing industry-leading performance.

– Nondisruptive controller upgrades

– Self-managing Virtual Storage Tier technologies, including Flash Pool, optimize data placement on flash for maximum performance

- Supported disk types- SATA, SAS, SSD and flash disks

a) DS2246 – 2U rack units, 24 drives per enclosure, 12 drives per rack unit, optical SAS support

b) DS4246 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, optical SAS support, 2TB/3TB/4TB disk drives

c) DS4486 – 4U rack units, 48 drives per enclosure (tandem/dual-drive carriers), 12 drives per rack unit, optical SAS support, 4TB disk drives

d) DS4243 – 4U rack units, 24 drives per enclosure, 6 drives per rack unit, optical SAS support; the DS4243 disk shelf is no longer available in new system shipments.

For details please visit NetApp page- http://www.netapp.com/us/products/storage-systems/disk-shelves-and-storage-media/disk-shelves-tech-specs.aspx


D) NetApp Software –

* NetApp FilerView Administration Tool – a GUI tool that was used to manage NetApp filers. However, if you plan to run Data ONTAP 8.1 or later software, you need to use the OnCommand System Manager software instead.

* NetApp FlexArray Virtualization Software – FlexArray enables you to connect your existing storage arrays to FAS8000 controllers using your Fibre Channel SAN fabric. Array LUNs are provisioned to the FAS8000 and collected into a storage pool from which NetApp volumes are created and shared out to SAN hosts and NAS clients. The new volumes are managed by the FAS8000. FlexArray has the flexibility to serve both SAN and NAS protocols at the same time without any complex add-on components, making the FAS8000 an ideal storage virtualization platform.

* NetApp DataMotion – DataMotion data migration software lets you move data from one logical or physical storage device to another, without disrupting operations. You can keep your shared storage infrastructure running as you add capacity, refresh infrastructure, and balance performance. You can use DataMotion for vFiler and Volumes.

* NetApp Deduplication and Compression – NetApp data compression is a new feature that compresses data as it is written to NetApp FAS and V-Series storage systems. Like deduplication, NetApp data compression works in both SAN and NAS environments. NetApp data deduplication combines the benefits of granularity, performance, and resiliency to give you a significant advantage in the race to meet ever-increasing storage capacity demands.

* NetApp Flash Pool – is a NetApp Data ONTAP feature (introduced in version 8.1.1) that enables mixing regular HDDs with SSDs at an aggregate level. NetApp Flash Pool, an integral component of the NetApp Virtual Storage Tier, enables automated storage tiering. NetApp Flash Pool lets you mix solid state disk (SSD) technology and hard disk (HDD) technology at the aggregate level, to achieve SSD-like performance at HDD-like prices.
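As a rough sketch, enabling a Flash Pool on a 7-Mode system involves marking an existing aggregate as hybrid and then adding SSDs to it in a new RAID group (the aggregate and disk names below are made up, and the exact syntax varies by ONTAP release):

my-netapp1> aggr options aggr1 hybrid_enabled on
my-netapp1> aggr add aggr1 -g new -d 0c.10.0 0c.10.1 0c.10.2 0c.10.3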

E) NetApp Protection Software –

* SnapMirror – data replication technology that provides disaster recovery protection and simplifies the management of data replication (see the sketch after this list).

* MetroCluster – high-availability and disaster recovery software delivers continuous availability, transparent failover protection, and zero data loss.

* SnapVault – software speeds and simplifies backup and data recovery, protecting data at the block level.

* Open Systems SnapVault (OSSV) – software leverages block-level incremental backup technology to protect Windows, Linux/UNIX, SQL Server and VMware systems running on mixed storage.

* SnapRestore – data recovery software uses stored Data ONTAP Snapshot copies to recover anything from a single file to multi-terabyte volumes, in seconds.
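As a quick illustration, setting up a baseline SnapMirror relationship on 7-Mode Data ONTAP looks roughly like this, run on the destination filer (the filer and volume names are examples):

my-netapp2> vol create myvolume_mirror aggr1 500g
my-netapp2> vol restrict myvolume_mirror
my-netapp2> snapmirror initialize -S my-netapp1:myvolume_bkup my-netapp2:myvolume_mirror
my-netapp2> snapmirror status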

F) NetApp StorageGRID – NetApp StorageGRID object storage software enables secure management of petabyte-scale distributed content repositories. Eliminating the typical constraints of data containers in blocks and files, the StorageGRID application offers secure, intelligent, and scalable data storage and management in a single global namespace. It optimizes metadata management and content placement through a global policy engine with built-in security. StorageGRID software automates the lifecycle of stored content by managing how files and objects are stored, placed, protected, and retrieved.


Thank You,
Arun Bagul

How to create volume in NetApp and how to NFS export


Introduction

NetApp storage supports multiple protocols to access data, such as NFS, CIFS (SMB), FTP and WebDAV. This article explains how to create a NetApp volume and export it using NFS.

Step 1) Check aggregate space, and ping storage connectivity from the server (where you will be mounting the volume). The ping below sends 8972-byte packets with fragmentation disallowed, which verifies an end-to-end jumbo-frame (MTU 9000) path –

# ping -c5 -M do -s 8972 192.168.0.10

netapp-filer1> df -hA
Aggregate                total       used      avail capacity
aggr2                     16TB       14TB     2320GB      86%
aggr3                     16TB       14TB     1681GB      90%
aggr0                   1490GB     1251GB      239GB      84%
aggr1                     16TB       15TB     1511GB      91%
aggr4                     12TB     5835GB     7044GB      45%
netapp-filer1>

Step 2) Create Volume –

netapp-filer1> vol create myvolume_bkup -l en_US -s volume aggr1 500g
Creation of volume ‘myvolume_bkup’ with size 500g on containing aggregate
‘aggr1’ has completed.

Step 3) Disable or change the snapshot schedule and reserve –

netapp-filer1> vol options myvolume_bkup
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=73400, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off

netapp-filer1> vol options myvolume_bkup  nosnap on
netapp-filer1> snap reserve myvolume_bkup  0

netapp-filer1> df -h  myvolume_bkup
Filesystem               total       used      avail capacity  Mounted on
/vol/myvolume_bkup/      500GB      176KB      499GB       0%  /vol/myvolume_bkup/
/vol/myvolume_bkup/.snapshot        0TB        0TB        0TB     —%  /vol/myvolume_bkup/.snapshot

Step 4) Export via NFS –

netapp-filer1> exportfs -p sec=sys,rw=192.168.0.25,root=192.168.0.25,nosuid  /vol/myvolume_bkup

Step 5) /etc/fstab entry on the server –
192.168.0.10:/vol/myvolume_bkup     /backup nfs     defaults,hard,rw,rsize=65536,wsize=65536,proto=tcp 0 0
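Finally, mount and verify on the client (a quick sanity check; the /backup mount point must already exist):

# mount /backup
# df -h /backup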

Thank you,
Arun Bagul