The information is intended to outline our general product direction. It is intended for information purposes
only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or
functionality, and should not be relied upon in making purchasing decisions. NetApp makes no warranties,
expressed or implied, on future functionality and timeline. The development, release, and timing of any
features or functionality described for NetApp’s products remains at the sole discretion of NetApp.
NetApp's strategy and possible future developments, products, and/or platform directions and functionality
are all subject to change without notice. NetApp has no obligation to pursue any course of business
outlined in this document or any related presentation, or to develop or release any functionality mentioned
therein.
NetApp Lab on Demand (LOD)
Basic Concepts for
Clustered Data ONTAP 8.3
April 2015 | SL10220 v1.1
CONTENTS
Introduction
    Why clustered Data ONTAP?
    Lab Objectives
    Prerequisites
    Accessing the Command Line
Lab Environment
Lab Activities
    Clusters
    Create Storage for NFS and CIFS
    Create Storage for iSCSI
Appendix 1 – Using the clustered Data ONTAP Command Line
References
Version History
Introduction
This lab introduces the fundamentals of clustered Data ONTAP®. In it you will start with a pre-created 2-
node cluster and configure Windows 2012R2 and Red Hat Enterprise Linux 6.5 hosts to access storage on
the cluster using CIFS, NFS, and iSCSI.
Why clustered Data ONTAP?
A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server
virtualization. Before server virtualization, system administrators frequently deployed applications on
dedicated servers in order to maximize application performance and to avoid the instabilities often
encountered when combining multiple applications on the same operating system instance. While this
design approach was effective, it also had the following drawbacks:
• It does not scale well – adding new servers for every new application is extremely expensive.
• It is inefficient – most servers are significantly underutilized, meaning that businesses are not extracting the full benefit of their hardware investment.
• It is inflexible – re-allocating standalone server resources for other purposes is time consuming, staff intensive, and highly disruptive.
Server virtualization directly addresses all three of these limitations by decoupling the application instance
from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware,
meaning that businesses can now consolidate their server workloads to a smaller set of more effectively
utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a
pool of physical servers enables businesses to reduce the impact of downtime due to scheduled
maintenance activities.
Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server
virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a
single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data
ONTAP you can:
• Combine different types and models of NetApp storage controllers (known as nodes) into a shared physical storage resource pool (referred to as a cluster).
• Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the same storage cluster.
• Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data volumes, LUNs, CIFS shares, and NFS exports.
• Support multitenancy with delegated administration of SVMs. Tenants can be different companies, business units, or even individual application owners, each with their own distinct administrators whose admin rights are limited to just the assigned SVM.
• Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.
• Non-disruptively migrate live data volumes and client connections from one cluster node to another.
• Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during hardware refresh cycles.
• Leverage multiple nodes in the cluster to simultaneously service a given SVM's storage workloads. This means that businesses can scale out their SVMs beyond the bounds of a single physical node in response to growing storage and performance requirements, all non-disruptively.
• Apply software & firmware updates and configuration changes without downtime.
Lab Objectives
This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow
you to focus on the topics that are of specific interest to you. The “Clusters” section is required for all
invocations of the lab (it is a prerequisite for the other sections). If you are interested in NAS functionality
then complete the “Storage Virtual Machines for NFS and CIFS” section. If you are interested in SAN
functionality, then complete the “Storage Virtual Machines for iSCSI” section and at least one of its Windows or Linux subsections (you may do both if you so choose). If you are interested in nondisruptive operations then you will need to first complete one of the Storage Virtual Machine sections just mentioned before you can proceed to the “Nondisruptive Operations” section.
Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):
• Clusters (Required, ECT = 20 minutes)
o Explore a cluster
o View Advanced Drive Partitioning.
o Create a data aggregate.
o Create a Subnet.
• Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes)
o Create a Storage Virtual Machine.
o Create a volume on the Storage Virtual Machine.
o Configure the Storage Virtual Machine for CIFS and NFS access.
o Mount a CIFS share from the Storage Virtual Machine on a Windows client.
o Mount a NFS volume from the Storage Virtual Machine on a Linux client.
• Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections)
o Create a Storage Virtual Machine.
o Create a volume on the Storage Virtual Machine.
• For Windows (Optional, ECT = 40 minutes)
o Create a Windows LUN on the volume and map the LUN to an igroup.
o Configure a Windows client for iSCSI and MPIO and mount the LUN.
• For Linux (Optional, ECT = 40 minutes)
o Create a Linux LUN on the volume and map the LUN to an igroup.
o Configure a Linux client for iSCSI and multipath and mount the LUN.
This lab includes instructions for completing each of these tasks using either System Manager, NetApp’s
graphical administration interface, or the Data ONTAP command line. The end state of the lab produced
by either method is exactly the same, so use whichever method you are most comfortable with.
In a lab section you will encounter orange bars similar to the following that indicate the beginning of the
graphical or command line procedures for that exercise. A few sections only offer one of these two options
rather than both, in which case the text in the orange bar will communicate that point.

***EXAMPLE*** To perform this section’s tasks from the GUI: ***EXAMPLE***
Note that while switching back and forth between the graphical and command line methods from one
section of the lab guide to another is supported, this guide is not designed to support switching back and
forth between these methods within a single section. For the best experience we recommend that you stick
with a single method for the duration of a lab section.
Prerequisites
This lab introduces clustered Data ONTAP and so this guide makes no assumptions that the user has
previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system
related concepts such as RAID, CIFS, NFS, LUNs, and DNS.
This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume
that the lab user has a basic familiarity with Microsoft Windows.
This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from the Linux command line and assume a basic working knowledge of the Linux command line. A basic
working knowledge of a text editor such as vi may be useful, but is not required.
Accessing the Command Line
PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in
order to run command line commands.
1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host jumphost as
shown in the following screenshot; just double-click on the icon to launch it.
Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This
example shows a user connecting to the Data ONTAP cluster named cluster1.
1. By default PuTTY should launch into the “Basic options for your PuTTY session” display as shown in
the screenshot. If you accidentally navigate away from this view just click on the Session category item
to return to this view.
2. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to
open the connection. A terminal window will open and you will be prompted to log into the host. You
can find the correct username and password for the host in Table 1 in the “Lab Environment” section at
the beginning of this guide.
The clustered Data ONTAP command line supports a number of usability features that make the
command line much easier to use. If you are unfamiliar with those features then review “Appendix 1” of this
lab guide which contains a brief overview of them.
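If you prefer a standard command-line SSH client to PuTTY, you can reach the same clustershell directly. The following is a minimal sketch that assumes an OpenSSH-style client is available on your workstation; the address and credentials come from Table 1 in the “Lab Environment” section.

ssh admin@192.168.0.101
Password: Netapp1!
cluster1::> cluster show

Once the clustershell prompt appears, you can run the same Data ONTAP commands shown throughout this guide.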
Lab Environment
The following figure contains a diagram of the environment for this lab.
All of the servers and storage controllers presented in this lab are virtual devices, and the networks that
interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration
steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data
ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly
all of the same functionality as physical storage controllers, they are not capable of providing the same
performance as a physical controller, which is why these labs are not suitable for performance testing.
Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses and credentials.
Table 1: Lab Host Credentials
Hostname     Description                        IP Address(es)   Username           Password
JUMPHOST     Windows 2012R2 Remote Access host  192.168.0.5      DemoAdministrator  Netapp1!
RHEL1        Red Hat 6.5 x64 Linux host         192.168.0.61     root               Netapp1!
RHEL2        Red Hat 6.5 x64 Linux host         192.168.0.62     root               Netapp1!
DC1          Active Directory Server            192.168.0.253    DemoAdministrator  Netapp1!
cluster1     Data ONTAP cluster                 192.168.0.101    admin              Netapp1!
cluster1-01  Data ONTAP cluster node            192.168.0.111    admin              Netapp1!
cluster1-02  Data ONTAP cluster node            192.168.0.112    admin              Netapp1!
Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.
Table 2: Preinstalled NetApp Software
Hostname Description
JUMPHOST Data ONTAP DSM v4.1 for Windows MPIO, Windows Host Utility Kit v6.0.2
RHEL1, RHEL2 Linux Host Utilities Kit v6.2
Lab Activities
Clusters
Expected Completion Time: 20 Minutes
A cluster is a group of physical storage controllers, or nodes, that have been joined together for the
purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the
cluster can distribute its work across the member nodes. Communication and data transfer between
member nodes (such as when a client accesses data on a node other than the one actually hosting the
data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while
management and client data traffic passes over separate management and data networks configured on
the member nodes.
Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both
controllers in an HA pair actively host and serve data, but they are also capable of taking over their
partner’s responsibilities in the event of a service disruption by virtue of their redundant cable paths to each
other’s disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater
workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in
the cluster resource pool. This means that cluster expansion and technology refreshes can take place
while the cluster remains fully online, and serving data.
Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an
even number of controller nodes. There is one exception to this rule, and that is the “single node cluster”,
which is a special cluster configuration intended to support small storage deployments that can be satisfied
with a single physical controller head. The primary noticeable difference between single node and standard
clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single
node clusters can later be converted into traditional multi-node clusters, and at that point become subject to
all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA
pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further.
Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although
the node limit may be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters
that also host iSCSI and FC can scale up to a maximum of 8 nodes.
This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated
controller, also known as a vsim, is a virtual machine that simulates the functionality of a physical controller
without the need for dedicated controller hardware. The vsim is not designed for performance testing, but
does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O
to disks. This makes the vsim a powerful tool for exploring and experimenting with Data ONTAP product features. The vsim is limited when a feature requires a specific physical capability that the vsim does
not support; for example, vsims do not support Fibre Channel connections, which is why this lab uses
iSCSI to demonstrate block storage functionality.
This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes
Data ONTAP licenses, the cluster’s basic network configuration, and a pair of pre-configured HA
controllers. In this next section you will create the aggregates that are used by the SVMs that you will
create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature
introduced in clustered Data ONTAP 8.3.
Connect to the Cluster with OnCommand System Manager
OnCommand System Manager is NetApp’s browser-based management tool for configuring and managing
NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you
had to download and install on your client OS. In 8.3, System Manager has moved on board the cluster, so you just point your web browser to the cluster management address. The on-board System Manager interface is essentially the same as that of System Manager 3.1, the version you install on a client.
On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open
the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if
you prefer one of those. All three browsers already have System Manager set as the browser home page.
1. Launch Chrome to open System Manager.
The OnCommand System Manager Login window opens.
This section’s tasks can only be performed from the GUI:
1. Enter the User Name as admin and the Password as Netapp1! and then click Sign In.
System Manager is now logged in to cluster1 and displays a summary page for the cluster. If you are
unfamiliar with System Manager, here is a quick introduction to its layout.
Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab (1)
accesses configuration settings that apply to the cluster as a whole. The Storage Virtual Machines tab (2)
allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers). The Nodes tab
(3) contains configuration settings that are specific to individual controller nodes. Please take a few
moments to expand and browse these tabs to familiarize yourself with their contents.
Note: As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a
System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to
scroll down to see them. If this happens, you have two options: either increase the size of the browser
window (you might need to increase the resolution of your jumphost desktop to accommodate the larger
browser window), or in the System Manager window, use the tab key to cycle through all the various fields
and buttons, which eventually forces the window to scroll down to the non-visible items.
Advanced Drive Partitioning
Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical
storage in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical
connectivity (i.e., cabling) to a given controller head.
Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a
group of disks that are all physically attached to the same node. A given disk can only be a member of a
single aggregate.
By default each cluster node has one aggregate known as the root aggregate, which is a group of the
node’s local disks that host the node’s Data ONTAP operating system. A node’s root aggregate is
automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of the node cluster1-01 is named “aggr0_cluster1_01”, and the root aggregate of the node cluster1-02 is named “aggr0_cluster1_02”.
On higher end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller’s
root aggregate is not a burden, but for entry level FAS systems that only have 24 or 12 disks this root
aggregate disk overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity, NetApp has introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition and a much larger data partition. Data ONTAP allocates the root partitions to the node root aggregate, and the data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID Data
ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit here is
that a much higher percentage of the node’s overall disk capacity is now available to host user data.
Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs
installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system
installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning
other than to completely evacuate the affected HDDs and then re-install Data ONTAP.
All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of
HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3
also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the
scope of this lab.
In this section, you will see how to determine if a cluster node is utilizing Advanced Drive Partitioning.
System Manager provides a basic view into this information, but if you want to see more detail then you will
want to use the CLI.
1. In System Manager’s left pane, navigate to the Cluster tab.
2. Expand cluster1.
3. Expand Storage.
4. Click Disks.
5. In the main window, click on the Summary tab.
6. Scroll the main window down to the Spare Disks section, where you will see that each cluster node has
12 spare disks with a per-disk size of 26.88 GB. These spares represent the data partitions of the
physical disks that belong to each node.
To perform this section’s tasks from the GUI:
If you scroll back up to look at the “Assigned HDDs” section of the window, you will see that there are no
entries listed for the root partitions of the disks. Under daily operation, you will be primarily concerned with
data partitions rather than root partitions, and so this view focuses on just showing information about the
data partitions. To see information about the physical disks attached to your system you will need to select
the Inventory tab.
1. Click on the Inventory tab at the top of the Disks window.
System Manager’s main window now shows a list of the physical disks available across all the nodes in the
cluster, which nodes own those disks, and so on. If you look at the Container Type column you see that the
disks in your lab all show a value of “shared”; this value indicates that the physical disk is partitioned. For
disks that are not partitioned you would typically see values like “spare”, “data”, “parity”, and “dparity”.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time based on the quantity and size of the
available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and you
can see how your node’s root aggregates are consuming the root partitions on those disks by going to the
Aggregates page in System Manager.
1. On the Cluster tab, navigate to cluster1->Storage->Aggregates.
2. In the Aggregates list, select “aggr0_cluster1_01”, which is the root aggregate for cluster node cluster1-
01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used space
shown for this aggregate in your lab may vary from what is shown in this screenshot, depending on the
quantity and size of the snapshots that exist on your node’s root volume.
3. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now
displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB,
which is the size of the root partition on the disk. The Physical Space column displays the total capacity
of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the
disk’s root and data partitions.
If you do not already have a PuTTY session established to cluster1, then launch PuTTY as described in the
“Accessing the Command Line” section at the beginning of this guide, and connect to the host cluster1
using the username admin and the password Netapp1!.
To perform this section’s tasks from the command line:
1. List all of the physical disks attached to the cluster:
cluster1::> storage disk show
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
VMw-1.1 28.44GB - 0 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.2 28.44GB - 1 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.3 28.44GB - 2 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.4 28.44GB - 3 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.5 28.44GB - 4 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.6 28.44GB - 5 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.7 28.44GB - 6 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.8 28.44GB - 8 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.9 28.44GB - 9 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.10 28.44GB - 10 VMDISK shared aggr0_cluster1_01
cluster1-01
VMw-1.11 28.44GB - 11 VMDISK shared - cluster1-01
VMw-1.12 28.44GB - 12 VMDISK shared - cluster1-01
VMw-1.13 28.44GB - 0 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.14 28.44GB - 1 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.15 28.44GB - 2 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.16 28.44GB - 3 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.17 28.44GB - 4 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.18 28.44GB - 5 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.19 28.44GB - 6 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.20 28.44GB - 8 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.21 28.44GB - 9 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.22 28.44GB - 10 VMDISK shared aggr0_cluster1_02
cluster1-02
VMw-1.23 28.44GB - 11 VMDISK shared - cluster1-02
VMw-1.24 28.44GB - 12 VMDISK shared - cluster1-02
24 entries were displayed.
cluster1::>
The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster. The
container type for all the disks is “shared”, which indicates that the disks are partitioned. For disks that are
not partitioned, you would typically see values like “spare”, “data”, “parity”, and “dparity”. The Owner field
indicates which node the disk is assigned to, and the Container Name field indicates which aggregate the
disk is assigned to. Notice that two disks for each node do not have a Container Name listed; these are
spare disks that Data ONTAP can use as replacements in the event of a disk failure.
2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the
aggregates that exist on the cluster:
cluster1::> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01
10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp,
normal
aggr0_cluster1_02
10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp,
normal
2 entries were displayed.
cluster1::>
3. Now list the disks that are members of the root aggregate for the node cluster1-01. Here is the command
that you would ordinarily use to display that information for an aggregate that is not using partitioned
disks.
cluster1::> storage disk show -aggregate aggr0_cluster1_01
There are no entries matching your query.
Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status"
to get correct set of disks associated with these aggregates.
cluster1::>
4. As you can see, in this instance the preceding command is not able to produce a list of disks because
this aggregate is using shared disks. Instead it refers you to the “storage aggregate show-status” command to query the aggregate for a list of its assigned disk partitions.
cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01
Owner Node: cluster1-01
Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums)
Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0)
RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums)
Usable Physical
Position Disk Pool Type RPM Size Size Status
-------- --------------------------- ---- ----- ------ -------- -------- --------
shared VMw-1.1 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.2 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.3 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.4 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.5 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.6 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.7 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.8 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.9 0 VMDISK - 1.52GB 28.44GB (normal)
shared VMw-1.10 0 VMDISK - 1.52GB 28.44GB (normal)
10 entries were displayed.
cluster1::>
The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB,
and you know that the aggregate is using the listed disk’s root partitions because aggr0_cluster1_01 is a
root aggregate.
For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines
the size of the root and data disk partitions at system installation time. That determination is based on the
quantity and size of the available disks assigned to each node. As you saw earlier, this particular cluster
node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only assigned 10 of those
root partitions to the root aggregate so that the node would have 2 spare disks available to protect against disk failures.
5. The Data ONTAP CLI includes a diagnostic level command that provides a more comprehensive single
view of a system’s partitioned disks. The following command shows the partitioned disks that belong to
the node cluster1-01.
cluster1::> set -priv diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> disk partition show -owner-node-name cluster1-01
Usable Container Container
Partition Size Type Name Owner
------------------------- ------- ------------- ----------------- -----------------
VMw-1.1.P1 26.88GB spare Pool0 cluster1-01
VMw-1.1.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.2.P1 26.88GB spare Pool0 cluster1-01
VMw-1.2.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.3.P1 26.88GB spare Pool0 cluster1-01
VMw-1.3.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.4.P1 26.88GB spare Pool0 cluster1-01
VMw-1.4.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.5.P1 26.88GB spare Pool0 cluster1-01
VMw-1.5.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.6.P1 26.88GB spare Pool0 cluster1-01
VMw-1.6.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.7.P1 26.88GB spare Pool0 cluster1-01
VMw-1.7.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.8.P1 26.88GB spare Pool0 cluster1-01
VMw-1.8.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.9.P1 26.88GB spare Pool0 cluster1-01
VMw-1.9.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.10.P1 26.88GB spare Pool0 cluster1-01
VMw-1.10.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0
cluster1-01
VMw-1.11.P1 26.88GB spare Pool0 cluster1-01
VMw-1.11.P2 1.52GB spare Pool0 cluster1-01
VMw-1.12.P1 26.88GB spare Pool0 cluster1-01
VMw-1.12.P2 1.52GB spare Pool0 cluster1-01
24 entries were displayed.
cluster1::*> set -priv admin
cluster1::>
Create a New Aggregate on Each Cluster Node
The only aggregates that exist on a newly created cluster are the node root aggregates. The root
aggregate should not be used to host user data, so in this section you will be creating a new aggregate on
each of the nodes in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you
will be creating later in this lab.
A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the
storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to
use one or more specific aggregates to host the SVM’s volumes. Multiple SVMs can be assigned to use
the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an
aggregate to just a single SVM provides greater workload isolation.
For this lab, you will be creating a single user data aggregate on each node in the cluster.
To perform this section’s tasks from the GUI:
You can create aggregates from either the Cluster tab or the Nodes tab. For this exercise use the Cluster
tab as follows:
1. Select the Cluster tab. To avoid confusion, always double-check to make sure that you are working in
the correct left pane tab context when performing activities in System Manager!
2. Go to cluster1->Storage->Aggregates.
3. Click on the Create button to launch the Create Aggregate Wizard.
The Create Aggregate wizard window opens.
1. Specify the Name of the aggregate as aggr1_cluster1_01 as shown, and then click Browse.
The Select Disk Type window opens.
1. Select the Disk Type entry for the node cluster1-01.
2. Click the OK button.
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The “Disk Type” should now show as VMDISK. Set the “Number of Disks” to 5.
2. Click the Create button to create the new aggregate and to close the wizard.
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager. The newly created aggregate should now be visible in the list of aggregates.
1. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected.
2. Click the Details tab to view more detailed information about this aggregate’s configuration.
3. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP 8, an
aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 only supports 64-bit aggregates. If you
have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and you plan to upgrade
that cluster to 8.3, you must convert those 32-bit aggregates to 64-bit aggregates prior to the upgrade.
The procedure for that migration is not covered in this lab, so if you need further details then please
refer to the clustered Data ONTAP documentation.
Now repeat the process to create a new aggregate on the node cluster1-02.
1. Click the Create button again.
The Create Aggregate window opens.
1. Specify the Aggregate’s “Name” as aggr1_cluster1_02 and then click the Browse button.
The Select Disk Type window opens.
1. Select the Disk Type entry for the node cluster1-02.
2. Click the OK button.
The Select Disk Type window closes, and focus returns to the Create Aggregate window.
1. The “Disk Type” should now show as VMDISK. Set the Number of Disks to 5.
2. Click the Create button to create the new aggregate.
The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
The new aggregate aggr1_cluster1_02 now appears in the cluster’s aggregate list.
To perform this section’s tasks from the command line:
From a PuTTY session logged in to cluster1 as the username admin with the password Netapp1!, display a list of the disks attached to the node cluster1-01. (Note that you can omit the -nodelist option to
display a list of all the disks in the cluster.) By default the PuTTY window may wrap output lines because
the window is too small; if this is the case for you then simply expand the window by selecting its edge and
dragging it wider, after which any subsequent output will utilize the visible width of the window.
cluster1::> disk show -nodelist cluster1-01
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
VMw-1.25 28.44GB - 0 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.26 28.44GB - 1 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.27 28.44GB - 2 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.28 28.44GB - 3 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.29 28.44GB - 4 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.30 28.44GB - 5 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.31 28.44GB - 6 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.32 28.44GB - 8 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.33 28.44GB - 9 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.34 28.44GB - 10 VMDISK shared aggr0_cluster1_01 cluster1-01
VMw-1.35 28.44GB - 11 VMDISK shared - cluster1-01
VMw-1.36 28.44GB - 12 VMDISK shared - cluster1-01
VMw-1.37 28.44GB - 0 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.38 28.44GB - 1 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.39 28.44GB - 2 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.40 28.44GB - 3 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.41 28.44GB - 4 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.42 28.44GB - 5 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.43 28.44GB - 6 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.44 28.44GB - 8 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.45 28.44GB - 9 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.46 28.44GB - 10 VMDISK shared aggr0_cluster1_02 cluster1-02
VMw-1.47 28.44GB - 11 VMDISK shared - cluster1-02
VMw-1.48 28.44GB - 12 VMDISK shared - cluster1-02
24 entries were displayed.
cluster1::>
Create the aggregate named “aggr1_cluster1_01” on the node cluster1-01 and the aggregate named
“aggr1_cluster1_02” on the node cluster1-02.
cluster1::> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01 10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp,
normal
aggr0_cluster1_02 10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp,
normal
2 entries were displayed.
cluster1::> aggr create -aggregate aggr1_cluster1_01 -nodes cluster1-01 -diskcount 5
[Job 257] Job is queued: Create aggr1_cluster1_01.
[Job 257] Job succeeded: DONE
cluster1::> aggr create -aggregate aggr1_cluster1_02 -nodes cluster1-02 -diskcount 5
[Job 258] Job is queued: Create aggr1_cluster1_02.
[Job 258] Job succeeded: DONE
cluster1::> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cluster1_01 10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp,
normal
aggr0_cluster1_02 10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp,
normal
aggr1_cluster1_01 72.53GB 72.53GB 0% online 0 cluster1-01 raid_dp,
normal
aggr1_cluster1_02 72.53GB 72.53GB 0% online 0 cluster1-02 raid_dp,
normal
4 entries were displayed.
cluster1::>
Networks
Clustered Data ONTAP provides a number of network components that you use to manage your cluster.
Those components include:
Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps)
you can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of
associated characteristics such as an assigned home node, an assigned physical home port, a list of
physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only
be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes this
means that an SVM runs in part on all nodes that are hosting its LIFs.
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM
has it’s own routing table, changes to one SVM’s routing table does not have impact on any other SVM’s
routing table.
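For example, once an SVM exists you can view and manage its routes independently of any other SVM. The following clustershell sketch uses a hypothetical SVM named svm1 (you will not create routes in this section of the lab); the first command lists the SVM's routes and the second adds a default route through the lab gateway.

cluster1::> network route show -vserver svm1
cluster1::> network route create -vserver svm1 -destination 0.0.0.0/0 -gateway 192.168.0.1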
IPspaces are new in Data ONTAP 8.3 and allow you to configure a Data ONTAP cluster to logically
separate one IP network from another, even if those two networks are using the same IP address range.
IPspaces are a multi-tenancy feature designed to allow storage service providers to share a cluster between different companies while still separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace
is probably sufficient for most NetApp customers who are deploying a cluster within a single company or
organization that uses a non-conflicting IP address range.
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to
the same layer 2 networks, both physical and virtual (i.e., VLANs). Every IPspace has its own set of Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its
LIFs.
Subnets are another new feature in Data ONTAP 8.3, and are a convenience feature intended to make LIF
creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that
you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP
address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific
broadcast domain, so all the subnet’s addresses belong to the same layer 3 network. Data ONTAP
manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an
address from the pool then it will detect that the address is in use and mark it as such in the pool.
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can
share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To
use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the
SVM.
In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs
and LIFs. You will not create IPspaces or Broadcast Domains, as the system defaults are sufficient for this
lab.
To perform this section’s tasks from the GUI:
1. In the left pane of System Manager, select the Cluster tab.
2. In the left pane, navigate to cluster1->Configuration->Network.
3. In the right pane select the Broadcast Domains tab.
4. Select the Default broadcast domain.
Review the Port Details section at the bottom of the Network pane and note that the e0c – e0g ports on
both cluster nodes are all part of this broadcast domain. These are the network ports that you will be using
in this lab.
Now create a new Subnet for this lab.
1. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast
Domains and IPSpaces, Data ONTAP does not provide a default Subnet.
2. Click the Create button.
The Create Subnet window opens.
1. Set the fields in the window as follows.
 “Subnet Name”: Demo
 “Subnet IP/Subnet mask”: 192.168.0.0/24
 “Gateway”: 192.168.0.1
2. The values you enter in the “IP address” box depend on what sections of the lab guide you intend to
complete. It is important that you choose the right values here so that the values in your lab will
correctly match up with the values used in this lab guide.
• If you plan to complete just the NAS section or both the NAS and SAN sections then enter 192.168.0.131-192.168.0.139
• If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139
3. Click the Browse button.
The Select Broadcast Domain window opens.
1. Select the “Default” entry from the list.
2. Click the OK button.
The Select Broadcast Domain window closes, and focus returns to the Create Subnet window.
1. The values in your Create Subnet window should now match those shown in the following screenshot,
the only possible exception being for the IP Addresses field, whose value may differ depending on what
value range you chose to enter to match your plans for the lab.
Note: If you click the “Show ports on this domain” link under the Broadcast Domain textbox, you
can once again see the list of ports that this broadcast domain includes.
2. Click Create.
The Create Subnet window closes, and focus returns to the Subnets tab in System Manager. Notice that
the main pane of the Subnets tab now includes an entry for your newly created subnet, and that the
lower portion of the pane includes metrics tracking the consumption of the IP addresses that belong to this
subnet.
Feel free to explore the contents of the other available tabs on the Network page. Here is a brief summary
of the information available on those tabs.
The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset of the NICs
that you saw previously listed as belonging to the default broadcast domain. The other NICs you will see
listed on the Ethernet Ports tab include the node’s cluster network NICs.
The Network Interfaces tab displays a list of all of the LIFs on your cluster.
The FC/FCoE Adapters tab lists all the WWPNs for all the controller NICs in the event they will be used
for iSCSI or FCoE connections. The simulated NetApp controllers you are using in this lab do not include
FC adapters, and this lab does not make use of FCoE.
To perform this section’s tasks from the command line:
1. Display a list of the cluster’s IPspaces. A cluster actually contains two IPspaces by default: the Cluster IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes communicate with each other, and the Default IPspace, to which Data ONTAP automatically assigns all new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.
cluster1::> network ipspace show
IPspace Vserver List Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster
Cluster Cluster
Default
cluster1 Default
2 entries were displayed.
cluster1::>
2. Display a list of the cluster’s broadcast domains. Remember that broadcast domains are scoped to a
single IPspace. The e0a ports on the cluster nodes are part of the Cluster broadcast domain in the
Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::> network port broadcast-domain show
IPspace Broadcast Update
Name Domain Name MTU Port List Status Details
------- ----------- ------ ----------------------------- --------------
Cluster Cluster 1500
cluster1-01:e0a complete
cluster1-01:e0b complete
cluster1-02:e0a complete
cluster1-02:e0b complete
Default Default 1500
cluster1-01:e0c complete
cluster1-01:e0d complete
cluster1-01:e0e complete
cluster1-01:e0f complete
cluster1-01:e0g complete
cluster1-02:e0c complete
cluster1-02:e0d complete
cluster1-02:e0e complete
cluster1-02:e0f complete
cluster1-02:e0g complete
2 entries were displayed.
cluster1::>
3. Display a list of the cluster’s subnets.
cluster1::> network subnet show
This table is currently empty.
cluster1::>
Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific
command you will use depends on what sections of this lab guide you plan to complete, as you want to
correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide
that you want to complete.
4. If you plan to complete the NAS portion of this lab, enter the following command. Also use this command if you plan to complete both the NAS and SAN portions of this lab.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>
5. If you only plan to complete the SAN portion of this lab, then enter the following command instead.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default
-subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>
6. Re-display the list of the cluster’s subnets. This example assumes you plan to complete the whole lab.
cluster1::> network subnet show
IPspace: Default
Subnet Broadcast Avail/
Name Subnet Domain Gateway Total Ranges
--------- ---------------- --------- --------------- --------- ---------------
Demo 192.168.0.0/24 Default 192.168.0.1 9/9 192.168.0.131-192.168.0.139
cluster1::>
7. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following
command for that purpose.
cluster1::> network port show
Speed (Mbps)
Node Port IPspace Broadcast Domain Link MTU Admin/Oper
------ --------- ------------ ---------------- ----- ------- ------------
cluster1-01
e0a Cluster Cluster up 1500 auto/1000
e0b Cluster Cluster up 1500 auto/1000
e0c Default Default up 1500 auto/1000
e0d Default Default up 1500 auto/1000
e0e Default Default up 1500 auto/1000
e0f Default Default up 1500 auto/1000
e0g Default Default up 1500 auto/1000
cluster1-02
e0a Cluster Cluster up 1500 auto/1000
e0b Cluster Cluster up 1500 auto/1000
e0c Default Default up 1500 auto/1000
e0d Default Default up 1500 auto/1000
e0e Default Default up 1500 auto/1000
e0f Default Default up 1500 auto/1000
e0g Default Default up 1500 auto/1000
14 entries were displayed.
cluster1::>
Create Storage for NFS and CIFS
Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete the lab steps in this section.
However, we do recommend that you review the conceptual information found here and at the beginning of
each of this section’s subsections before you advance to the SAN section, as most of this conceptual
material will not be repeated there.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that
operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host
hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces
(LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients its own namespace.
The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and
customers are encouraged to actively embrace that feature in order to take full advantage of a cluster’s
capabilities. An organization is ill-advised to start out on a deployment intended to scale with only a single
SVM.
You explicitly choose and configure which storage protocols you want a given SVM to support at SVM
creation time, and you can later add or remove protocols as desired. A single SVM can host any
combination of the supported protocols.
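As an illustration of what this looks like from the clustershell, the following sketch creates an SVM and later adds protocols to it. The SVM and root volume names here are hypothetical; the actual SVM creation steps for this lab appear in the procedures that follow.

cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01 -rootvolume-security-style unix
cluster1::> vserver add-protocols -vserver svm1 -protocols nfs,cifs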
An SVM’s assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM.
As you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means
that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also
has a direct relationship to any nodes that are hosting its LIFs. LIFs are essentially an IP address with a
number of associated characteristics such as an assigned home node, an assigned physical home port, a
list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. You can only
assign a given LIF to a single SVM, and since LIFs map to physical network ports on cluster nodes, this
means that an SVM runs in part on all nodes that are hosting its LIFs.
When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes
hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension
which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS Servers
have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in
so doing can perform some load balancing by responding to different clients with different LIF addresses,
but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with
clients that are not on the local network. NFS Servers do not handle name resolution on their own.
DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same
hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local
area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP this
architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for
LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to
delegate the name resolution responsibility for the SVM’s hostname records to the SVM itself, so that it can
directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF
availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name
resolution request.
LIFs that map to physical network ports that reside on the same node as a volume’s containing aggregate
offer the most efficient client access path to the volume’s data. However, clients can also access volume
data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered
Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the
LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given
SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If you desire
additional resiliency then you can also create a NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to
another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0
and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS
LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and
continues servicing network requests from that new node/port. Throughout this operation the NAS LIF
maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in
progress but as soon as it completes the clients resume any in-process NAS operations without any loss of
data.
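Although this lab relies on automatic failover, a NAS LIF can also be moved and reverted manually. The following is a minimal sketch rather than a lab step; the LIF is the one you will create later in this lab, and the destination node and port are example values only. The revert command sends the LIF back to its home node and port once that port is healthy again.
cluster1::> network interface migrate -vserver svm1 -lif svm1_cifs_nfs_lif1 -destination-node cluster1-02 -destination-port e0c
cluster1::> network interface revert -vserver svm1 -lif svm1_cifs_nfs_lif1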
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each
storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster’s effective SVM
limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can
host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per
node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs
per node (so that a node can also accommodate its HA partner’s LIFs in the event of a failover).
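For example, the two-node cluster used in this lab could host up to 2 x 125 = 250 SVMs, and because its nodes form an HA pair each node is limited to 128 LIFs, for a cluster-wide ceiling of 256 LIFs.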
Each SVM has its own NAS namespace, a logical grouping of the SVM’s CIFS and NFS volumes into a
single logical filesystem view. Clients can access the entire namespace by mounting a single share or
export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and
present a consistent view of the SVM’s data to all clients rather than having to reproduce that view
structure on each individual client. As an administrator maps and unmaps volumes from the namespace
those volumes instantly become visible or disappear from clients that have mounted CIFS and NFS
volumes higher in the SVM’s namespace. Administrators can also create NFS exports at individual junction
points within the namespace and can create CIFS shares at any directory path in the namespace.
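As a quick illustration, once volumes have been junctioned into an SVM’s namespace you could view the resulting layout from the CLI with a command along these lines (svm1 is the SVM this lab creates later; this is an example, not a lab step):
cluster1::> volume show -vserver svm1 -fields junction-path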
Create a Storage Virtual Machine for NAS
In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a
volume over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the
cluster.
To perform this section’s tasks from the GUI:
Start by creating the storage virtual machine.
1. In System Manager, open the Storage Virtual Machines tab.
2. Select cluster1.
3. Click the Create button to launch the Storage Virtual Machine Setup wizard.
The Storage Virtual Machine (SVM) Setup window opens.
4. Set the fields as follows:
 SVM Name: svm1
 Data Protocols: check the CIFS and NFS checkboxes
Note: The list of available Data Protocols is dependent upon what protocols are licensed on your
cluster; if a given protocol isn’t listed, it is because you aren’t licensed for it.
 Security Style: NTFS
 Root Aggregate: aggr1_cluster1_01
The default values for IPspace, Volume Type, and Default Language are already populated for you by the
wizard, as is the DNS configuration. When ready, click Submit & Continue.
The wizard creates the SVM and then advances to the protocols window. The protocols window can be
rather large, so this guide presents it in sections.
1. The Subnet setting defaults to Demo since this is the only subnet definition that exists in your lab. Click
the Browse button next to the Port textbox.
The Select Network Port or Adapter window opens.
1. Expand the list of ports for the node cluster1-01 and select port e0c.
2. Click the OK button.
The Select Network Port or Adapter window closes and focus returns to the protocols portion of the
Storage Virtual Machine (SVM) Setup wizard.
1. The Port textbox should now be populated with the node and port value you just selected.
2. Populate the CIFS Server Configuration textboxes with the following values:
 CIFS Server Name: svm1
 Active Directory: demo.netapp.com
 Administrator Name: Administrator
 Password: Netapp1!
3. The optional “Provision a volume for CIFS storage” textboxes offer a quick way to provision a simple
volume and CIFS share at SVM creation time. However, a share created this way is not multi-protocol, and
in most cases you will be creating shares for an SVM that already exists, so this lab guide instead
demonstrates the more full-featured procedure for creating a volume and share in the following sections.
4. Expand the optional NIS Configuration section.
Scroll down in the window to see the expanded NIS Configuration section.
1. Clear the pre-populated values from the Domain Name and IP Address fields. In an NFS
environment where you are running NIS, you would want to configure these values, but this lab
environment is not utilizing NIS, and leaving these fields populated would create a name
resolution problem later in the lab.
2. As was the case with CIFS, the “Provision a volume for NFS storage” textboxes offer a quick way to
provision a volume and create an NFS export for that volume. Once again, the volume will not be
inherently multi-protocol, and will in fact be a completely separate volume from the CIFS share volume
that you could have selected to create in the CIFS section. This lab will utilize the more full-featured
volume creation process that you will see in later sections.
3. Click the Submit & Continue button to advance the wizard to the next screen.
The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This window
allows you to set up an administration account that is scoped to just this SVM so that you can delegate
administrative tasks for this SVM to an SVM-specific administrator without giving that administrator cluster-
wide privileges. As the comments in this wizard window indicate, this account must also exist for use with
SnapDrive. Although you will not be using SnapDrive in this lab, it is usually a good idea to create this
account, and you will do so here.
1. The “User Name” is pre-populated with the value vsadmin. Set the “Password” and “Confirm Password”
textboxes to netapp123. When finished, click the Submit & Continue button.
The New Storage Virtual Machine (SVM) Summary window opens.
1. Review the settings for the new SVM, taking special note of the IP Address listed in the CIFS/NFS
Configuration section. Data ONTAP drew this address from the Subnets pool that you created earlier in
the lab. When finished, click the OK button.
The window closes, and focus returns to the System Manager window, which now displays a summary
page for your newly created svm1 SVM.
1. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This
indicates that a CIFS server is running for this SVM.
2. Notice that the NFS protocol is listed with a yellow background, which indicates that there is not a
running NFS server for this SVM. If you had configured the NIS server settings during the SVM Setup
wizard then the wizard would have started the NFS server, but since this lab is not using NIS you will
manually turn on NFS in a later step.
The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM.
NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the
SVM’s shares through either node. To comply with that best practice you will now create a second LIF
hosted on the other node in the cluster.
System Manager for clustered Data ONTAP 8.2 and earlier presented LIF management under the Storage
Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. With 8.3, that functionality
has moved to the Cluster tab, where you now have a single view for managing all the LIFs in your cluster.
1. Select the Cluster tab in the left navigation pane of System Manager.
2. Navigate to cluster1-> Configuration->Network.
3. Select the Network Interfaces tab in the main Network pane.
4. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1; you will
be following that same naming convention for the new LIF.
5. Click on the Create button to launch the Network Interface Create Wizard.
The Create Network Interface window opens.
1. Set the fields in the window to the following values:
 Name: svm1_cifs_nfs_lif2
 Interface Role: Serves Data
 SVM: svm1
 Protocol Access: Check CIFS and NFS checkboxes.
 Management Access: Check the Enable Management Access checkbox.
 Subnet: Demo
 Check The IP address is selected from this subnet checkbox.
 Also expand the Port Selection listbox and select the entry for cluster1-02 port e0c.
2. Click the Create button to continue.
The Create Network Interface window closes, and focus returns to the Network pane in System Manager.
1. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces
tab. Select this entry and review the LIF’s properties.
Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can
intelligently utilize all of svm1’s configured NAS LIFs. To achieve this objective, the DNS server must
delegate to the cluster the responsibility for the DNS zone corresponding to the SVM’s hostname, which in
this case will be “svm1.demo.netapp.com”. The lab’s DNS server is already configured to delegate this
responsibility, but you must also configure the SVM to accept it. System Manager does not currently
include the capability to configure DNS delegation so you will need to use the CLI for this purpose.
1. Open a PuTTY connection to cluster1 following the instructions in the “Accessing the Command Line”
section at the beginning of this guide. Log in using the username admin and the password Netapp1!,
then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif address dns-zone
------- ----------------- ------------- -------------------
svm1 svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1 svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
2. Validate that delegation is working correctly by opening PowerShell on jumphost and using the
nslookup command as shown in the following CLI output. If the nslookup command returns IP
addresses as identified by the yellow highlighted text, then delegation is working correctly. If the
nslookup returns a “Non-existent domain” error then delegation is not working correctly and you will
need to review the Data ONTAP commands you just entered as they most likely contained an error.
Also notice from the following screenshot that different executions of the nslookup command return
different addresses, demonstrating that DNS load balancing is working correctly. You may need to run
the nslookup command more than 2 times before you see it report different addresses for the
hostname.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server: dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.132
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server: dc1.demo.netapp.com
Address: 192.168.0.253
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.131
PS C:\Users\Administrator.DEMO>
To perform this section’s tasks from the command line:
If you do not already have a PuTTY connection open to cluster1 then open one now following the directions
in the “Accessing the Command Line” section at the beginning of this lab guide. The username is admin
and the password is Netapp1!.
1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers
to storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01
-language C.UTF-8 -rootvolume-security ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded:
Vserver creation completed
cluster1::>
2. Limit the SVM svm1 to just the CIFS and NFS protocols by removing the protocols it will not use:
cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp
cluster1::> vserver show-protocols -vserver svm1
Vserver: svm1
Protocols: nfs, cifs
cluster1::> vserver show
Admin Operational Root
Vserver Type Subtype State State Volume Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1 admin - - - - -
cluster1-01 node - - - - -
cluster1-02 node - - - - -
svm1 data default running running svm1_root aggr1_
cluster1_
01
4 entries were displayed.
cluster1::>
3. Display a list of the cluster’s network interfaces:
cluster1::> network interface show
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
cluster1-01_clus1
up/up 169.254.224.98/16 cluster1-01 e0a true
cluster1-02_clus1
up/up 169.254.129.177/16 cluster1-02 e0a true
cluster1
cluster1-01_mgmt1
up/up 192.168.0.111/24 cluster1-01 e0c true
cluster1-02_mgmt1
up/up 192.168.0.112/24 cluster1-02 e0c true
cluster_mgmt up/up 192.168.0.101/24 cluster1-01 e0c true
5 entries were displayed.
cluster1::>
4. Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data
LIF for svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data
-data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>
5. Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1:
cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data
-data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo
-firewall-policy mgmt
cluster1::>
6. Display all of the LIFs owned by svm1:
cluster1::> network interface show -vserver svm1
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
svm1
svm1_cifs_nfs_lif1
up/up 192.168.0.131/24 cluster1-01 e0c true
svm1_cifs_nfs_lif2
up/up 192.168.0.132/24 cluster1-02 e0c true
2 entries were displayed.
cluster1::>
7. Configure the DNS domain and nameservers for the svm1 SVM:
cluster1::> vserver services dns show
Name
Vserver State Domains Servers
--------------- --------- ----------------------------------- ----------------
cluster1 enabled demo.netapp.com 192.168.0.253
cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253 -domains
demo.netapp.com
cluster1::> vserver services dns show
Name
Vserver State Domains Servers
--------------- --------- ----------------------------------- ----------------
cluster1 enabled demo.netapp.com 192.168.0.253
svm1 enabled demo.netapp.com 192.168.0.253
2 entries were displayed.
cluster1::>
8. Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that
you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done
this as part of the network interface create commands but we opted to do it separately here to show
you how you can modify an existing LIF.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone
svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif address dns-zone
------- ------------------ ------------- --------------------
svm1 svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1 svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
9. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1
(username root and password Netapp1!) and executing the following commands. If the delegation is
working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and
if you run the command several times you will eventually see the returned address alternate between the
SVM’s two LIFs.
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server: 192.168.0.253
Address: 192.168.0.253#53
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.132
[root@rhel1 ~]# nslookup svm1.demo.netapp.com
Server: 192.168.0.253
Address: 192.168.0.253#53
Non-authoritative answer:
Name: svm1.demo.netapp.com
Address: 192.168.0.131
[root@rhel1 ~]#
10. This completes the planned LIF configuration changes for svm1, so now display a detailed
configuration report for the LIF svm1_cifs_nfs_lif1:
cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance
Vserver Name: svm1
Logical Interface Name: svm1_cifs_nfs_lif1
Role: data
Data Protocol: nfs, cifs
Home Node: cluster1-01
Home Port: e0c
Current Node: cluster1-01
Current Port: e0c
Operational Status: up
Extended Status: -
Is Home: true
Network Address: 192.168.0.131
Netmask: 255.255.255.0
Bits in the Netmask: 24
IPv4 Link Local: -
Subnet Name: Demo
Administrative Status: up
Failover Policy: system-defined
Firewall Policy: mgmt
Auto Revert: false
Fully Qualified DNS Zone Name: svm1.demo.netapp.com
DNS Query Listen Enable: true
Failover Group Name: Default
FCP WWPN: -
Address family: ipv4
Comment: -
IPspace of LIF: Default
cluster1::>
11. Although you have enabled the CIFS protocol on svm1, enabling a protocol does not by itself create a
CIFS server for the SVM. Now it is time to create that CIFS server.
cluster1::> vserver cifs show
This table is currently empty.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password:
cluster1::> vserver cifs show
Server Status Domain/Workgroup Authentication
Vserver Name Admin Name Style
----------- --------------- --------- ---------------- --------------
svm1 SVM1 up DEMO domain
cluster1::>
Configure CIFS and NFS
Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM
in the previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to
understand that clients cannot yet access the SVM using CIFS and NFS. That is partially because you
have not yet created any volumes on the SVM, but also because you have not told the SVM what you want
to share and who you want to share it with.
Each SVM has its own namespace. A namespace is a logical grouping of a single SVM’s volumes into a
directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM’s root
volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares
data to CIFS and NFS clients. The SVM’s other volumes are junctioned (i.e. mounted) within that root
volume or within other volumes that are already junctioned into the namespace. This hierarchy presents
NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace,
regardless of where those junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot
access a volume that has not been junctioned into the namespace.
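For reference, junctioning (and unjunctioning) a volume is a single CLI operation. The sketch below uses the engineering volume and the /engineering junction path that this lab creates later through System Manager, and is shown here only to illustrate the concept, not as a lab step.
cluster1::> volume mount -vserver svm1 -volume engineering -junction-path /engineering
cluster1::> volume unmount -vserver svm1 -volume engineering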
CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share
declared at the top of the namespace. While this is a very powerful capability, there is no requirement to
make the whole namespace accessible. You can create CIFS shares at any directory level in the
namespace, and you can create different NFS export rules at junction boundaries for individual volumes
and for individual qtrees within a junctioned volume.
Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy
model that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly
exports the root of its namespace and automatically associates that export with the SVM’s default export
policy, but that default policy is initially empty and until it is populated with access rules no NFS clients will
be able to access the namespace. The SVM’s default export policy applies to the root volume and also to
any volumes that an administrator junctions into the namespace, but an administrator can optionally create
additional export policies in order to implement different access rules within the namespace. You can apply
export policies to a volume as a whole and to individual qtrees within a volume, but a given volume or qtree
can only have one associated export policy. While you cannot create NFS exports at any other directory
level in the namespace, NFS clients can mount from any level in the namespace by leveraging the
namespace’s root export.
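To illustrate the policy model, the following sketch shows how you might create a separate, more restrictive export policy and assign it to a single volume. The policy name, client subnet, and volume assignment here are hypothetical examples and are not steps in this lab.
cluster1::> vserver export-policy create -vserver svm1 -policyname engineering_only
cluster1::> vserver export-policy rule create -vserver svm1 -policyname engineering_only -clientmatch 192.168.0.0/24 -rorule any -rwrule any -superuser any -ruleindex 1
cluster1::> volume modify -vserver svm1 -volume engineering -policy engineering_only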
In this section of the lab, you are going to configure a default export policy for your SVM so that any
volumes you junction into its namespace will automatically pick up the same NFS export rules. You will
also create a single CIFS share at the top of the namespace so that all the volumes you junction into that
namespace are accessible through that one share. Finally, since your SVM will be sharing the same data
over NFS and CIFS, you will be setting up name mapping between UNIX and Windows user accounts to
facilitate smooth multiprotocol access to the volumes and files in the namespace.
To perform this section’s tasks from the GUI:
When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM’s
namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols.
Before you configure NFS and CIFS for your newly created SVM, take a quick look at the SVM’s root
volume:
1. Select the Storage Virtual Machines tab.
2. Navigate to cluster1->svm1->Storage->Volumes.
3. Note the existence of the svm1_root volume, which hosts the namespace for the svm1 SVM. The root
volume is not large; only 20 MB in this example. Root volumes are small because they are only intended to
house the junctions that organize the SVM’s volumes; all of the files hosted on the SVM should reside
inside the volumes that are junctioned into the namespace rather than directly in the SVM’s root
volume.
Confirm that CIFS and NFS are running for your SVM using System Manager. Check CIFS first.
1. Under the Storage Virtual Machines tab, navigate to cluster1->svm1->Configuration->Protocols-
>CIFS.
2. In the CIFS pane, select the Configuration tab.
3. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS
server for this SVM. If CIFS was not already running for this SVM, then you could configure and start it
using the Setup button found under the Configuration tab.
Now check that NFS is enabled for your SVM.
1. Select NFS under the Protocols section.
2. Notice that the NFS Server Status field shows as “Not Configured”. Remember that when you ran the
Storage Virtual Machine (SVM) Setup wizard you specified that you wanted the NFS protocol, but you
cleared the NIS fields since this lab isn’t using NIS. That combination of actions caused the wizard to
leave the NFS server for this SVM disabled.
3. Click the Enable button in this window to turn NFS on.
The Server Status field in the NFS pane switches from Not Configured to Enabled.
At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server.
However, you have not yet configured those two servers to actually serve any data, and the first step in
that process is to configure the SVM’s default NFS export policy.
When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export
policy for the SVM that contains an empty list of access rules. Without any access rules that policy will not
allow clients to access any exports, so you need to add a rule to the default policy so that the volumes you
will create on this SVM later in this lab will be automatically accessible to NFS clients. If any of this seems
a bit confusing, do not worry; the concept should become clearer as you work through this section and the
next one.
1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1-
>Policies->Export Policies.
2. In the Export Polices window, select the default policy.
3. Click the Add button in the bottom portion of the Export Policies pane.
The Create Export Rule window opens. Using this dialog you can create any number of rules that provide
fine grained access control for clients and specify their application order. For this lab, you are going to
create a single rule that grants unfettered access to any host on the lab’s private network.
1. Set the fields in the window to the following values:
 Client Specification: 0.0.0.0/0
 Rule Index: 1
 Access Protocols: Check the CIFS and NFS checkboxes.
 The default values in the other fields in the window are acceptable.
When you finish entering these values, click OK.
The Create Export Rule window closes and focus returns to the Export Policies pane in System Manager.
The new access rule you created now shows up in the bottom portion of the pane. With this updated
default export policy in place, NFS clients will now be able to mount the root of the svm1 SVM’s
namespace, and use that mount to access any volumes that you junction into the namespace.
Now create a CIFS share for the svm1 SVM. You are going to create a single share named “nsroot” at the
root of the SVM’s namespace.
1. Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Shares.
2. In the Shares pane, select Create Share.
The Create Share dialog box opens.
1. Set the fields in the window to the following values:
 Folder to Share: / (If you alternatively opt to use the Browse button, make sure you select the root
folder).
 Share Name: nsroot
2. Click the Create button.
The Create Share window closes, and focus returns to the Shares pane in System Manager. The new “nsroot”
share now shows up in the Shares pane, but you are not yet finished.
1. Select nsroot from the list of shares.
2. Click the Edit button to edit the share’s settings.
The Edit nsroot Settings window opens.
1. Select the Permissions tab. Make sure that you grant the group “Everyone” Full Control permission.
You can set more fine grained permissions on the share from this tab, but this configuration is sufficient
for the exercises in this lab.
There are other settings to check in this window, so do not close it yet.
1. Select the Options tab at the top of the window and make sure that the Enable as read/write, Enable
Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be
cleared.
2. If you had to change any of the settings listed in Step 1 then the Save and Close button will become
active, and you should click it. Otherwise, click the Cancel button.
The Edit nsroot Settings window closes and focus returns to the Shares pane in System Manager. Setup of
the \\svm1\nsroot CIFS share is now complete.
For this lab you have created just one share at the root of your namespace, which allows users to access
any volume mounted in the namespace through that share. The advantage of this approach is that it
reduces the number of mapped drives that you have to manage on your clients; any changes you make to
the namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares
then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the
namespace.
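For example, if you later wanted a share that exposes just the engineering volume rather than the whole namespace, a single command like the following would create it (shown here only as a sketch; it is not part of this lab):
cluster1::> vserver cifs share create -vserver svm1 -share-name engineering -path /engineering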
Since you have configured your SVM to support both NFS and CIFS, you next need to set up username
mapping so that the UNIX root account and the DEMO\Administrator account will have synonymous
access to each other’s files. Setting up such a mapping may not be desirable in all environments, but it will
simplify data sharing for this lab since these are the two primary accounts you are using in this lab.
1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1->svm1-
>Configuration->Users and Groups->Name Mapping.
2. In the Name Mapping pane, click Add.
The “Add Name Mapping Entry” window opens.
1. Create a Windows to UNIX mapping by completing all of the fields as follows:
 Direction: Windows to UNIX
 Position: 1
 Pattern: demo\\administrator (the two backslashes listed here are not a typo, and administrator should
not be capitalized)
 Replacement: root
When you have finished populating these fields, click Add.
The window closes and focus returns to the Name Mapping pane in System Manager. Click the Add button
again to create another mapping rule.
The “Add Name Mapping Entry” window opens.
1. Create a UNIX to Windows mapping by completing all of the fields as follows:
 Direction: UNIX to Windows
 Position: 1
 Pattern: root
 Replacement: demo\\administrator (the two backslashes listed here are not a typo, and
“administrator” should not be capitalized)
When you have finished populating these fields, click Add.
The second “Add Name Mapping” window closes, and focus again returns to the Name Mapping pane in
System Manager. You should now see two mappings listed in this pane that together make the “root” and
“DEMO\Administrator” accounts equivalent to each other for the purpose of file access within the SVM.
To perform this section’s tasks from the command line:
1. Verify that CIFS is running by default for the SVM svm1:
cluster1::> vserver cifs show
Server Status Domain/Workgroup Authentication
Vserver Name Admin Name Style
----------- --------------- --------- ---------------- --------------
svm1 SVM1 up DEMO domain
cluster1::>
2. Verify that NFS is running for the SVM svm1. It is not initially, so turn it on.
cluster1::> vserver nfs status -vserver svm1
The NFS server is not running on Vserver "svm1".
cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true
cluster1::> vserver nfs status -vserver svm1
The NFS server is running on Vserver "svm1".
cluster1::> vserver nfs show
Vserver: svm1
General Access: true
v3: enabled
v4.0: disabled
4.1: disabled
UDP: enabled
TCP: enabled
Default Windows User: -
Default Windows Group: -
cluster1::>
3. Create an export policy for the SVM svm1 and configure the policy’s rules.
cluster1::> vserver export-policy show
Vserver Policy Name
--------------- -------------------
svm1 default
cluster1::> vserver export-policy rule show
This table is currently empty.
cluster1::> vserver export-policy rule create -vserver svm1 -policyname default
-clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> vserver export-policy rule show
Policy Rule Access Client RO
Vserver Name Index Protocol Match Rule
------------ --------------- ------ -------- --------------------- ---------
svm1 default 1 any 0.0.0.0/0 any
cluster1::> vserver export-policy rule show -policyname default -instance
Vserver: svm1
Policy Name: default
Rule Index: 1
Access Protocol: any
Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true
cluster1::>
4. Create a share at the root of the namespace for the SVM svm1:
cluster1::> vserver cifs share show
Vserver Share Path Properties Comment ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1 admin$ / browsable - -
svm1 c$ / oplocks - BUILTIN\Administrators / Full
Control
browsable
changenotify
svm1 ipc$ / browsable - -
3 entries were displayed.
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver cifs share show
Vserver Share Path Properties Comment ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1 admin$ / browsable - -
svm1 c$ / oplocks - BUILTIN\Administrators / Full
Control
browsable
changenotify
svm1 ipc$ / browsable - -
svm1 nsroot / oplocks - Everyone / Full Control
browsable
changenotify
4 entries were displayed.
cluster1::>
5. Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern
demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern
root -replacement demo\\administrator
cluster1::> vserver name-mapping show
Vserver Direction Position
-------------- --------- --------
svm1 win-unix 1 Pattern: demo\\administrator
Replacement: root
svm1 unix-win 1 Pattern: root
Replacement: demo\\administrator
2 entries were displayed.
cluster1::>
Create a Volume and Map It to the Namespace
Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume
only resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an
aggregate, which can associate with multiple SVMs, a volume can only associate with a single SVM. The
maximum size of a volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be
configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000
FlexVols (varies based on controller model), which means that there is an effective limit on the total
number of volumes that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node’s Data
ONTAP operating system. Do not use the node’s root aggregate to host any other volumes or user data;
always create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin
provisioning, deduplication, and compression. One specific storage efficiency feature you will be seeing in
this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its
containing aggregate.
When you create a FlexVol with a volume guarantee of type “volume” you are thickly provisioning the
volume, pre-allocating all of the space for the volume on the containing aggregate, which ensures that the
volume will never run out of space unless the volume reaches 100% capacity. When you create a FlexVol
with a volume guarantee of “none” you are thinly provisioning the volume, only allocating space for it on the
containing aggregate at the time and in the quantity that the volume actually requires the space to store the
data.
This latter configuration allows you to increase your overall space utilization and even oversubscribe an
aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the
subscribed volumes reached their full size. However, if an oversubscribed aggregate does fill up then all of its
volumes will run out of space before they reach their maximum volume size; oversubscription
deployments therefore generally require a greater degree of administrative vigilance around space utilization.
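As a sketch of how thin provisioning appears from the CLI, the following commands would show the engineering volume’s space guarantee and the containing aggregate’s space usage once the volume exists; they are illustrative examples rather than lab steps. A space-guarantee value of none indicates a thinly provisioned volume.
cluster1::> volume show -vserver svm1 -volume engineering -fields space-guarantee
cluster1::> storage aggregate show -aggregate aggr1_cluster1_01 -fields size,usedsize,availsize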
In the Clusters section, you created a new aggregate named “aggr1_cluster1_01”; you will now use that
aggregate to host a new thinly provisioned volume named “engineering” for the SVM named “svm1”.
To perform this section’s tasks from the GUI:
1. In System Manager, open the Storage Virtual Machines tab.
2. Navigate to cluster1->svm1->Storage->Volumes.
3. Click Create to launch the Create Volume wizard.
The Create Volume window opens.
1. Populate the following values into the data fields in the window.
 Name: engineering
 Aggregate: aggr1_cluster1_01
 Total Size: 10 GB
 Check the Thin Provisioned checkbox.
Leave the other values at their defaults.
2. Click the Create button.
The Create Volume window closes, and focus returns to the Volumes pane in System Manager. The newly
created engineering volume should now appear in the Volumes list. Notice that the volume is 10 GB in
size, and is thin provisioned.
System Manager has also automatically mapped the engineering volume into the SVM’s NAS namespace.

Net app ecmlp2495163
Net app ecmlp2495163Net app ecmlp2495163
Net app ecmlp2495163
 
Openstack_administration
Openstack_administrationOpenstack_administration
Openstack_administration
 
Sql server 2019 New Features by Yevhen Nedaskivskyi
Sql server 2019 New Features by Yevhen NedaskivskyiSql server 2019 New Features by Yevhen Nedaskivskyi
Sql server 2019 New Features by Yevhen Nedaskivskyi
 
Monitoring&Logging - Stanislav Kolenkin
Monitoring&Logging - Stanislav Kolenkin  Monitoring&Logging - Stanislav Kolenkin
Monitoring&Logging - Stanislav Kolenkin
 
Ansible
AnsibleAnsible
Ansible
 
Top 10 DevOps Areas Need To Focus
Top 10 DevOps Areas Need To FocusTop 10 DevOps Areas Need To Focus
Top 10 DevOps Areas Need To Focus
 
Virtualization In Software Testing
Virtualization In Software TestingVirtualization In Software Testing
Virtualization In Software Testing
 
Virtualization with Lenovo X6 Blade Servers: white paper
Virtualization with Lenovo X6 Blade Servers: white paperVirtualization with Lenovo X6 Blade Servers: white paper
Virtualization with Lenovo X6 Blade Servers: white paper
 
Data Base Connector
Data Base Connector Data Base Connector
Data Base Connector
 
OS for AI: Elastic Microservices & the Next Gen of ML
OS for AI: Elastic Microservices & the Next Gen of MLOS for AI: Elastic Microservices & the Next Gen of ML
OS for AI: Elastic Microservices & the Next Gen of ML
 
FPL'2014 - FlexTiles Workshop - 5 - FlexTiles Simulation Platform
FPL'2014 - FlexTiles Workshop - 5 - FlexTiles Simulation PlatformFPL'2014 - FlexTiles Workshop - 5 - FlexTiles Simulation Platform
FPL'2014 - FlexTiles Workshop - 5 - FlexTiles Simulation Platform
 
4029569 ltsp-practice-guide
4029569 ltsp-practice-guide4029569 ltsp-practice-guide
4029569 ltsp-practice-guide
 
2689 - Exploring IBM PureApplication System and IBM Workload Deployer Best Pr...
2689 - Exploring IBM PureApplication System and IBM Workload Deployer Best Pr...2689 - Exploring IBM PureApplication System and IBM Workload Deployer Best Pr...
2689 - Exploring IBM PureApplication System and IBM Workload Deployer Best Pr...
 
What’s Mule 4.3? How Does Anytime RTF Help? Our insights explain.
What’s Mule 4.3? How Does Anytime RTF Help? Our insights explain. What’s Mule 4.3? How Does Anytime RTF Help? Our insights explain.
What’s Mule 4.3? How Does Anytime RTF Help? Our insights explain.
 
Sanger, upcoming Openstack for Bio-informaticians
Sanger, upcoming Openstack for Bio-informaticiansSanger, upcoming Openstack for Bio-informaticians
Sanger, upcoming Openstack for Bio-informaticians
 
Flexible compute
Flexible computeFlexible compute
Flexible compute
 
EMC VNX FAST VP
EMC VNX FAST VP EMC VNX FAST VP
EMC VNX FAST VP
 
EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems EMC FAST VP for Unified Storage Systems
EMC FAST VP for Unified Storage Systems
 
Gsi
GsiGsi
Gsi
 
Linux Assignment 3
Linux Assignment 3Linux Assignment 3
Linux Assignment 3
 

Último

Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterMateoGardella
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...KokoStevan
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docxPoojaSen20
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxVishalSingh1417
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 

Último (20)

Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch Letter
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 

Basic concepts for_clustered_data_ontap_8.3_v1.1-lab_guide

  • 1. The information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. NetApp makes no warranties, expressed or implied, on future functionality and timeline. The development, release, and timing of any features or functionality described for NetApp’s products remains at the sole discretion of NetApp. NetApp's strategy and possible future developments, products and or platforms directions and functionality are all subject to change without notice. NetApp has no obligation to pursue any course of business outlined in this document or any related presentation, or to develop or release any functionality mentioned therein. NetApp Lab on Demand (LOD) Basic Concepts for Clustered Data ONTAP 8.3 April 2015 | SL10220 v1.1
  • 2. 2 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. CONTENTS Introduction .............................................................................................................................. 3 Why clustered Data ONTAP?............................................................................................................................3 Lab Objectives..................................................................................................................................................4 Prerequisites ....................................................................................................................................................5 Accessing the Command Line...........................................................................................................................5 Lab Environment ...................................................................................................................... 7 Lab Activities ............................................................................................................................ 8 Clusters............................................................................................................................................................8 Create Storage for NFS and CIFS ...................................................................................................................38 Create Storage for iSCSI...............................................................................................................................107 Appendix 1 – Using the clustered Data ONTAP Command Line ....................................... 179 References............................................................................................................................ 181 Version History..................................................................................................................... 182
  • 3. 3 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Introduction This lab introduces the fundamentals of clustered Data ONTAP® . In it you will start with a pre-created 2- node cluster and configure Windows 2012R2 and Red Hat Enterprise Linux 6.5 hosts to access storage on the cluster using CIFS, NFS, and iSCSI. Why clustered Data ONTAP? A helpful way to start understanding the benefits offered by clustered Data ONTAP is to consider server virtualization. Before server virtualization, system administrators frequently deployed applications on dedicated servers in order to maximize application performance and to avoid the instabilities often encountered when combining multiple applications on the same operating system instance. While this design approach was effective, it also had the following drawbacks:  It does not scale well – adding new servers for every new application is extremely expensive.  It is inefficient – most servers are significantly underutilized meaning that businesses are not extracting the full benefit of their hardware investment.  It is inflexible – re-allocating standalone server resources for other purposes is time consuming, staff intensive, and highly disruptive. Server virtualization directy addresses all three of these limitations by decoupling the application instance from the underlying physical hardware. Multiple virtual servers can share a pool of physical hardware, meaning that businesses can now consolidate their server workloads to a smaller set of more effectively utilized physical servers. In addition, the ability to transparently migrate running virtual machines across a pool of physical servers enables businesses to reduce the impact of downtime due to scheduled maintenance activities. Clustered Data ONTAP brings these same benefits, and many others, to storage systems. As with server virtualization, clustered Data ONTAP enables you to combine multiple physical storage controllers into a single logical cluster that can non-disruptively service multiple storage workload needs. With clustered Data ONTAP you can:  Combine different types and models of NetApp storage controllers (known as nodes) into a shared physical storage resource pool (referred to as a cluster).  Support multiple data access protocols (CIFS, NFS, Fibre Channel, iSCSI, FCoE) concurrently on the same storage cluster.  Consolidate various storage workloads to the cluster. Each workload can be assigned its own Storage Virtual Machine (SVM), which is essentially a dedicated virtual storage controller, and its own data volumes, LUNs, CIFS shares, and NFS exports.  Support multitenancy with delegated administration of SVMs. Tenants can be different companies, business units, or even individual application owners, each with their own distinct administrators whose admin rights are limited to just the assigned SVM.  Use Quality of Service (QoS) capabilities to manage resource utilization between storage workloads.  Non-disruptively migrate live data volumes and client connections from one cluster node to another.  Non-disruptively scale the cluster out by adding nodes. Nodes can likewise be non-disruptively removed from the cluster, meaning that you can non-disruptively scale a cluster up and down during hardware refresh cycles.
  • 4. 4 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved.  Leverage multiple nodes in the cluster to simultaneously service a given SVM’s storage workloads. This means that businesses can scale out their SVMs beyond the bounds of a single physical node in response to growing storage and performance requirements, all non-disruptively.  Apply software & firmware updates and configuration changes without downtime. Lab Objectives This lab explores fundamental concepts of clustered Data ONTAP, and utilizes a modular design to allow you to focus on the topics that are of specific interest to you. The “Clusters” section is required for all invocations of the lab (it is a prerequisite for the other sections). If you are interested in NAS functionality then complete the “Storage Virtual Machines for NFS and CIFS” section. If you are interested in SAN functionality, then complete the “Storage Virtual Machines for iSCSI” section and at least one of its Windows or Linux subsections (you may do both if you so choose). If you are interested in nondisruptive operations then you will need to first complete one of the Storage Virtual Machine sections just mentioned before you can proceed to the “Nondisruptive Operations” section. Here is a summary of the exercises in this lab, along with their Estimated Completion Times (ECT):  Clusters (Required, ECT = 20 minutes). o Explore a cluster o View Advanced Drive Partitioning. o Create a data aggregate. o Create a Subnet.  Storage Virtual Machines for NFS and CIFS (Optional, ECT = 40 minutes) o Create a Storage Virtual Machine. o Create a volume on the Storage Virtual Machine. o Configure the Storage Virtual Machine for CIFS and NFS access. o Mount a CIFS share from the Storage Virtual Machine on a Windows client. o Mount an NFS volume from the Storage Virtual Machine on a Linux client.  Storage Virtual Machines for iSCSI (Optional, ECT = 90 minutes including all optional subsections) o Create a Storage Virtual Machine. o Create a volume on the Storage Virtual Machine.  For Windows (Optional, ECT = 40 minutes) o Create a Windows LUN on the volume and map the LUN to an igroup. o Configure a Windows client for iSCSI and MPIO and mount the LUN.  For Linux (Optional, ECT = 40 minutes) o Create a Linux LUN on the volume and map the LUN to an igroup. o Configure a Linux client for iSCSI and multipath and mount the LUN. This lab includes instructions for completing each of these tasks using either System Manager, NetApp’s graphical administration interface, or the Data ONTAP command line. The end state of the lab produced by either method is exactly the same, so use whichever method you are most comfortable with.
  • 5. 5 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. In a lab section you will encounter orange bars similar to the following that indicate the beginning of the graphical or command line procedures for that exercise. A few sections only offer one of these two options rather than both, in which case the text in the orange bar will communicate that point. Note that while switching back and forth between the graphical and command line methods from one section of the lab guide to another is supported, this guide is not designed to support switching back and forth between these methods within a single section. For the best experience we recommend that you stick with a single method for the duration of a lab section. Prerequisites This lab introduces clustered Data ONTAP, so this guide makes no assumption that the user has previous experience with Data ONTAP. The lab does assume some basic familiarity with storage system related concepts such as RAID, CIFS, NFS, LUNs, and DNS. This lab includes steps for mapping shares and mounting LUNs on a Windows client. These steps assume that the lab user has a basic familiarity with Microsoft Windows. This lab also includes steps for mounting NFS volumes and LUNs on a Linux client. All steps are performed from the Linux command line and assume a basic working knowledge of the Linux command line. A basic working knowledge of a text editor such as vi may be useful, but is not required. Accessing the Command Line PuTTY is the terminal emulation program used in the lab to log into Linux hosts and storage controllers in order to run command line commands. 1. The launch icon for the PuTTY application is pinned to the taskbar on the Windows host jumphost as shown in the following screenshot; just double-click on the icon to launch it. ***EXAMPLE*** To perform this section’s tasks from the GUI: ***EXAMPLE***
  • 6. 6 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Once PuTTY launches you can connect to one of the hosts in the lab by following these steps. This example shows a user connecting to the Data ONTAP cluster named cluster1. 1. By default PuTTY should launch into the “Basic options for your PuTTY session” display as shown in the screenshot. If you accidentally navigate away from this view just click on the Session category item to return to this view. 2. Use the scrollbar in the Saved Sessions box to navigate down to the desired host and double-click it to open the connection. A terminal window will open and you will be prompted to log into the host. You can find the correct username and password for the host in Table 1 in the “Lab Environment” section at the beginning of this guide. The clustered Data ONTAP command line supports a number of usability features that make it much easier to use. If you are unfamiliar with those features then review “Appendix 1” of this lab guide, which contains a brief overview of them.
  • 7. 7 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Lab Environment The following figure contains a diagram of the environment for this lab. All of the servers and storage controllers presented in this lab are virtual devices, and the networks that interconnect them are exclusive to your lab session. While we encourage you to follow the demonstration steps outlined in this lab guide, you are free to deviate from this guide and experiment with other Data ONTAP features that interest you. While the virtual storage controllers (vsims) used in this lab offer nearly all of the same functionality as physical storage controllers, they are not capable of providing the same performance as a physical controller, which is why these labs are not suitable for performance testing. Table 1 provides a list of the servers and storage controller nodes in the lab, along with their IP addresses.
Table 1: Lab Host Credentials
Hostname     Description                         IP Address(es)   Username           Password
JUMPHOST     Windows 2012R2 Remote Access host   192.168.0.5      DemoAdministrator  Netapp1!
RHEL1        Red Hat 6.5 x64 Linux host          192.168.0.61     root               Netapp1!
RHEL2        Red Hat 6.5 x64 Linux host          192.168.0.62     root               Netapp1!
DC1          Active Directory Server             192.168.0.253    DemoAdministrator  Netapp1!
cluster1     Data ONTAP cluster                  192.168.0.101    admin              Netapp1!
cluster1-01  Data ONTAP cluster node             192.168.0.111    admin              Netapp1!
cluster1-02  Data ONTAP cluster node             192.168.0.112    admin              Netapp1!
Table 2 lists the NetApp software that is pre-installed on the various hosts in this lab.
  • 8. 8 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved.
Table 2: Preinstalled NetApp Software
Hostname      Description
JUMPHOST      Data ONTAP DSM v4.1 for Windows MPIO, Windows Host Utility Kit v6.0.2
RHEL1, RHEL2  Linux Host Utilities Kit v6.2
Lab Activities Clusters Expected Completion Time: 20 Minutes A cluster is a group of physical storage controllers, or nodes, that have been joined together for the purpose of serving data to end users. The nodes in a cluster can pool their resources together so that the cluster can distribute its work across the member nodes. Communication and data transfer between member nodes (such as when a client accesses data on a node other than the one actually hosting the data) takes place over a 10Gb cluster-interconnect network to which all the nodes are connected, while management and client data traffic passes over separate management and data networks configured on the member nodes. Clusters typically consist of one or more NetApp storage controller High Availability (HA) pairs. Both controllers in an HA pair actively host and serve data, but they are also capable of taking over their partner’s responsibilities in the event of a service disruption by virtue of their redundant cable paths to each other’s disk storage. Having multiple HA pairs in a cluster allows the cluster to scale out to handle greater workloads, and to support non-disruptive migrations of volumes and client connections to other nodes in the cluster resource pool. This means that cluster expansion and technology refreshes can take place while the cluster remains fully online and serving data. Since clusters are almost always comprised of one or more HA pairs, a cluster almost always contains an even number of controller nodes. There is one exception to this rule, and that is the “single node cluster”, which is a special cluster configuration intended to support small storage deployments that can be satisfied with a single physical controller head. The primary noticeable difference between single node and standard clusters, besides the number of nodes, is that a single node cluster does not have a cluster network. Single node clusters can later be converted into traditional multi-node clusters, and at that point become subject to all the standard cluster requirements like the need to utilize an even number of nodes consisting of HA pairs. This lab does not contain a single node cluster, and so this lab guide does not discuss them further. Data ONTAP 8.3 clusters that only serve NFS and CIFS can scale up to a maximum of 24 nodes, although the node limit may be lower depending on the model of FAS controller in use. Data ONTAP 8.3 clusters that also host iSCSI and FC can scale up to a maximum of 8 nodes. This lab utilizes simulated NetApp storage controllers rather than physical FAS controllers. The simulated controller, also known as a vsim, is a virtual machine that simulates the functionality of a physical controller without the need for dedicated controller hardware. The vsim is not designed for performance testing, but does offer much of the same functionality as a physical FAS controller, including the ability to generate I/O to disks. This makes the vsim a powerful tool to explore and experiment with Data ONTAP product features.
The vsim is limited, however, when a feature requires a specific physical capability that the vsim does not support; for example, vsims do not support Fibre Channel connections, which is why this lab uses iSCSI to demonstrate block storage functionality.
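If you would like to confirm the makeup of this two-node cluster from the command line before proceeding, a quick health check is available in the clustershell; the output below is illustrative of what a healthy two-node lab cluster reports:

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
2 entries were displayed.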
  • 9. 9 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. This lab starts with a pre-created, minimally configured cluster. The pre-created cluster already includes Data ONTAP licenses, the cluster’s basic network configuration, and a pair of pre-configured HA controllers. In this next section you will create the aggregates that are used by the SVMs that you will create in later sections of the lab. You will also take a look at the new Advanced Drive Partitioning feature introduced in clustered Data ONTAP 8.3. Connect to the Cluster with OnCommand System Manager OnCommand System Manager is NetApp’s browser-based management tool for configuring and managing NetApp storage systems and clusters. Prior to 8.3, System Manager was a separate application that you had to download and install on your client OS. In 8.3, System Manager is now moved on-board the cluster, so you just point your web browser to the cluster management address. The on-board System Manager interface is essentially the same that NetApp offered in the System Manager 3.1, the version you install on a client. On the Jumphost, the Windows 2012R2 Server desktop you see when you first connect to the lab, open the web browser of your choice. This lab guide uses Chrome, but you can use Firefox or Internet Explorer if you prefer one of those. All three browsers already have System Manager set as the browser home page. 1. Launch Chrome to open System Manager. The OnCommand System Manager Login window opens. This section’s tasks can only be performed from the GUI:
  • 10. 10 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 1. Enter the User Name as admin and the Password as Netapp1! and then click Sign In. System Manager is now logged in to cluster1 and displays a summary page for the cluster. If you are unfamiliar with System Manager, here is a quick introduction to its layout.
  • 11. 11 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Use the tabs on the left side of the window to manage various aspects of the cluster. The Cluster tab (1) accesses configuration settings that apply to the cluster as a whole. The Storage Virtual Machines tab (2) allows you to manage individual Storage Virtual Machines (SVMs, also known as Vservers). The Nodes tab (3) contains configuration settings that are specific to individual controller nodes. Please take a few moments to expand and browse these tabs to familiarize yourself with their contents. Note: As you use System Manager in this lab, you may encounter situations where buttons at the bottom of a System Manager pane are beyond the viewing size of the window, and no scroll bar exists to allow you to scroll down to see them. If this happens, then you have two options; either increase the size of the browser window (you might need to increase the resolution of your jumphost desktop to accommodate the larger browser window), or in the System Manager window, use the tab key to cycle through all the various fields and buttons, which eventually forces the window to scroll down to the non-visible items. Advanced Drive Partitioning Disks, whether Hard Disk Drives (HDD) or Solid State Disks (SSD), are the fundamental unit of physical storage in clustered Data ONTAP, and are tied to a specific cluster node by virtue of their physical connectivity (i.e., cabling) to a given controller head. Data ONTAP manages disks in groups called aggregates. An aggregate defines the RAID properties for a group of disks that are all physically attached to the same node. A given disk can only be a member of a single aggregate.
  • 12. 12 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. By default each cluster node has one aggregate known as the root aggregate, which is a group of the node’s local disks that host the node’s Data ONTAP operating system. A node’s root aggregate is automatically created during Data ONTAP installation in a minimal RAID-DP configuration. This means it is initially comprised of 3 disks (1 data, 2 parity), and has a name that begins with the string aggr0. For example, in this lab the root aggregate of the node cluster1-01 is named “aggr0_cluster1_01”, and the root aggregate of the node cluster1-02 is named “aggr0_cluster1_02”. On higher-end FAS systems that have many disks, the requirement to dedicate 3 disks for each controller’s root aggregate is not a burden, but for entry-level FAS systems that only have 24 or 12 disks this root aggregate disk overhead requirement significantly reduces the disks available for storing user data. To improve usable capacity, NetApp has introduced Advanced Drive Partitioning in 8.3, which divides the Hard Disk Drives (HDDs) on nodes that have this feature enabled into two partitions: a small root partition, and a much larger data partition. Data ONTAP allocates the root partitions to the node root aggregate, and the data partitions to data aggregates. Each partition behaves like a virtual disk, so in terms of RAID Data ONTAP treats these partitions just like physical disks when creating aggregates. The key benefit here is that a much higher percentage of the node’s overall disk capacity is now available to host user data. Data ONTAP only supports HDD partitioning for FAS 22xx and FAS25xx controllers, and only for HDDs installed in their internal shelf on those models. Advanced Drive Partitioning can only be enabled at system installation time, and there is no way to convert an existing system to use Advanced Drive Partitioning other than to completely evacuate the affected HDDs and then re-install Data ONTAP. All-Flash FAS (AFF) supports a variation of Advanced Drive Partitioning that utilizes SSDs instead of HDDs. The capability is available for entry-level, mid-range, and high-end AFF platforms. Data ONTAP 8.3 also introduces SSD partitioning for use with Flash Pools, but the details of that feature lie outside the scope of this lab. In this section, you will see how to determine if a cluster node is utilizing Advanced Drive Partitioning. System Manager provides a basic view into this information, but if you want to see more detail then you will want to use the CLI.
  • 13. 13 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 1. In System Manager’s left pane, navigate to the Cluster tab. 2. Expand cluster1. 3. Expand Storage. 4. Click Disks. 5. In the main window, click on the Summary tab. 6. Scroll the main window down to the Spare Disks section, where you will see that each cluster node has 12 spare disks with a per-disk size of 26.88 GB. These spares represent the data partitions of the physical disks that belong to each node. To perform this section’s tasks from the GUI:
  • 14. 14 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. If you scroll back up to look at the “Assigned HDDs” section of the window, you will see that there are no entries listed for the root partitions of the disks. Under daily operation, you will be primarily concerned with data partitions rather than root partitions, and so this view focuses on just showing information about the data partitions. To see information about the physical disks attached to your system you will need to select the Inventory tab. 1. Click on the Inventory tab at the top of the Disks window. System Manager’s main window now shows a list of the physical disks available across all the nodes in the cluster, which nodes own those disks, and so on. If you look at the Container Type column you see that the disks in your lab all show a value of “shared”; this value indicates that the physical disk is partitioned. For disks that are not partitioned you would typically see values like “spare”, “data”, “parity”, and “dparity”.
  • 15. 15 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines the size of the root and data disk partitions at system installation time based on the quantity and size of the available disks assigned to each node. In this lab each cluster node has twelve 32 GB hard disks, and you can see how your node’s root aggregates are consuming the root partitions on those disks by going to the Aggregates page in System Manager. 1. On the Cluster tab, navigate to cluster1->Storage->Aggregates. 2. In the Aggregates list, select “aggr0_cluster1_01”, which is the root aggregate for cluster node cluster1-01. Notice that the total size of this aggregate is a little over 10 GB. The Available and Used space shown for this aggregate in your lab may vary from what is shown in this screenshot, depending on the quantity and size of the snapshots that exist on your node’s root volume. 3. Click the Disk Layout tab at the bottom of the window. The lower pane of System Manager now displays a list of the disks that are members of this aggregate. Notice that the usable space is 1.52 GB, which is the size of the root partition on the disk. The Physical Space column displays the total capacity of the whole disk that is available to clustered Data ONTAP, including the space allocated to both the disk’s root and data partitions. If you do not already have a PuTTY session established to cluster1, then launch PuTTY as described in the “Accessing the Command Line” section at the beginning of this guide, and connect to the host cluster1 using the username admin and the password Netapp1!. To perform this section’s tasks from the command line:
  • 16. 16 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 1. List all of the physical disks attached to the cluster: cluster1::> storage disk show Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner ---------------- ---------- ----- --- ------- ----------- --------- -------- VMw-1.1 28.44GB - 0 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.2 28.44GB - 1 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.3 28.44GB - 2 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.4 28.44GB - 3 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.5 28.44GB - 4 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.6 28.44GB - 5 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.7 28.44GB - 6 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.8 28.44GB - 8 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.9 28.44GB - 9 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.10 28.44GB - 10 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.11 28.44GB - 11 VMDISK shared - cluster1-01 VMw-1.12 28.44GB - 12 VMDISK shared - cluster1-01 VMw-1.13 28.44GB - 0 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.14 28.44GB - 1 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.15 28.44GB - 2 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.16 28.44GB - 3 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.17 28.44GB - 4 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.18 28.44GB - 5 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.19 28.44GB - 6 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.20 28.44GB - 8 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.21 28.44GB - 9 VMDISK shared aggr0_cluster1_02 VMw-1.22 28.44GB - 10 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.23 28.44GB - 11 VMDISK shared - cluster1-02 VMw-1.24 28.44GB - 12 VMDISK shared - cluster1-02 24 entries were displayed. cluster1::>
  • 17. 17 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The preceding command listed a total of 24 disks, 12 for each of the nodes in this two-node cluster. The container type for all the disks is “shared”, which indicates that the disks are partitioned. For disks that are not partitioned, you would typically see values like “spare”, “data”, “parity”, and “dparity”. The Owner field indicates which node the disk is assigned to, and the Container Name field indicates which aggregate the disk is assigned to. Notice that two disks for each node do not have a Container Name listed; these are spare disks that Data ONTAP can use as replacements in the event of a disk failure. 2. At this point, the only aggregates that exist on this new cluster are the root aggregates. List the aggregates that exist on the cluster: cluster1::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr0_cluster1_01 10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp, normal aggr0_cluster1_02 10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp, normal 2 entries were displayed. cluster1::> 3. Now list the disks that are members of the root aggregate for the node cluster-01. Here is the command that you would ordinarily use to display that information for an aggregate that is not using partitioned disks. cluster1::> storage disk show -aggregate aggr0_cluster1_01 There are no entries matching your query. Info: One or more aggregates queried for use shared disks. Use "storage aggregate show-status" to get correct set of disks associated with these aggregates. cluster1::>
  • 18. 18 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 4. As you can see, in this instance the preceding command is not able to produce a list of disks because this aggregate is using shared disks. Instead it refers you to use the “storage aggregate show” command to query the aggregate for a list of it’s assigned disk partitions. cluster1::> storage aggregate show-status -aggregate aggr0_cluster1_01 Owner Node: cluster1-01 Aggregate: aggr0_cluster1_01 (online, raid_dp) (block checksums) Plex: /aggr0_cluster1_01/plex0 (online, normal, active, pool0) RAID Group /aggr0_cluster1_01/plex0/rg0 (normal, block checksums) Usable Physical Position Disk Pool Type RPM Size Size Status -------- --------------------------- ---- ----- ------ -------- -------- -------- shared VMw-1.1 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.2 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.3 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.4 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.5 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.6 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.7 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.8 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.9 0 VMDISK - 1.52GB 28.44GB (normal) shared VMw-1.10 0 VMDISK - 1.52GB 28.44GB (normal) 10 entries were displayed. cluster1::> The output shows that aggr0_cluster1_01 is comprised of 10 disks, each with a usable size of 1.52 GB, and you know that the aggregate is using the listed disk’s root partitions because aggr0_cluster1_01 is a root aggregate. For a FAS controller that will be using Advanced Drive Partitioning, Data ONTAP automatically determines the size of the root and data disk partitions at system installation time. That determination is based on the quantity and size of the available disks assigned to each node. As you saw earlier, this particular cluster node has 12 disks, so during installation Data ONTAP partitioned all 12 disks but only assigned 10 of those root partitions to the root aggregate so that the node would have 2 spares disks available.to protect against disk failures.
  • 19. 19 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 5. The Data ONTAP CLI includes a diagnostic level command that provides a more comprehensive single view of a system’s partitioned disks. The following command shows the partitioned disks that belong to the node cluster1-01. cluster1::> set -priv diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y cluster1::*> disk partition show -owner-node-name cluster1-01 Usable Container Container Partition Size Type Name Owner ------------------------- ------- ------------- ----------------- ----------------- VMw-1.1.P1 26.88GB spare Pool0 cluster1-01 VMw-1.1.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.2.P1 26.88GB spare Pool0 cluster1-01 VMw-1.2.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.3.P1 26.88GB spare Pool0 cluster1-01 VMw-1.3.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.4.P1 26.88GB spare Pool0 cluster1-01 VMw-1.4.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.5.P1 26.88GB spare Pool0 cluster1-01 VMw-1.5.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.6.P1 26.88GB spare Pool0 cluster1-01 VMw-1.6.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.7.P1 26.88GB spare Pool0 cluster1-01 VMw-1.7.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 VMw-1.8.P1 26.88GB spare Pool0 cluster1-01 VMw-1.8.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.9.P1 26.88GB spare Pool0 cluster1-01 VMw-1.9.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.10.P1 26.88GB spare Pool0 cluster1-01 VMw-1.10.P2 1.52GB aggregate /aggr0_cluster1_01/plex0/rg0 cluster1-01 VMw-1.11.P1 26.88GB spare Pool0 cluster1-01 VMw-1.11.P2 1.52GB spare Pool0 cluster1-01 VMw-1.12.P1 26.88GB spare Pool0 cluster1-01 VMw-1.12.P2 1.52GB spare Pool0 cluster1-01 24 entries were displayed. cluster1::*> set -priv admin cluster1::>
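In addition to the diagnostic-level view above, you can list the remaining spare capacity on partitioned drives at the normal admin privilege level. The following command is a sketch only; its output (omitted here) reports the usable data and root partition sizes for each spare, and the exact column layout varies by Data ONTAP release:

cluster1::> storage aggregate show-spare-disks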
  • 20. 20 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Create a New Aggregate on Each Cluster Node The only aggregates that exist on a newly created cluster are the node root aggregates. The root aggregate should not be used to host user data, so in this section you will be creating a new aggregate on each of the nodes in cluster1 so they can host the storage virtual machines, volumes, and LUNs that you will be creating later in this lab. A node can host multiple aggregates depending on the data sizing, performance, and isolation needs of the storage workloads that it will be hosting. When you create a Storage Virtual Machine (SVM) you assign it to use one or more specific aggregates to host the SVM’s volumes. Multiple SVMs can be assigned to use the same aggregate, which offers greater flexibility in managing storage space, whereas dedicating an aggregate to just a single SVM provides greater workload isolation. For this lab, you will be creating a single user data aggregate on each node in the cluster. To perform this section’s tasks from the GUI: You can create aggregates from either the Cluster tab or the Nodes tab. For this exercise use the Cluster tab as follows: 1. Select the Cluster tab. To avoid confusion, always double-check to make sure that you are working in the correct left pane tab context when performing activities in System Manager! 2. Go to cluster1->Storage->Aggregates. 3. Click on the Create button to launch the Create Aggregate Wizard.
  • 21. 21 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Aggregate wizard window opens. 1. Specify the Name of the aggregate as aggr1_cluster1_01 as shown, and then click Browse. The Select Disk Type window opens. 1. Select the Disk Type entry for the node cluster1-01. 2. Click the OK button.
  • 22. 22 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Select Disk Type window closes, and focus returns to the Create Aggregate window. 1. The “Disk Type” should now show as VMDISK. Set the “Number of Disks” to 5. 2. Click the Create button to create the new aggregate and to close the wizard.
  • 23. 23 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager. The newly created aggregate should now be visible in the list of aggregates. 1. Select the entry for the aggregate aggr1_cluster1_01 if it is not already selected. 2. Click the Details tab to view more detailed information about this aggregate’s configuration. 3. Notice that aggr1_cluster1_01 is a 64-bit aggregate. In earlier versions of clustered Data ONTAP 8, an aggregate could be either 32-bit or 64-bit, but Data ONTAP 8.3 only supports 64-bit aggregates. If you have an existing clustered Data ONTAP 8.x system that has 32-bit aggregates and you plan to upgrade that cluster to 8.3, you must convert those 32-bit aggregates to 64-bit aggregates prior to the upgrade. The procedure for that migration is not covered in this lab, so if you need further details then please refer to the clustered Data ONTAP documentation.
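If you do need to check an older cluster for 32-bit aggregates before planning such an upgrade, a field query along the following lines is one way to do it from the CLI. This is a sketch; the output shown reflects this lab, where every aggregate is already 64-bit:

cluster1::> storage aggregate show -fields block-type
aggregate         block-type
----------------- ----------
aggr0_cluster1_01 64_bit
aggr0_cluster1_02 64_bit
aggr1_cluster1_01 64_bit
3 entries were displayed.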
  • 24. 24 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Now repeat the process to create a new aggregate on the node cluster1-02. 1. Click the Create button again. The Create Aggregate window opens. 1. Specify the Aggregate’s “Name” as aggr1_cluster1_02 and then click the Browse button.
  • 25. 25 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Select Disk Type window opens. 1. Select the Disk Type entry for the node cluster1-02. 2. Click the OK button. The Select Disk Type window closes, and focus returns to the Create Aggregate window. 1. The “Disk Type” should now show as VMDISK. Set the Number of Disks to 5. 2. Click the Create button to create the new aggregate. The Create Aggregate window closes, and focus returns to the Aggregates view in System Manager.
  • 26. 26 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The new aggregate aggr1_cluster1_02 now appears in the cluster’s aggregate list.
  • 27. 27 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the command line: From a PuTTY session logged in to cluster1 as the username admin and password Netapp1!. Display a list of the disks attached to the node cluster-01. (Note that you can omit the -nodelist option to display a list of all the disks in the cluster.) By default the PuTTY window may wrap output lines because the window is too small; if this is the case for you then simply expand the window by selecting its edge and dragging it wider, after which any subsequent output will utilize the visible width of the window. cluster1::> disk show -nodelist cluster1-01 Usable Disk Container Container Disk Size Shelf Bay Type Type Name Owner ---------------- ---------- ----- --- ------- ----------- --------- -------- VMw-1.25 28.44GB - 0 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.26 28.44GB - 1 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.27 28.44GB - 2 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.28 28.44GB - 3 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.29 28.44GB - 4 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.30 28.44GB - 5 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.31 28.44GB - 6 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.32 28.44GB - 8 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.33 28.44GB - 9 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.34 28.44GB - 10 VMDISK shared aggr0_cluster1_01 cluster1-01 VMw-1.35 28.44GB - 11 VMDISK shared - cluster1-01 VMw-1.36 28.44GB - 12 VMDISK shared - cluster1-01 VMw-1.37 28.44GB - 0 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.38 28.44GB - 1 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.39 28.44GB - 2 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.40 28.44GB - 3 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.41 28.44GB - 4 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.42 28.44GB - 5 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.43 28.44GB - 6 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.44 28.44GB - 8 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.45 28.44GB - 9 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.46 28.44GB - 10 VMDISK shared aggr0_cluster1_02 cluster1-02 VMw-1.47 28.44GB - 11 VMDISK shared - cluster1-02 VMw-1.48 28.44GB - 12 VMDISK shared - cluster1-02 24 entries were displayed. cluster1::>
  • 28. 28 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Create the aggregate named “aggr1_cluster1_01” on the node cluster1-01 and the aggregate named “aggr1_cluster1_02” on the node cluster1-02. cluster1::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr0_cluster1_01 10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp, normal aggr0_cluster1_02 10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp, normal 2 entries were displayed. cluster1::> aggr create -aggregate aggr1_cluster1_01 -nodes cluster1-01 -diskcount 5 [Job 257] Job is queued: Create aggr1_cluster1_01. [Job 257] Job succeeded: DONE cluster1::> aggr create -aggregate aggr1_cluster1_02 -nodes cluster1-02 -diskcount 5 [Job 258] Job is queued: Create aggr1_cluster1_02. [Job 258] Job succeeded: DONE cluster1::> aggr show Aggregate Size Available Used% State #Vols Nodes RAID Status --------- -------- --------- ----- ------- ------ ---------------- ------------ aggr0_cluster1_01 10.26GB 510.6MB 95% online 1 cluster1-01 raid_dp, normal aggr0_cluster1_02 10.26GB 510.6MB 95% online 1 cluster1-02 raid_dp, normal aggr1_cluster1_01 72.53GB 72.53GB 0% online 0 cluster1-01 raid_dp, normal aggr1_cluster1_02 72.53GB 72.53GB 0% online 0 cluster1-02 raid_dp, normal 4 entries were displayed. cluster1::>
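As a final check, you can verify that the new aggregates were built from data partitions rather than whole disks by reusing the same command shown earlier for the root aggregate; each member of aggr1_cluster1_01 should report a usable size of roughly 26.88 GB, which is the data partition size in this lab (output omitted here for brevity):

cluster1::> storage aggregate show-status -aggregate aggr1_cluster1_01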
  • 29. 29 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Networks
Clustered Data ONTAP provides a number of network components that you use to manage your cluster. Those components include:
Ports are the physical Ethernet and Fibre Channel connections on each node, the interface groups (ifgrps) you can create to aggregate those connections, and the VLANs you can use to subdivide them.
A logical interface (LIF) is essentially an IP address that is associated with a port, and has a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. A given LIF can only be assigned to a single SVM, and since LIFs are mapped to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes that are hosting its LIFs.
Routing tables in clustered Data ONTAP are defined for each Storage Virtual Machine. Since each SVM has its own routing table, changes to one SVM’s routing table do not have any impact on any other SVM’s routing table.
IPspaces are new in Data ONTAP 8.3 and allow you to configure a Data ONTAP cluster to logically separate one IP network from another, even if those two networks are using the same IP address range. IPspaces are a multi-tenancy feature designed to allow storage service providers to share a cluster between different companies while still separating storage traffic for privacy and security. Every cluster includes a default IPspace to which Data ONTAP automatically assigns new SVMs, and that default IPspace is probably sufficient for most NetApp customers who are deploying a cluster within a single company or organization that uses a non-conflicting IP address range.
Broadcast Domains are also new in Data ONTAP 8.3, and are collections of ports that all have access to the same layer 2 networks, both physical and virtual (i.e. VLANs). Every IPspace has its own set of Broadcast Domains, and Data ONTAP provides a default broadcast domain to go along with the default IPspace. Broadcast domains are used by Data ONTAP to determine what ports an SVM can use for its LIFs.
Subnets are another new feature in Data ONTAP 8.3, and are a convenience feature intended to make LIF creation and management easier for Data ONTAP administrators. A subnet is a pool of IP addresses that you can specify by name when creating a LIF. Data ONTAP will automatically assign an available IP address from the pool to the LIF, along with a subnet mask and a gateway. A subnet is scoped to a specific broadcast domain, so all the subnet’s addresses belong to the same layer 3 network. Data ONTAP manages the pool automatically as you create or delete LIFs, and if you manually configure a LIF with an address from the pool then it will detect that the address is in use and mark it as such in the pool.
DNS Zones allow an SVM to manage DNS name resolution for its own LIFs, and since multiple LIFs can share the same DNS name, this allows the SVM to load balance traffic by IP address across the LIFs. To use DNS Zones you must configure your DNS server to delegate DNS authority for the subdomain to the SVM.
In this section of the lab, you will create a subnet that you will leverage in later sections to provision SVMs and LIFs. You will not create IPspaces or Broadcast Domains as the system defaults are sufficient for this lab.
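This lab uses individual physical ports as-is, but the following commands sketch how an interface group and a VLAN could be layered on top of physical ports and then placed into a broadcast domain. This is an illustrative example only, not a lab step: the ifgrp name (a0a), member port, and VLAN ID (100) are hypothetical, and a port generally must not already belong to a broadcast domain when you add it to an interface group.
cluster1::> network port ifgrp create -node cluster1-01 -ifgrp a0a -distr-func ip -mode multimode
cluster1::> network port ifgrp add-port -node cluster1-01 -ifgrp a0a -port e0g
cluster1::> network port vlan create -node cluster1-01 -vlan-name a0a-100
cluster1::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain Default -ports cluster1-01:a0a-100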
  • 30. 30 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the GUI: 1. In the left pane of System Manager, select the Cluster tab. 2. In the left pane, navigate to cluster1->Configuration->Network. 3. In the right pane select the Broadcast Domains tab. 4. Select the Default broadcast domain. Review the Port Details section at the bottom of the Network pane and note that the e0c – e0g ports on both cluster nodes are all part of this broadcast domain. These are the network ports that you will be using in this lab.
  • 31. 31 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Now create a new Subnet for this lab. 1. Select the Subnets tab, and notice that there are no subnets listed in the pane. Unlike Broadcast Domains and IPSpaces, Data ONTAP does not provide a default Subnet. 2. Click the Create button.
  • 32. 32 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Subnet window opens. 1. Set the fields in the window as follows.  “Subnet Name”: Demo  “Subnet IP/Subnet mask”: 192.168.0.0/24  “Gateway”: 192.168.0.1 2. The values you enter in the “IP address” box depend on what sections of the lab guide you intend to complete. It is important that you choose the right values here so that the values in your lab will correctly match up with the values used in this lab guide.  If you plan to complete just the NAS section or both the NAS and SAN sections then enter 192.168.0.131-192.168.0.139  If you plan to complete just the SAN section then enter 192.168.0.133-192.168.0.139 3. Click the Browse button.
  • 33. 33 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Select Broadcast Domain window opens. 1. Select the “Default” entry from the list. 2. Click the OK button.
  • 34. 34 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Select Broadcast Domain window closes, and focus returns to the Create Subnet window. 1. The values in your Create Subnet window should now match those shown in the following screenshot, the only possible exception being the IP Addresses field, whose value may differ depending on what address range you chose to enter to match your plans for the lab. Note: If you click the “Show ports on this domain” link under the Broadcast Domain textbox, you can once again see the list of ports that this broadcast domain includes. 2. Click Create.
  • 35. 35 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Subnet window closes, and focus returns to the Subnets tab in System Manager. Notice that the main pane of the Subnets tab now includes an entry for your newly created subnet, and that the lower portion of the pane includes metrics tracking the consumption of the IP addresses that belong to this subnet. Feel free to explore the contents of the other available tabs on the Network page. Here is a brief summary of the information available on those tabs. The Ethernet Ports tab displays the physical NICs on your controller, which will be a superset of the NICs that you saw previously listed as belonging to the default broadcast domain. The other NICs you will see listed on the Ethernet Ports tab include the node’s cluster network NICs. The Network Interfaces tab displays a list of all of the LIFs on your cluster. The FC/FCoE Adapters tab lists all the WWPNs for the controllers’ NICs in the event they will be used for iSCSI or FCoE connections. The simulated NetApp controllers you are using in this lab do not include FC adapters, and this lab does not make use of FCoE.
  • 36. 36 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the command line:
1. Display a list of the cluster’s IPspaces. A cluster actually contains two IPspaces by default: the Cluster IPspace, which correlates to the cluster network that Data ONTAP uses to have cluster nodes communicate with each other, and the Default IPspace to which Data ONTAP automatically assigns all new SVMs. You can create more IPspaces if necessary, but that activity will not be covered in this lab.
cluster1::> network ipspace show
IPspace             Vserver List                  Broadcast Domains
------------------- ----------------------------- ----------------------------
Cluster             Cluster                       Cluster
Default             cluster1                      Default
2 entries were displayed.
cluster1::>
2. Display a list of the cluster’s broadcast domains. Remember that broadcast domains are scoped to a single IPspace. The e0a ports on the cluster nodes are part of the Cluster broadcast domain in the Cluster IPspace. The remaining ports are part of the Default broadcast domain in the Default IPspace.
cluster1::> network port broadcast-domain show
IPspace Broadcast                                           Update
Name    Domain Name MTU   Port List                         Status Details
------- ----------- ----- --------------------------------- --------------
Cluster Cluster     1500
                          cluster1-01:e0a                   complete
                          cluster1-01:e0b                   complete
                          cluster1-02:e0a                   complete
                          cluster1-02:e0b                   complete
Default Default     1500
                          cluster1-01:e0c                   complete
                          cluster1-01:e0d                   complete
                          cluster1-01:e0e                   complete
                          cluster1-01:e0f                   complete
                          cluster1-01:e0g                   complete
                          cluster1-02:e0c                   complete
                          cluster1-02:e0d                   complete
                          cluster1-02:e0e                   complete
                          cluster1-02:e0f                   complete
                          cluster1-02:e0g                   complete
2 entries were displayed.
cluster1::>
3. Display a list of the cluster’s subnets.
cluster1::> network subnet show
This table is currently empty.
cluster1::>
  • 37. 37 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Data ONTAP does not include a default subnet, so you will need to create a subnet now. The specific command you will use depends on what sections of this lab guide you plan to complete, as you want to correctly align the IP address pool in your lab with the IP addresses used in the portions of this lab guide that you want to complete.
4. If you plan to complete the NAS portion of this lab, enter the following command. Also use this command if you plan to complete both the NAS and SAN portions of this lab.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.131-192.168.0.139
cluster1::>
5. If you only plan to complete the SAN portion of this lab, then enter the following command instead.
cluster1::> network subnet create -subnet-name Demo -broadcast-domain Default -ipspace Default -subnet 192.168.0.0/24 -gateway 192.168.0.1 -ip-ranges 192.168.0.133-192.168.0.139
cluster1::>
6. Re-display the list of the cluster’s subnets. This example assumes you plan to complete the whole lab.
cluster1::> network subnet show
IPspace: Default
Subnet                      Broadcast                  Avail/
Name      Subnet            Domain    Gateway          Total   Ranges
--------- ----------------- --------- ---------------- ------- ---------------
Demo      192.168.0.1/24    Default   192.168.0.1      9/9     192.168.0.131-192.168.0.139
cluster1::>
7. If you are interested in seeing a list of all of the network ports on your cluster, you can use the following command for that purpose.
cluster1::> network port show
                                                             Speed (Mbps)
Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
------ --------- ------------ ---------------- ----- ------- ------------
cluster1-01
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
cluster1-02
       e0a       Cluster      Cluster          up     1500   auto/1000
       e0b       Cluster      Cluster          up     1500   auto/1000
       e0c       Default      Default          up     1500   auto/1000
       e0d       Default      Default          up     1500   auto/1000
       e0e       Default      Default          up     1500   auto/1000
       e0f       Default      Default          up     1500   auto/1000
       e0g       Default      Default          up     1500   auto/1000
14 entries were displayed.
cluster1::>
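Before moving on, note that although this lab sticks with the Default IPspace, the following commands illustrate roughly how you would carve out a separate IPspace and broadcast domain, for example for a tenant whose address range conflicts with another tenant’s. Treat this as a hypothetical sketch rather than a lab step: the IPspace name, broadcast domain name, and port choices are made up, and the ports would first have to be removed from the Default broadcast domain as shown.
cluster1::> network ipspace create -ipspace tenant-a
cluster1::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports cluster1-01:e0g,cluster1-02:e0g
cluster1::> network port broadcast-domain create -ipspace tenant-a -broadcast-domain tenant-a-bd -mtu 1500 -ports cluster1-01:e0g,cluster1-02:e0g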
  • 38. 38 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Create Storage for NFS and CIFS
Expected Completion Time: 40 Minutes
If you are only interested in SAN protocols then you do not need to complete the lab steps in this section. However, we do recommend that you review the conceptual information found here and at the beginning of each of this section’s subsections before you advance to the SAN section, as most of this conceptual material will not be repeated there.
Storage Virtual Machines (SVMs), previously known as Vservers, are the logical storage servers that operate within a cluster for the purpose of serving data out to storage clients. A single cluster may host hundreds of SVMs, with each SVM managing its own set of volumes (FlexVols), Logical Network Interfaces (LIFs), storage access protocols (e.g. NFS/CIFS/iSCSI/FC/FCoE), and for NAS clients its own namespace. The ability to support many SVMs in a single cluster is a key feature in clustered Data ONTAP, and customers are encouraged to actively embrace that feature in order to take full advantage of a cluster’s capabilities. An organization that intends its deployment to scale is ill-advised to start out with only a single SVM.
You explicitly choose and configure which storage protocols you want a given SVM to support at SVM creation time, and you can later add or remove protocols as desired. A single SVM can host any combination of the supported protocols.
An SVM’s assigned aggregates and LIFs determine which cluster nodes handle processing for that SVM. As you saw earlier, an aggregate is directly connected to the specific node hosting its disks, which means that an SVM runs in part on any nodes whose aggregates are hosting volumes for the SVM. An SVM also has a direct relationship to any nodes that are hosting its LIFs. A LIF is essentially an IP address with a number of associated characteristics such as an assigned home node, an assigned physical home port, a list of physical ports it can fail over to, an assigned SVM, a role, a routing group, and so on. You can only assign a given LIF to a single SVM, and since LIFs map to physical network ports on cluster nodes, this means that an SVM runs in part on all nodes that are hosting its LIFs.
When you configure an SVM with multiple data LIFs, clients can use any of those LIFs to access volumes hosted by the SVM. Which specific LIF IP address a client will use in a given instance, and by extension which LIF, is a function of name resolution, the mapping of a hostname to an IP address. CIFS servers have responsibility under NetBIOS for resolving requests for their hostnames received from clients, and in so doing can perform some load balancing by responding to different clients with different LIF addresses, but this distribution is not sophisticated and requires external NetBIOS name servers in order to deal with clients that are not on the local network. NFS servers do not handle name resolution on their own.
  • 39. 39 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. DNS provides basic name resolution load balancing by advertising multiple IP addresses for the same hostname. DNS is supported by both NFS and CIFS clients and works equally well with clients on local area and wide area networks. Since DNS is an external service that resides outside of Data ONTAP this architecture creates the potential for service disruptions if the DNS server is advertising IP addresses for LIFs that are temporarily offline. To compensate for this condition you can configure DNS servers to delegate the name resolution responsibility for the SVM’s hostname records to the SVM itself, so that it can directly respond to name resolution requests involving its LIFs. This allows the SVM to consider LIF availability and LIF utilization levels when deciding what LIF address to return in response to a DNS name resolution request.
LIFs that map to physical network ports that reside on the same node as a volume’s containing aggregate offer the most efficient client access path to the volume’s data. However, clients can also access volume data through LIFs bound to physical network ports on other nodes in the cluster; in these cases clustered Data ONTAP uses the high speed cluster network to bridge communication between the node hosting the LIF and the node hosting the volume. NetApp best practice is to create at least one NAS LIF for a given SVM on each cluster node that has an aggregate that is hosting volumes for that SVM. If you desire additional resiliency then you can also create a NAS LIF on nodes not hosting aggregates for the SVM.
A NAS LIF (a LIF supporting only NFS and/or CIFS) can automatically failover from one cluster node to another in the event of a component failure; any existing connections to that LIF from NFS and SMB 2.0 and later clients can non-disruptively tolerate the LIF failover event. When a LIF failover happens the NAS LIF migrates to a different physical NIC, potentially to a NIC on a different node in the cluster, and continues servicing network requests from that new node/port. Throughout this operation the NAS LIF maintains its IP address; clients connected to the LIF may notice a brief delay while the failover is in progress but as soon as it completes the clients resume any in-process NAS operations without any loss of data.
The number of nodes in the cluster determines the total number of SVMs that can run in the cluster. Each storage controller node can host a maximum of 125 SVMs, so you can calculate the cluster’s effective SVM limit by multiplying the number of nodes by 125. There is no limit on the number of LIFs that an SVM can host, but there is a limit on the number of LIFs that can run on a given node. That limit is 256 LIFs per node, but if the node is part of an HA pair configured for failover then the limit is half that value, 128 LIFs per node (so that a node can also accommodate its HA partner’s LIFs in the event of a failover).
Each SVM has its own NAS namespace, a logical grouping of the SVM’s CIFS and NFS volumes into a single logical filesystem view. Clients can access the entire namespace by mounting a single share or export at the top of the namespace tree, meaning that SVM administrators can centrally maintain and present a consistent view of the SVM’s data to all clients rather than having to reproduce that view structure on each individual client.
As an administrator maps and unmaps volumes from the namespace, those volumes instantly become visible to or disappear from clients that have mounted CIFS and NFS volumes higher in the SVM’s namespace. Administrators can also create NFS exports at individual junction points within the namespace and can create CIFS shares at any directory path in the namespace.
Create a Storage Virtual Machine for NAS
In this section you will create a new SVM named svm1 on the cluster and will configure it to serve out a volume over NFS and CIFS. You will be configuring two NAS data LIFs on the SVM, one per node in the cluster.
  • 40. 40 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the GUI: Start by creating the storage virtual machine. 1. In System Manager, open the Storage Virtual Machines tab. 2. Select cluster1. 3. Click the Create button to launch the Storage Virtual Machine Setup wizard.
  • 41. 41 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Storage Virtual Machine (SVM) Setup window opens. 4. Set the fields as follows:
- SVM Name: svm1
- Data Protocols: check the CIFS and NFS checkboxes. Note: The list of available Data Protocols is dependent upon what protocols are licensed on your cluster; if a given protocol isn’t listed, it is because you aren’t licensed for it.
- Security Style: NTFS
- Root Aggregate: aggr1_cluster1_01
The default values for IPspace, Volume Type, and Default Language are already populated for you by the wizard, as is the DNS configuration. When ready, click Submit & Continue.
  • 42. 42 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The wizard creates the SVM and then advances to the protocols window. The protocols window can be rather large, so this guide will present it in sections. 1. The Subnet setting defaults to Demo since this is the only subnet definition that exists in your lab. Click the Browse button next to the Port textbox. The Select Network Port or Adapter window opens. 1. Expand the list of ports for the node cluster1-01 and select port e0c. 2. Click the OK button.
  • 43. 43 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Select Network Port or Adapter window closes and focus returns to the protocols portion of the Storage Virtual Machine (SVM) Setup wizard. 1. The Port textbox should have been populated with the node and port you just selected. 2. Populate the CIFS Server Configuration textboxes with the following values:
- CIFS Server Name: svm1
- Active Directory: demo.netapp.com
- Administrator Name: Administrator
- Password: Netapp1!
3. The optional “Provision a volume for CIFS storage” textboxes offer a quick way to provision a simple volume and CIFS share at SVM creation time. That share would not be multi-protocol, and in most cases when you create a share you will be doing so for an existing SVM, so this lab guide will instead show the more full-featured procedure for creating a volume and share in the following sections. 4. Expand the optional NIS Configuration section.
  • 44. 44 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Scroll down in the window to see the expanded NIS Configuration section. 1. Clear the pre-populated values from the Domain Name and IP Address fields. In an NFS environment where you are running NIS, you would want to configure these values, but this lab environment is not utilizing NIS and in this case leaving these fields populated will create a name resolution problem later in the lab. 2. As was the case with CIFS, the “Provision a volume for NFS storage” textboxes offer a quick way to provision a volume and create an NFS export for that volume. Once again, the volume would not be inherently multi-protocol, and would in fact be a completely separate volume from the CIFS share volume that you could have selected to create in the CIFS section. This lab will utilize the more full-featured volume creation process that you will see in later sections. 3. Click the Submit & Continue button to advance the wizard to the next screen.
  • 45. 45 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The SVM Administration section of the Storage Virtual Machine (SVM) Setup wizard opens. This window allows you to set up an administration account that is scoped to just this SVM so that you can delegate administrative tasks for this SVM to an SVM-specific administrator without giving that administrator cluster-wide privileges. As the comments in this wizard window indicate, this account must also exist for use with SnapDrive. Although you will not be using SnapDrive in this lab, it is usually a good idea to create this account, and you will do so here. 1. The “User Name” is pre-populated with the value vsadmin. Set the “Password” and “Confirm Password” textboxes to netapp123. When finished, click the Submit & Continue button.
  • 46. 46 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The New Storage Virtual Machine (SVM) Summary window opens. 1. Review the settings for the new SVM, taking special note of the IP Address listed in the CIFS/NFS Configuration section. Data ONTAP drew this address from the Subnets pool that you created earlier in the lab. When finished, click the OK button.
  • 47. 47 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The window closes, and focus returns to the System Manager window, which now displays a summary page for your newly created svm1 SVM. 1. Notice that in the main pane of the window the CIFS protocol is listed with a green background. This indicates that a CIFS server is running for this SVM. 2. Notice that the NFS protocol is listed with a yellow background, which indicates that there is not a running NFS server for this SVM. If you had configured the NIS server settings during the SVM Setup wizard then the wizard would have started the NFS server, but since this lab is not using NIS you will manually turn on NFS in a later step. The New Storage Virtual Machine Setup Wizard only provisions a single LIF when creating a new SVM. NetApp best practice is to configure a LIF on both nodes in an HA pair so that a client can access the SVM’s shares through either node. To comply with that best practice you will now create a second LIF hosted on the other node in the cluster.
  • 48. 48 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. System Manager for clustered Data ONTAP 8.2 and earlier presented LIF management under the Storage Virtual Machines tab, only offering visibility to LIFs for a single SVM at a time. With 8.3, that functionality has moved to the Cluster tab, where you now have a single view for managing all the LIFs in your cluster. 1. Select the Cluster tab in the left navigation pane of System Manager. 2. Navigate to cluster1-> Configuration->Network. 3. Select the Network Interfaces tab in the main Network pane. 4. Select the only LIF listed for the svm1 SVM. Notice that this LIF is named svm1_cifs_nfs_lif1; you will be following that same naming convention for the new LIF. 5. Click on the Create button to launch the Network Interface Create Wizard.
  • 49. 49 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Network Interface window opens. 1. Set the fields in the window to the following values:  Name: svm1_cifs_nfs_lif2  Interface Role: Serves Data  SVM: svm1  Protocol Access: Check CIFS and NFS checkboxes.  Management Access: Check the Enable Management Access checkbox.  Subnet: Demo  Check The IP address is selected from this subnet checkbox.  Also expand the Port Selection listbox and select the entry for cluster1-02 port e0c. 2. Click the Create button to continue.
  • 50. 50 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Network Interface window closes, and focus returns to the Network pane in System Manager. 1. Notice that a new entry for the svm1_cifs_nfs_lif2 LIF is now present under the Network Interfaces tab. Select this entry and review the LIF’s properties. Lastly, you need to configure DNS delegation for the SVM so that Linux and Windows clients can intelligently utilize all of svm1’s configured NAS LIFs. To achieve this objective, the DNS server must delegate to the cluster the responsibility for the DNS zone corresponding to the SVM’s hostname, which in this case will be “svm1.demo.netapp.com”. The lab’s DNS server is already configured to delegate this responsibility, but you must also configure the SVM to accept it. System Manager does not currently include the capability to configure DNS delegation so you will need to use the CLI for this purpose.
  • 51. 51 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 1. Open a PuTTY connection to cluster1 following the instructions in the “Accessing the Command Line” section at the beginning of this guide. Log in using the username admin and the password Netapp1!, then enter the following commands.
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com
cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone svm1.demo.netapp.com
cluster1::> network interface show -vserver svm1 -fields dns-zone,address
vserver lif                address       dns-zone
------- ------------------ ------------- --------------------
svm1    svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com
svm1    svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com
2 entries were displayed.
cluster1::>
2. Validate that delegation is working correctly by opening PowerShell on jumphost and using the nslookup command as shown in the following CLI output. If the nslookup command returns IP addresses as identified by the yellow highlighted text, then delegation is working correctly. If the nslookup returns a “Non-existent domain” error then delegation is not working correctly and you will need to review the Data ONTAP commands you just entered as they most likely contained an error. Also notice from the following screenshot that different executions of the nslookup command return different addresses, demonstrating that DNS load balancing is working correctly. You may need to run the nslookup command more than 2 times before you see it report different addresses for the hostname.
Windows PowerShell
Copyright (C) 2013 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253
Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.132
PS C:\Users\Administrator.DEMO> nslookup svm1.demo.netapp.com
Server:  dc1.demo.netapp.com
Address:  192.168.0.253
Non-authoritative answer:
Name:    svm1.demo.netapp.com
Address:  192.168.0.131
PS C:\Users\Administrator.DEMO>
  • 52. 52 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the command line: If you do not already have a PuTTY connection open to cluster1 then open one now following the directions in the “Accessing the Command Line” section at the beginning of this lab guide. The username is admin and the password is Netapp1!.
1. Create the SVM named svm1. Notice that the clustered Data ONTAP command line syntax still refers to storage virtual machines as vservers.
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_cluster1_01 -language C.UTF-8 -rootvolume-security-style ntfs -snapshot-policy default
[Job 259] Job is queued: Create svm1.
[Job 259]
[Job 259] Job succeeded: Vserver creation completed
cluster1::>
2. Limit the SVM svm1 to the CIFS and NFS protocols:
cluster1::> vserver show-protocols -vserver svm1
  Vserver: svm1
Protocols: nfs, cifs, fcp, iscsi, ndmp
cluster1::> vserver remove-protocols -vserver svm1 -protocols fcp,iscsi,ndmp
cluster1::> vserver show-protocols -vserver svm1
  Vserver: svm1
Protocols: nfs, cifs
cluster1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cluster1    admin   -          -          -           -          -
cluster1-01 node    -          -          -           -          -
cluster1-02 node    -          -          -           -          -
svm1        data    default    running    running     svm1_root  aggr1_
                                                                  cluster1_
                                                                  01
4 entries were displayed.
cluster1::>
  • 53. 53 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 3. Display a list of the cluster’s network interfaces: cluster1::> network interface show Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ---------- ---------- ------------------ ------------- ------- ---- Cluster cluster1-01_clus1 up/up 169.254.224.98/16 cluster1-01 e0a true cluster1-02_clus1 up/up 169.254.129.177/16 cluster1-02 e0a true cluster1 cluster1-01_mgmt1 up/up 192.168.0.111/24 cluster1-01 e0c true cluster1-02_mgmt1 up/up 192.168.0.112/24 cluster1-02 e0c true cluster_mgmt up/up 192.168.0.101/24 cluster1-01 e0c true 5 entries were displayed. cluster1::> 4. Notice that there are not yet any LIFs defined for the SVM svm1. Create the svm1_cifs_nfs_lif1 data LIF for svm1: cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -subnet-name Demo -firewall-policy mgmt cluster1::> 5. Create the svm1_cifs_nfs_lif2 data LIF for the SVM svm1: cluster1::> network interface create -vserver svm1 -lif svm1_cifs_nfs_lif2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -subnet-name Demo -firewall-policy mgmt cluster1::> 6. Display all of the LIFs owned by svm1: cluster1::> network interface show -vserver svm1 Logical Status Network Current Current Is Vserver Interface Admin/Oper Address/Mask Node Port Home ----------- ---------- ---------- ------------------ ------------- ------- ---- svm1 svm1_cifs_nfs_lif1 up/up 192.168.0.131/24 cluster1-01 e0c true svm1_cifs_nfs_lif2 up/up 192.168.0.132/24 cluster1-02 e0c true 2 entries were displayed. cluster1::>
  • 54. 54 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 7. Configure the DNS domain and nameservers for the svm1 SVM: cluster1::> vserver services dns show Name Vserver State Domains Servers --------------- --------- ----------------------------------- ---------------- cluster1 enabled demo.netapp.com 192.168.0.253 cluster1::> vserver services dns create -vserver svm1 -name-servers 192.168.0.253 -domains demo.netapp.com cluster1::> vserver services dns show Name Vserver State Domains Servers --------------- --------- ----------------------------------- ---------------- cluster1 enabled demo.netapp.com 192.168.0.253 svm1 enabled demo.netapp.com 192.168.0.253 2 entries were displayed. cluster1::> 8. Configure the LIFs to accept DNS delegation responsibility for the svm1.demo.netapp.com zone so that you can advertise addresses for both of the NAS data LIFs that belong to svm1. You could have done this as part of the network interface create commands but we opted to do it separately here to show you how you can modify an existing LIF. cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif1 -dns-zone svm1.demo.netapp.com cluster1::> network interface modify -vserver svm1 -lif svm1_cifs_nfs_lif2 -dns-zone svm1.demo.netapp.com cluster1::> network interface show -vserver svm1 -fields dns-zone,address vserver lif address dns-zone ------- ------------------ ------------- -------------------- svm1 svm1_cifs_nfs_lif1 192.168.0.131 svm1.demo.netapp.com svm1 svm1_cifs_nfs_lif2 192.168.0.132 svm1.demo.netapp.com 2 entries were displayed. cluster1::>
  • 55. 55 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 9. Verify that DNS delegation is working correctly by opening a PuTTY connection to the Linux host rhel1 (username root and password Netapp1!) and executing the following commands. If the delegation is working correctly then you should see IP addresses returned for the host svm1.demo.netapp.com, and if you run the command several times you will eventually see that the responses vary the returned address between the SVM’s two LIFs. [root@rhel1 ~]# nslookup svm1.demo.netapp.com Server: 192.168.0.253 Address: 192.168.0.253#53 Non-authoritative answer: Name: svm1.demo.netapp.com Address: 192.168.0.132 [root@rhel1 ~]# nslookup svm1.demo.netapp.com Server: 192.168.0.253 Address: 192.168.0.253#53 Non-authoritative answer: Name: svm1.demo.netapp.com Address: 192.168.0.131 [root@rhel1 ~]# 10. This completes the planned LIF configuration changes for svm1, so now display a detailed configuration report for the LIF svm1_cifs_nfs_lif1: cluster1::> network interface show -lif svm1_cifs_nfs_lif1 -instance Vserver Name: svm1 Logical Interface Name: svm1_cifs_nfs_lif1 Role: data Data Protocol: nfs, cifs Home Node: cluster1-01 Home Port: e0c Current Node: cluster1-01 Current Port: e0c Operational Status: up Extended Status: - Is Home: true Network Address: 192.168.0.131 Netmask: 255.255.255.0 Bits in the Netmask: 24 IPv4 Link Local: - Subnet Name: Demo Administrative Status: up Failover Policy: system-defined Firewall Policy: mgmt Auto Revert: false Fully Qualified DNS Zone Name: svm1.demo.netapp.com DNS Query Listen Enable: true Failover Group Name: Default FCP WWPN: - Address family: ipv4 Comment: - IPspace of LIF: Default cluster1::>
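Note that the detailed output above includes the LIF’s failover policy and failover group. Although this lab does not exercise a failover, the following commands sketch how you could manually migrate a NAS LIF to its partner node and then revert it to its home port; treat them as an illustration rather than a required lab step.
cluster1::> network interface migrate -vserver svm1 -lif svm1_cifs_nfs_lif1 -destination-node cluster1-02 -destination-port e0c
cluster1::> network interface revert -vserver svm1 -lif svm1_cifs_nfs_lif1
cluster1::> network interface show -vserver svm1 -lif svm1_cifs_nfs_lif1 -fields home-node,curr-node,is-home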
  • 56. 56 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 11. When you issued the vserver create command to create svm1 you included an option to enable CIFS for it, but that command did not actually create a CIFS server for the SVM. Now it is time to create that CIFS server.
cluster1::> vserver cifs show
This table is currently empty.
cluster1::> vserver cifs create -vserver svm1 -cifs-server svm1 -domain demo.netapp.com
In order to create an Active Directory machine account for the CIFS server, you must supply the name and password of a Windows account with sufficient privileges to add computers to the "CN=Computers" container within the "DEMO.NETAPP.COM" domain.
Enter the user name: Administrator
Enter the password:
cluster1::> vserver cifs show
            Server          Status    Domain/Workgroup Authentication
Vserver     Name            Admin     Name             Style
----------- --------------- --------- ---------------- --------------
svm1        SVM1            up        DEMO             domain
cluster1::>
  • 57. 57 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Configure CIFS and NFS Clustered Data ONTAP configures CIFS and NFS on a per SVM basis. When you created the svm1 SVM in the previous section, you set up and enabled CIFS and NFS for that SVM. However, it is important to understand that clients cannot yet access the SVM using CIFS and NFS. That is partially because you have not yet created any volumes on the SVM, but also because you have not told the SVM what you want to share and who you want to share it with. Each SVM has its own namespace. A namespace is a logical grouping of a single SVM’s volumes into a directory hierarchy that is private to just that SVM, with the root of that hierarchy hosted on the SVM’s root volume (svm1_root in the case of the svm1 SVM), and it is through this namespace that the SVM shares data to CIFS and NFS clients. The SVM’s other volumes are junctioned (i.e. mounted) within that root volume or within other volumes that are already junctioned into the namespace. This hierarchy presents NAS clients with a unified, centrally maintained view of the storage encompassed by the namespace, regardless of where those junctioned volumes physically reside in the cluster. CIFS and NFS clients cannot access a volume that has not been junctioned into the namespace. CIFS and NFS clients can access the entire namespace by mounting a single NFS export or CIFS share declared at the top of the namespace. While this is a very powerful capability, there is no requirement to make the whole namespace accessible. You can create CIFS shares at any directory level in the namespace, and you can create different NFS export rules at junction boundaries for individual volumes and for individual qtrees within a junctioned volume. Clustered Data ONTAP does not utilize an /etc/exports file to export NFS volumes; instead it uses a policy model that dictates the NFS client access rules for the associated volumes. An NFS-enabled SVM implicitly exports the root of its namespace and automatically associates that export with the SVM’s default export policy, but that default policy is initially empty and until it is populated with access rules no NFS clients will be able to access the namespace. The SVM’s default export policy applies to the root volume and also to any volumes that an administrator junctions into the namespace, but an administrator can optionally create additional export policies in order to implement different access rules within the namespace. You can apply export policies to a volume as a whole and to individual qtrees within a volume, but a given volume or qtree can only have one associated export policy. While you cannot create NFS exports at any other directory level in the namespace, NFS clients can mount from any level in the namespace by leveraging the namespace’s root export. In this section of the lab, you are going to configure a default export policy for your SVM so that any volumes you junction into its namespace will automatically pick up the same NFS export rules. You will also create a single CIFS share at the top of the namespace so that all the volumes you junction into that namespace are accessible through that one share. Finally, since your SVM will be sharing the same data over NFS and CIFS, you will be setting up name mapping between UNIX and Windows user accounts to facilitate smooth multiprotocol access to the volumes and files in the namespace.
  • 58. 58 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the GUI: When you create an SVM, Data ONTAP automatically creates a root volume to hold that SVM’s namespace. An SVM always has a root volume, whether or not it is configured to support NAS protocols. Before you configure NFS and CIFS for your newly created SVM, take a quick look at the SVM’s root volume: 1. Select the Storage Virtual Machines tab. 2. Navigate to cluster1->svm1->Storage->Volumes. 3. Note the existence of the svm1_root volume, which hosts the namespace for the svm1 SVM. The root volume is not large; only 20 MB in this example. Root volumes are small because they are only intended to house the junctions that organize the SVM’s volumes; all of the files hosted on the SVM should reside inside the volumes that are junctioned into the namespace rather than directly in the SVM’s root volume.
  • 59. 59 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Confirm that CIFS and NFS are running for our SVM using System Manager. Check CIFS first. 1. Under the Storage Virtual Machines tab, navigate to cluster1->svm1->Configuration->Protocols- >CIFS. 2. In the CIFS pane, select the Configuration tab. 3. Note that the Service Status field is listed as Started, which indicates that there is a running CIFS server for this SVM. If CIFS was not already running for this SVM, then you could configure and start it using the Setup button found under the Configuration tab.
  • 60. 60 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Now check that NFS is enabled for your SVM. 1. Select NFS under the Protocols section. 2. Notice that the NFS Server Status field shows as “Not Configured”. Remember that when you ran the Vserver Setup wizard you specified that you wanted the NFS protocol, but you cleared the NIS fields since this lab isn’t using NIS. That combination of actions caused the wizard to leave the NFS server for this SVM disabled. 3. Click the Enable button in this window to turn NFS on.
  • 61. 61 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Server Status field in the NFS pane switches from Not Configured to Enabled. At this point, you have confirmed that your SVM has a running CIFS server and a running NFS server. However, you have not yet configured those two servers to actually serve any data, and the first step in that process is to configure the SVM’s default NFS export policy.
  • 62. 62 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. When you create an SVM with NFS, clustered Data ONTAP automatically creates a default NFS export policy for the SVM that contains an empty list of access rules. Without any access rules that policy will not allow clients to access any exports, so you need to add a rule to the default policy so that the volumes you will create on this SVM later in this lab will be automatically accessible to NFS clients. If any of this seems a bit confusing, do not worry; the concept should become clearer as you work through this section and the next one. 1. In System Manager, select the Storage Virtual Machines tab and then go to cluster1->svm1- >Policies->Export Policies. 2. In the Export Polices window, select the default policy. 3. Click the Add button in the bottom portion of the Export Policies pane.
  • 63. 63 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Export Rule window opens. Using this dialog you can create any number of rules that provide fine grained access control for clients and specify their application order. For this lab, you are going to create a single rule that grants unfettered access to any host on the lab’s private network. 1. Set the fields in the window to the following values:  Client Specification: 0.0.0.0/0  Rule Index: 1  Access Protocols: Check the CIFS and NFS checkboxes.  The default values in the other fields in the window are acceptable. When you finish entering these values, click OK. The Create Export Policy window closes and focus returns to the Export Policies pane in System Manager. The new access rule you created now shows up in the bottom portion of the pane. With this updated default export policy in place, NFS clients will now be able to mount the root of the svm1 SVM’s namespace, and use that mount to access any volumes that you junction into the namespace.
  • 64. 64 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Now create a CIFS share for the svm1 SVM. You are going to create a single share named “nsroot” at the root of the SVM’s namespace. 1. Select the Storage Virtual Machines tab and navigate to cluster1->svm1->Storage->Shares. 2. In the Shares pane, select Create Share. The Create Share dialog box opens. 1. Set the fields in the window to the following values:
- Folder to Share: / (If you alternately opt to use the Browse button, make sure you select the root folder.)
- Share Name: nsroot
2. Click the Create button.
  • 65. 65 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Share window closes, and focus returns to Shares pane in System Manager. The new “nsroot” share now shows up in the Shares pane, but you are not yet finished. 1. Select nsroot from the list of shares. 2. Click the Edit button to edit the share’s settings.
  • 66. 66 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Edit nsroot Settings window opens. 1. Select the Permissions tab. Make sure that you grant the group “Everyone” Full Control permission. You can set more fine grained permissions on the share from this tab, but this configuration is sufficient for the exercises in this lab.
  • 67. 67 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. There are other settings to check in this window, so do not close it yet. 1. Select the Options tab at the top of the window and make sure that the Enable as read/write, Enable Oplocks, Browsable, and Notify Change checkboxes are all checked. All other checkboxes should be cleared. 2. If you had to change any of the settings listed in Step 1 then the Save and Close button will become active, and you should click it. Otherwise, click the Cancel button. The Edit nsroot Settings window closes and focus returns to the Shares pane in System Manager. Setup of the \\svm1\nsroot CIFS share is now complete. For this lab you have created just one share at the root of your namespace, which allows users to access any volume mounted in the namespace through that share. The advantage of this approach is that it reduces the number of mapped drives that you have to manage on your clients; any changes you make to the namespace become instantly visible and accessible to your clients. If you prefer to use multiple shares then clustered Data ONTAP allows you to create additional shares rooted at any directory level within the namespace.
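For example (a hypothetical illustration, not a lab step), after the engineering volume that you create later in this lab is junctioned at /engineering, you could expose that part of the namespace with its own share from the clustered Data ONTAP command line:
cluster1::> vserver cifs share create -vserver svm1 -share-name engineering -path /engineering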
  • 68. 68 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Since you have configured your SVM to support both NFS and CIFS, you next need to set up username mapping so that the UNIX root account and the DEMO\Administrator account will have equivalent access to each other’s files. Setting up such a mapping may not be desirable in all environments, but it will simplify data sharing for this lab since these are the two primary accounts you are using in this lab. 1. In System Manager, open the Storage Virtual Machines tab and navigate to cluster1->svm1->Configuration->Users and Groups->Name Mapping. 2. In the Name Mapping pane, click Add.
  • 69. 69 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The “Add Name Mapping Entry” window opens. 1. Create a Windows to UNIX mapping by completing all of the fields as follows:
- Direction: Windows to UNIX
- Position: 1
- Pattern: demo\\administrator (the two backslashes listed here are not a typo, and “administrator” should not be capitalized)
- Replacement: root
When you have finished populating these fields, click Add. The window closes and focus returns to the Name Mapping pane in System Manager. Click the Add button again to create another mapping rule.
  • 70. 70 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The “Add Name Mapping Entry” window opens. 1. Create a UNIX to Windows mapping by completing all of the fields as follows:
- Direction: UNIX to Windows
- Position: 1
- Pattern: root
- Replacement: demo\\administrator (the two backslashes listed here are not a typo, and “administrator” should not be capitalized)
When you have finished populating these fields, click Add. The second “Add Name Mapping” window closes, and focus again returns to the Name Mapping pane in System Manager. You should now see two mappings listed in this pane that together make the “root” and “DEMO\Administrator” accounts equivalent to each other for the purpose of file access within the SVM.
  • 71. 71 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the command line: 1. Verify that CIFS is running by default for the SVM svm1: cluster1::> vserver cifs show Server Status Domain/Workgroup Authentication Vserver Name Admin Name Style ----------- --------------- --------- ---------------- -------------- svm1 SVM1 up DEMO domain cluster1::> 2. Verify that NFS is running for the SVM svm1. It is not initially, so turn it on. cluster1::> vserver nfs status -vserver svm1 The NFS server is not running on Vserver "svm1". cluster1::> vserver nfs create -vserver svm1 -v3 enabled -access true cluster1::> vserver nfs status -vserver svm1 The NFS server is running on Vserver "svm1". cluster1::> vserver nfs show Vserver: svm1 General Access: true v3: enabled v4.0: disabled 4.1: disabled UDP: enabled TCP: enabled Default Windows User: - Default Windows Group: - cluster1::>
  • 72. 72 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 3. Create an export policy for the SVM svm1 and configure the policy’s rules. cluster1::> vserver export-policy show Vserver Policy Name --------------- ------------------- svm1 default cluster1::> vserver export-policy rule show This table is currently empty. cluster1::> vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1 cluster1::> vserver export-policy rule show Policy Rule Access Client RO Vserver Name Index Protocol Match Rule ------------ --------------- ------ -------- --------------------- --------- svm1 default 1 any 0.0.0.0/0 any cluster1::> vserver export-policy rule show -policyname default -instance Vserver: svm1 Policy Name: default Rule Index: 1 Access Protocol: any Client Match Hostname, IP Address, Netgroup, or Domain: 0.0.0.0/0 RO Access Rule: any RW Access Rule: any User ID To Which Anonymous Users Are Mapped: 65534 Superuser Security Types: any Honor SetUID Bits in SETATTR: true Allow Creation of Devices: true cluster1::>
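The rule you just created applies to the SVM’s default policy, which every volume you junction into the namespace picks up automatically. If you later wanted a particular volume to have different NFS access rules, you could create an additional policy and assign it to just that volume, along the lines of the following sketch. This is illustrative only and not a lab step; the policy name and client match are hypothetical, and the engineering volume does not exist until you create it later in this lab.
cluster1::> vserver export-policy create -vserver svm1 -policyname engineering_only
cluster1::> vserver export-policy rule create -vserver svm1 -policyname engineering_only -clientmatch 192.168.0.0/24 -rorule any -rwrule any -superuser any -anon 65534 -ruleindex 1
cluster1::> volume modify -vserver svm1 -volume engineering -policy engineering_only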
  • 73. 73 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. 4. Create a share at the root of the namespace for the SVM svm1:
cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
3 entries were displayed.
cluster1::> vserver cifs share create -vserver svm1 -share-name nsroot -path /
cluster1::> vserver cifs share show
Vserver        Share         Path              Properties Comment  ACL
-------------- ------------- ----------------- ---------- -------- -----------
svm1           admin$        /                 browsable  -        -
svm1           c$            /                 oplocks    -        BUILTIN\Administrators / Full Control
                                               browsable
                                               changenotify
svm1           ipc$          /                 browsable  -        -
svm1           nsroot        /                 oplocks    -        Everyone / Full Control
                                               browsable
                                               changenotify
4 entries were displayed.
cluster1::>
5. Set up CIFS <-> NFS user name mapping for the SVM svm1:
cluster1::> vserver name-mapping show
This table is currently empty.
cluster1::> vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern demo\\administrator -replacement root
cluster1::> vserver name-mapping create -vserver svm1 -direction unix-win -position 1 -pattern root -replacement demo\\administrator
cluster1::> vserver name-mapping show
Vserver        Direction Position
-------------- --------- --------
svm1           win-unix  1        Pattern: demo\\administrator
                                  Replacement: root
svm1           unix-win  1        Pattern: root
                                  Replacement: demo\\administrator
2 entries were displayed.
cluster1::>
  • 74. 74 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. Create a Volume and Map It to the Namespace
Volumes, or FlexVols, are the dynamically sized containers used by Data ONTAP to store data. A volume only resides in a single aggregate at a time, but any given aggregate can host multiple volumes. Unlike an aggregate, which can associate with multiple SVMs, a volume can only associate to a single SVM. The maximum size of a volume can vary depending on what storage controller model is hosting it.
An SVM can host multiple volumes. While there is no specific limit on the number of FlexVols that can be configured for a given SVM, each storage controller node is limited to hosting no more than 500 or 1000 FlexVols (varies based on controller model), which means that there is an effective limit on the total number of volumes that a cluster can host, depending on how many nodes there are in your cluster.
Each storage controller node has a root aggregate (e.g. aggr0_<nodename>) that contains the node’s Data ONTAP operating system. Do not use the node’s root aggregate to host any other volumes or user data; always create additional aggregates and volumes for that purpose.
Clustered Data ONTAP FlexVols support a number of storage efficiency features including thin provisioning, deduplication, and compression. One specific storage efficiency feature you will be seeing in this section of the lab is thin provisioning, which dictates how space for a FlexVol is allocated in its containing aggregate. When you create a FlexVol with a volume guarantee of type “volume” you are thickly provisioning the volume, pre-allocating all of the space for the volume on the containing aggregate, which ensures that the volume will never run out of space unless the volume reaches 100% capacity. When you create a FlexVol with a volume guarantee of “none” you are thinly provisioning the volume, only allocating space for it on the containing aggregate at the time and in the quantity that the volume actually requires the space to store the data. This latter configuration allows you to increase your overall space utilization and even oversubscribe an aggregate by allocating more volumes on it than the aggregate could actually accommodate if all the subscribed volumes reached their full size. However, if an oversubscribed aggregate does fill up then all its volumes will run out of space before they reach their maximum volume size, so oversubscription deployments generally require a greater degree of administrative vigilance around space utilization.
In the Clusters section, you created a new aggregate named “aggr1_cluster1_01”; you will now use that aggregate to host a new thinly provisioned volume named “engineering” for the SVM named “svm1”.
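In command line terms, thin versus thick provisioning corresponds to the volume’s space guarantee setting (“none” for thin, “volume” for thick). The following commands are a minimal sketch of how you could inspect or change that setting once the engineering volume exists; they are shown for illustration and are not a required lab step.
cluster1::> volume show -vserver svm1 -volume engineering -fields space-guarantee
cluster1::> volume modify -vserver svm1 -volume engineering -space-guarantee volume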
  • 75. 75 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. To perform this section’s tasks from the GUI: 1. In System Manager, open the Storage Virtual Machines tab. 2. Navigate to cluster1->svm1->Storage->Volumes. 3. Click Create to launch the Create Volume wizard.
  • 76. 76 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Volume window opens. 1. Populate the following values into the data fields in the window.  Name: engineering  Aggregate: aggr1_cluster1_01  Total Size: 10 GB  Check the Thin Provisioned checkbox. Leave the other values at their defaults. 2. Click the Create button.
  • 77. 77 Basic Concepts for Clustered Data ONTAP 8.3 © 2015 NetApp, Inc. All rights reserved. The Create Volume window closes, and focus returns to the Volumes pane in System Manager. The newly created engineering volume should now appear in the Volumes list. Notice that the volume is 10 GB in size, and is thin provisioned. System Manager has also automatically mapped the engineering volume into the SVM’s NAS namespace.
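If you would like to confirm that junction from the command line, or to re-junction a volume at a different path, commands along the following lines apply. Consider this a hedged sketch rather than a required lab step; the junction path shown assumes the default mapping that System Manager chose for the engineering volume.
cluster1::> volume show -vserver svm1 -volume engineering -fields junction-path
cluster1::> volume unmount -vserver svm1 -volume engineering
cluster1::> volume mount -vserver svm1 -volume engineering -junction-path /engineering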