1. Spectrum Scale 4.1 System Administration
Clustered NFS (cNFS)
© Copyright IBM Corporation 2015
3. What is cNFS?
• Allows you to have multiple Spectrum Scale servers sharing
out a common namespace using NFS.
• Requires Spectrum Scale servers running Linux.
• Provides High Availability over NFS.
• High Performance and scalability.
• Low-cost entry points
• Incremental growth.
• Compatibility with NFS clients
4. What is cNFS?
• Enables customers to run a Spectrum Scale data serving
cluster in which some or all nodes export the Spectrum Scale
file system via NFS.
• Provides scalable capacity accessed via NFS
5. What problems does cNFS solve for customers?
• NFS is typically served up using NAS appliances
• NAS appliances are generally limited to a maximum of roughly 100 TB
• Scaling with NAS appliances means adding another NAS
appliance, and therefore another namespace to manage
• Scaling to 2 PB means the administrator has to manage 20
machines and 20 file systems and guess at how best to
balance the load across all 20
• This ordeal happens over and over again, by hand, each
time another NAS filer is added
• Spectrum Scale provides a single namespace and auto
rebalancing every time you increase capacity
• Spectrum Scale provides High Availability over NFS by
providing multiple NFS entry points into the file system and
NFS failover to another entry point in the event of a problem
6. What problems does cNFS solve for customers?
• With Spectrum Scale, you can just add more storage and go
back to bed
7. Components of cNFS?
• Load balancing via round-robin (RR) DNS
– A single name represents the list of cNFS IP addresses
– Clients are spread across the different nodes through this single name
• Monitoring
– Monitors all cNFS components:
• NFS components (nfsd, mountd, statd, lockd)
• Network (interface, routes, remote services, …)
• Failover
– Fails over NFS traffic from one cNFS server to another when the
monitoring utility detects a failure
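The round-robin DNS piece can be illustrated with a small sketch; the zone name, file path, and addresses below are hypothetical examples, not part of the cNFS product:

```shell
# Hypothetical round-robin DNS fragment (BIND zone syntax): the single
# name "cnfs" resolves to every cNFS node address, so clients naturally
# spread their mounts across the nodes. All values are examples only.
cat <<'EOF' > example.com.zone.cnfs
cnfs    IN  A   10.10.1.1
cnfs    IN  A   10.10.1.2
cnfs    IN  A   10.10.1.3
EOF
grep -c 'IN  A' example.com.zone.cnfs   # count the A records behind the one name
```

Clients then mount `cnfs.example.com:/gpfs/fs1` and the resolver hands out the addresses in rotation.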
8. Failover steps
1. The NFS monitoring utility detects an NFS-related failure.
2. The NFS monitoring utility stops NFS serving and kills the
Spectrum Scale daemon on the failing node.
3. The Spectrum Scale cluster detects the node failure. All of
the clustered NFS nodes enter a grace period that blocks NFS client
lock requests.
4. The Spectrum Scale cluster completes recovery, including the release
of any locks held by the failing node.
5. The NFS cluster moves the NFS locks from the failing node to another
node in the cluster and invokes NFS recovery.
6. The NFS cluster performs IP address takeover (including the sending
of gratuitous ARPs).
7. The NFS cluster notifies all relevant NFS clients to start lock
reclamation.
8. Clients reclaim locks according to the NFS standards.
9. At the end of the grace period, all operations return to normal.
• Note: in certain scenarios, GPFS locks granted locally during the
grace period might not be recognized by NFS during the reclaim process.
9. Pre-reqs for cNFS
• System pre-reqs
– SLES 11 or later, or RHEL 5.4 or later
– Linux 2.6 kernel
• Earlier versions of SLES/RHEL require OS patches:
– lockd patch (both RHEL and SLES, for NLM locking)
– sm-notify (SLES versions only; included in releases after RHEL 4)
– rpc.statd (RHEL versions only)
• Network pre-reqs
– Define separate IP addresses for cNFS
• Virtual or real
• Static IP only
• Must not already be started (cNFS will bring it up)
10. Setting up cNFS
• Create a separate Spectrum Scale location for the cNFS
shared files.
mmchconfig cnfsSharedRoot=<dir>
• Configure /etc/exports on all NFS servers
• Define the cNFS IP addresses
mmchnode --cnfs-interface=10.10.1.1 -N node1
• Configure cluster-wide cNFS parameters (optional)
mmchconfig cnfsVIP=<dns_name>,cnfsMountdPort=<mountd_port>,
cnfsNFSDprocs=<nfsd_procs>
• Create multiple failover groups for the cNFS IP addresses
(optional)
mmchnode -N <node_name> --cnfs-groupid=<xx>
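Putting the steps above together, a minimal end-to-end setup on a two-node cluster might look like the following sketch. The node names, shared-root path, export path, addresses, and option values are all illustrative assumptions, not recommendations:

```shell
# Hypothetical two-node cNFS setup sketch. All names, paths, and
# addresses are examples; adjust to your cluster before running.

# 1. Point cNFS at a directory inside a mounted Spectrum Scale
#    file system for its shared state files.
mmchconfig cnfsSharedRoot=/gpfs/fs1/.cnfs

# 2. Export the file system identically on every NFS server.
cat <<'EOF' >> /etc/exports
/gpfs/fs1  *(rw,fsid=745,sync,no_root_squash)
EOF

# 3. Give each server its cNFS IP address (static, not yet
#    configured on the interface; cNFS brings it up itself).
mmchnode --cnfs-interface=10.10.1.1 -N node1
mmchnode --cnfs-interface=10.10.1.2 -N node2

# 4. Optionally pin cluster-wide cNFS parameters.
mmchconfig cnfsMountdPort=4002,cnfsNFSDprocs=64

# 5. Verify the resulting configuration.
mmlscluster --cnfs
```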
11. cNFS administration (1 of 2)
• As of Spectrum Scale 4.1, cNFS supports IPv6 and NFSv4
• Query cNFS cluster information
mmlscluster --cnfs
• Query the cNFS configuration
mmlsconfig | grep -i cnfs
12. cNFS administration (2 of 2)
• Disabling a cNFS node
– Temporarily disable it for service, etc.
– Remove its NFS IP from the RR DNS
– Fail over existing clients to another NFS server:
mmshutdown -N <node_name>
mmchnode -N <node_name> --cnfs-disable
• Enabling a cNFS node
– Add the IP back to the RR DNS
mmchnode -N <node_name> --cnfs-enable
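The disable/enable sequence above can be sketched as a small maintenance script. The node name is a placeholder, the DNS updates are site-specific manual steps, and the `mmstartup` call is an assumption (the daemon must be running again before the node can serve NFS):

```shell
#!/bin/sh
# Hypothetical maintenance sketch: take one cNFS node out of service
# and bring it back. The node name and the DNS update steps are
# placeholders for site-specific procedures.
NODE=node1

# --- take the node out of service ---
# (first: manually remove the node's cNFS IP from the RR DNS entry)
mmshutdown -N "$NODE"               # stop Spectrum Scale on the node
mmchnode -N "$NODE" --cnfs-disable  # mark the node disabled for cNFS

# ... perform hardware/OS maintenance ...

# --- bring the node back ---
mmstartup -N "$NODE"                # assumption: restart the daemon first
mmchnode -N "$NODE" --cnfs-enable   # re-enable cNFS on the node
# (finally: manually add the node's cNFS IP back to the RR DNS entry)
```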
13. cNFS tuning
• NFS server tuning
– Same as standard Linux
– nfsd procs (number of NFS server threads)
– /proc/fs/nfsd/threads
• Spectrum Scale parameters
– Increase maxFilesToCache
– Increase pagepool
– nfsPrefetchStrategy=1
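As a sketch, the tuning knobs above might be applied as follows; the numeric values are illustrative assumptions, not recommendations:

```shell
# Hypothetical tuning sketch; all values are illustrative only.

# NFS server side: raise the number of nfsd threads (standard Linux).
echo 64 > /proc/fs/nfsd/threads

# Spectrum Scale side: larger metadata cache and pagepool, plus
# sequential-friendly NFS prefetching.
mmchconfig maxFilesToCache=100000,pagepool=4G,nfsPrefetchStrategy=1
```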
14. Sneak Peek at June Release for Protocol Nodes
15. Review
• cNFS allows multiple Spectrum Scale nodes to share a
common set of NFS exports
16. Unit summary
Having completed this unit, you should be able to:
• Describe clustered NFS
• Install and configure cNFS