Systems and Technology Group
Solution Brief




Intelligent storage management with IBM General Parallel File System
Optimize storage management and boost your return on investment

Highlights:

• Increase performance and scalability of your IBM AIX, Linux and Windows™ systems
• Maintain high data availability with virtually disruption-free expansion and migration
• Improve efficiency through enterprisewide information sharing
• Ease data management and enhance administrative control using an information lifecycle management toolset
• Simplify file system administration with an extensible management and monitoring infrastructure
• Gain greater return on investment and grow affordably with better hardware and infrastructure utilization

The explosive growth of data, transactions and digitally aware devices is straining IT infrastructure and operations, while storage budgets shrink and user expectations continue to multiply. As infrastructures support more users and become overburdened with storage, availability of data at the file level may degrade—and data management becomes more complicated.

In the past, organizations have addressed data management challenges by clustering servers or using network attached storage. But clustered servers can be limited in scalability and often require redundant copies of data. Traditional network attached storage solutions are restricted in performance, security and scalability. A single file server cannot scale, and even a roomful of file servers is not flexible enough to provide the dynamic, 24x7 data access required of a data-intensive computing environment. To overcome these issues, you need a new, more effective approach to managing data.

The IBM General Parallel File System (IBM GPFS™) can help you move beyond simply adding storage to optimizing data management. GPFS is a high-performance, shared-disk file management solution that can provide faster, more reliable access to a common set of file data. Enabling a view of distributed data with a single global namespace across platforms, GPFS is also designed to provide:

• Online storage management
• Scalable data access through tightly integrated information lifecycle tools capable of managing petabytes of data and billions of files
• Centralized administration
• Shared access to file systems from remote GPFS clusters
• Improved storage use
Handling the explosive growth of digital information more efficiently
GPFS provides several essential services to help you effectively manage growing quantities of unstructured data. GPFS leverages its cluster architecture to provide quicker access to your file data. File data is automatically spread across multiple storage devices, providing optimal use of your available storage to deliver high performance.

GPFS differs significantly from the Network File System (NFS). With GPFS, there is no single-server bottleneck or protocol overhead for data transfer. GPFS takes file management beyond a single system by providing scalable access from multiple systems to a global namespace. GPFS interacts with applications like a local file system but is designed to deliver higher performance, scalability and fault tolerance by allowing access to the data from multiple systems directly and in parallel.

GPFS allows multiple applications or users to share access to a single file simultaneously while maintaining file-data integrity. For example, multiple animators or editors can work on different parts of a single video file from multiple workstations at the same time. This capability helps you reduce the storage and management overhead required to maintain several copies of the source file. GPFS uses fine-grained locking based on a sophisticated, scalable token (lock) management system to help ensure data consistency during concurrent access and to prevent multiple applications or users from updating the same portion of a file at the same time.

Providing greater data access to optimize performance for demanding applications
GPFS is designed for high-performance parallel workloads. Data and metadata flow from all the nodes to all the disks in parallel under the control of a distributed lock manager. Its very flexible cluster architecture enables the design of a data storage solution that not only meets current needs but can also quickly be adapted to new requirements or technologies. GPFS configurations include direct-attached storage, network block input and output (I/O) or a combination of the two, and multisite operations with synchronous data mirroring.

Applications requiring the highest per-node throughput often deploy the direct-attached model, where disks are physically attached to all nodes in the cluster that access the file system. The direct-attached model requires an operating system device interface to the storage, for example a Fibre Channel SAN or InfiniBand connection.

To optimize the cost of your storage infrastructure, deploy a large number of systems, or seamlessly integrate multiple platforms and interconnect storage technologies, you can use the network block I/O (also called network shared disk [NSD]) model. Network block I/O is a software layer that transparently forwards block I/O requests from a GPFS client application node to an NSD server node to perform the disk I/O operation, and then passes the data back to the client. A network block I/O configuration can be more cost effective than a full-access SAN, and you can use this type of configuration to tie together systems across a wide area network (WAN). You can optimize the configuration of clustered systems to attached storage for your specific mix of workloads, and you can develop a data workflow within one location or across the WAN to help save time and the cost of managing file data.

Expanding beyond a storage area network (SAN), a single GPFS file system can be accessed by nodes using a TCP/IP or InfiniBand connection. Using this block-based network data access, GPFS can outperform network-based sharing technologies like NFS—and even local file systems such as the EXT3 journaling file system for Linux® or the Journaled File System.

To exploit disk parallelism when reading a large file from a single-threaded application, GPFS intelligently prefetches data into its buffer pool whenever it recognizes an access pattern, issuing I/O requests in parallel to as many disks as necessary to achieve the peak bandwidth of the underlying storage-hardware infrastructure. GPFS recognizes multiple I/O patterns, including sequential, reverse sequential and various forms of striped access patterns. In addition, for high-bandwidth environments like digital media, GPFS can read or write large blocks of data in a single operation, minimizing the overhead of I/O operations. GPFS has been proven in applications with aggregate data and bandwidth requirements that exceed the capacity of typical shared file systems.
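As a rough sketch of how the network block I/O (NSD) model described above might be set up, the commands below create a small cluster, define NSDs served over the network and build a file system striped across them. The node names, descriptor file and file system name are illustrative only, and command options and descriptor formats vary between GPFS releases, so treat this as an outline rather than an installation procedure.

    # Create a GPFS cluster from a list of nodes (node names are examples)
    mmcrcluster -N nodes.list -p nsd1.example.com -s nsd2.example.com \
                -r /usr/bin/ssh -R /usr/bin/scp

    # Turn the shared disks into NSDs; the descriptor file names each disk and
    # the NSD server(s) that export it to clients over TCP/IP or InfiniBand
    mmcrnsd -F disks.desc

    # Create a file system across those NSDs, then start GPFS and mount it
    # (argument order and defaults differ between releases)
    mmcrfs gpfs1 -F disks.desc -B 1M -T /gpfs1
    mmstartup -a
    mmmount gpfs1 -a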
Configuring file systems for high availability and reliability
For optimal reliability, GPFS can be configured to eliminate single points of failure. The file system can be configured to remain available automatically in the event of a disk or server failure.

A GPFS file system is designed to transparently fail over token (lock) operations and other GPFS cluster services, which can be distributed throughout the cluster to eliminate the need for dedicated metadata servers. GPFS can be configured to automatically recover from node, storage and other infrastructure failures.

GPFS provides this functionality by supporting data replication to increase availability in the event of a storage media failure; multiple paths to the data in the event of a communications or server failure; and file system activity logging, enabling consistent, fast recovery after system failures. In addition, GPFS supports snapshots to provide a space-efficient image of a file system at a specified time, which allows online backup and can help protect against user error.

GPFS offers time-tested reliability and has been installed on thousands of nodes across industries, from weather research to broadcasting, retail, financial industry analytics and web service providers. GPFS also is the basis of many cloud storage offerings.

Simplifying data management with advanced tools
GPFS provides an information lifecycle management (ILM) toolset that helps simplify data management by giving you more control over data placement. The toolset includes storage pooling and a high-performance, scalable, rule-based policy engine.

Storage pools enable you to transparently manage multiple tiers of storage based on performance or reliability. You can use storage pools to provide the appropriate type of storage to multiple applications or to different portions of a single application within the same directory. For example, GPFS can be configured to use low-latency disks for the index operations and high-capacity disks for the data operations of a relational database, even if all database files are created in the same directory. In addition, storage pools can simplify storage management by allowing you to transparently retask older storage systems, moving them from a tier 1 role to tier 2.

As files are created, policy-driven automation places file data in the appropriate storage pool based on business rules you define, regardless of where the file is placed in the directory structure. Once files exist, the policy engine can move files among storage pools or to tape, delete them after a specified time period, or provide reports on the contents of the file system.

The high-performance policy engine leverages the parallel metadata infrastructure of GPFS to provide highly scalable policy processing, which allows you to actively manage billions of files across multiple storage tiers.

Easing file system administration
GPFS provides a common management interface that simplifies administration even in very large environments. The administration model is easy to use and consistent with standard IBM AIX® and Linux® file system administration. Cluster operations can be managed from any node in the GPFS cluster, and the commands support cluster management and other standard file system administration functions, including user quotas, snapshots and storage management.
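To make the storage pool, policy and administration capabilities described above more concrete, the following is a small, hypothetical policy file. The pool names ('system' and 'archive'), the thresholds and the file-matching rules are examples only, and policy syntax can vary between GPFS releases.

    /* Illustrative GPFS policy rules (pool names and thresholds are examples) */
    RULE 'db'      SET POOL 'system'  WHERE NAME LIKE '%.db'   /* fast disks */
    RULE 'default' SET POOL 'archive'                          /* everything else */
    /* When the fast pool reaches 90% full, migrate cooler files until it drops to 70% */
    RULE 'tier'    MIGRATE FROM POOL 'system' THRESHOLD(90,70) TO POOL 'archive'
    /* Delete scratch files that have not been accessed for a year */
    RULE 'purge'   DELETE WHERE NAME LIKE '%.tmp'
                   AND (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '365' DAYS

A policy file like this is typically installed with mmchpolicy and evaluated with mmapplypolicy, while commands such as mmcrsnapshot and mmedquota cover the snapshot and quota functions mentioned above; the exact options depend on your GPFS release.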
The ability to run multiple versions of GPFS in the same cluster means you can do rolling upgrades of operating systems and GPFS versions. Rolling upgrades can provide a near-zero downtime method for upgrading cluster nodes, one node at a time, allowing application file data access throughout the upgrade process.
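As a sketch of what a rolling upgrade of a single node might look like (the node and file system names are examples, and you should follow the documented upgrade procedure for your GPFS release):

    # Stop GPFS on one node while the rest of the cluster continues to serve data
    mmshutdown -N node03
    # ...install the new GPFS packages and, if required, the new operating system level...
    mmstartup -N node03

    # After every node has been upgraded, commit the new cluster and file system formats
    mmchconfig release=LATEST
    mmchfs gpfs1 -V full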

Heterogeneous servers and storage systems can be added to and removed from a GPFS cluster while the file system remains online, further simplifying management and helping enable 24x7 operations. When storage is added or removed, the data can be dynamically rebalanced to maintain optimal performance.
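For example, capacity might be added to a mounted file system and the data rebalanced across the new disks while applications keep running; the file system name and descriptor file below are illustrative, and options vary by GPFS release.

    # Add newly defined NSDs to the mounted file system
    mmadddisk gpfs1 -F newdisks.desc
    # Restripe existing data and metadata across the enlarged set of disks
    mmrestripefs gpfs1 -b
    # Later, drain and remove an aging disk; its data migrates to the remaining disks
    mmdeldisk gpfs1 nsd_old01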
Helping reduce the cost of managing your storage environment
GPFS can reduce data duplication and make more efficient use of storage components by combining isolated islands of information into a centralized, high-performance storage infrastructure. It also can help improve server hardware utilization by allowing dynamic storage access to all data from any node. By improving storage use, optimizing process workflow and simplifying storage administration, GPFS lets you take a multipronged approach to reducing storage costs.

Summary
GPFS already is managing data volumes that most companies will not need to support for five years or more. You may not need a multipetabyte file system today, but with GPFS you know you will have room to expand as your data volume increases. GPFS can scale to meet the dynamic needs of your business, providing a solid application infrastructure with NFS file serving capabilities.

GPFS provides world-class performance, scalability and availability for your file data and is designed to optimize the use of storage, support scale-out applications and provide a highly available platform for data-intensive applications. A dynamic infrastructure that includes GPFS can help your company streamline data workflows, improve service, reduce costs and manage risk—delivering real business results today while positioning you for future growth.

More information
To learn more about IBM GPFS, contact your IBM representative or IBM Business Partner, or visit the GPFS website:

ibm.com/systems/gpfs

© Copyright IBM Corporation 2010

IBM Corporation
Marketing Communications
Systems Group
Route 100
Somers, NY 10589
U.S.A.

Produced in the United States of America
July 2010
All Rights Reserved

IBM, the IBM logo, ibm.com, AIX and GPFS are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or TM), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at ibm.com/legal/copytrade.shtml.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

InfiniBand is a trademark and/or service mark of the InfiniBand Trade Association.

Other company, product, and service names may be trademarks or service marks of others.

Please Recycle

DCS03005-USEN-01
