NameNode HA
Suresh Srinivas - Hortonworks
Aaron T. Myers - Cloudera
Overview
• Part 1 – Suresh Srinivas (Hortonworks)
  − HDFS Availability and Data Integrity – what is the track record?
  − NN HA Design
• Part 2 – Aaron T. Myers (Cloudera)
  − NN HA Design continued
    ▪ Client-NN connection failover
  − Operations and Admin of HA
  − Future Work




Current HDFS Availability & Data Integrity

• Simple design, storage fault tolerance
  − Storage: rely on the OS's file system rather than using raw disk
  − Storage Fault Tolerance: multiple replicas, active monitoring
  − Single NameNode Master
    ▪ Persistent state: multiple copies + checkpoints
    ▪ Restart on failure
• How well did it work?
  − Lost 19 out of 329 million blocks on 10 clusters with 20K nodes in 2009
    ▪ Seven 9s of reliability (worked out below)
    ▪ Fixed in releases 0.20 and 0.21
  − 18-month study: 22 failures on 25 clusters – 0.58 failures per year per cluster
    ▪ Only 8 would have benefited from HA failover! (0.23 failures per cluster-year)
  − NN is very robust and can take a lot of abuse
    ▪ NN is resilient against overload caused by misbehaving apps

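A back-of-the-envelope check of those figures (not from the original deck): 19 lost blocks out of 329 million is a loss rate of 19 / 329,000,000 ≈ 5.8 × 10⁻⁸, i.e. block durability of about 99.999994% – roughly seven 9s. Likewise, 22 failures across 25 clusters over 18 months is 22 / (25 × 1.5) ≈ 0.59 failures per year per cluster, consistent with the 0.58 figure above.
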
HA NameNode
Active work has started on an HA NameNode (failover)
• HA NameNode
  − Detailed design and sub tasks in HDFS-1623


• HA: Related work
  − Backup NN (0.21)
  − Avatar NN (Facebook)
  − HA NN prototype using Linux HA (Yahoo!)
  − HA NN prototype with Backup NN and block report replicator (eBay)


                      HA is the highest priority


Approach and Terminology
• Initial goal is Active-Standby
  − With Federation, each namespace volume has a NameNode
    ▪ Single active NN for any namespace volume
• Terminology
  − Active NN – actively serves read/write operations from clients
  − Standby NN – waits; becomes active when the Active dies or is unhealthy
    ▪ Could serve read operations
  − Standby's state may be cold, warm or hot (see the enum sketch below)
    ▪ Cold: Standby has zero state (e.g., started after the Active is declared dead)
    ▪ Warm: Standby has partial state:
      • has loaded fsImage & editLogs but has not received any block reports
      • has loaded fsImage and rolled logs and all block reports
    ▪ Hot: Standby has almost all of the Active's state and can take over immediately
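
An illustrative aside (not from the deck): the cold/warm/hot taxonomy maps onto a small enum. The HAServiceState type that later shipped in Hadoop distinguishes active vs. standby rather than freshness, so treat this purely as documentation of the definitions above:

    // Illustrative only: the deck's taxonomy of standby freshness.
    enum StandbyState {
        COLD,  // zero state, e.g. started only after the Active was declared dead
        WARM,  // fsImage/edit logs loaded, but block reports missing or partial
        HOT    // nearly all of the Active's state; can take over immediately
    }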


High Level Use Cases
• Planned downtime
  − Upgrades
  − Config changes
  − Main reason for downtime
• Unplanned downtime
  − Hardware failure
  − Server unresponsive
  − Software failures
  − Occurs infrequently
• Supported failures
  − Single hardware failure
    ▪ Double hardware failure not supported
  − Some software failures
    ▪ The same software failure affecting both active and standby is not covered
Use Cases
• Deployment models
  − Single NN configuration; no failover
  − Active and Standby with manual failover
    ▪ Standby could be cold/warm/hot
    ▪ Addresses downtime during upgrades – the main cause of unavailability
  − Active and Standby with automatic failover
    ▪ Hot standby
    ▪ Addresses downtime during upgrades and other failures




               See HDFS-1623 for detailed use cases



Design
• Failover control outside NN
• Parallel Block reports to Active and Standby (Hot failover)
• Shared or non-shared NN state
• Fencing of shared resources/data
  − Datanodes
  − Shared NN state (if any)
• Client failover
  − IP Failover
  − Smart clients (e.g., configuration, or ZooKeeper for coordination)




Failover Control Outside NN

[Diagram: an HA daemon outside the NameNode monitors a set of uniformly modeled resources (NN, OS, HW, network) plus shared resources, issues actions such as start, stop, failover, and monitor, and exchanges heartbeats with a quorum service]

• HA daemon outside NameNode
• Daemon manages resources
  − All resources modeled uniformly
  − Resources – OS, HW, network, etc.
  − NameNode is just another resource
• Heartbeat with other nodes
• Quorum-based leader election (sketched below)
  − ZooKeeper for coordination and quorum
• Fencing during split brain
  − Prevents data corruption
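
The quorum-based election above can be sketched with ZooKeeper's ephemeral znodes. This is a minimal illustration under assumed names (the znode path /nn-ha/active-lock and the class name are made up, and a real failover controller would also fence the old Active before taking over), not the actual HA daemon code:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ActiveElection implements Watcher {
        private static final String LOCK = "/nn-ha/active-lock";  // hypothetical path
        private final ZooKeeper zk;

        public ActiveElection(String ensemble) throws Exception {
            zk = new ZooKeeper(ensemble, 5000, this);  // 5 s session timeout
        }

        /** Try to become Active by creating an ephemeral znode; only one node wins. */
        public boolean tryBecomeActive(byte[] myId) throws Exception {
            try {
                zk.create(LOCK, myId, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return true;   // we hold the lock: transition our NN to Active
            } catch (KeeperException.NodeExistsException e) {
                zk.exists(LOCK, true);  // set a watch on the current Active's znode
                return false;  // stay Standby
            }
        }

        @Override
        public void process(WatchedEvent event) {
            // When the Active's session expires, its ephemeral znode is deleted and
            // this watch fires; a real controller would fence, then re-run the election.
        }
    }
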
NN HA with Shared Storage and ZooKeeper

[Diagram: a three-node ZooKeeper ensemble; a FailoverController beside each NameNode exchanges heartbeats with ZK, monitors the health of its NN, OS, and HW, and sends commands to its NN; the Active and Standby NNs share NN state with a single writer (fencing); DataNodes sit below both NNs]

• Block reports go to both Active & Standby
• DN fencing: update cmds accepted from one NN only
HA Design Details


Client Failover Design
• Smart clients
  − Users use one logical URI; the client selects the correct NN to connect to
• Implementing two options out of the box (see the sketch below)
  − Client knows of multiple NNs
  − Use a coordination service (ZooKeeper)
• Common things between these
  − Which operations are idempotent, and therefore safe to retry on a failover
  − Failover/retry strategies
• Some differences
  − Expected time for client failover
  − Ease of administration
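
A sketch of the "client knows of multiple NNs" option from application code, using the configuration keys as they eventually shipped in Hadoop 2.x (the talk predates the final naming, and the nameservice and hostnames here are made up):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HaClientExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // One logical nameservice; clients never name a physical NN directly.
            conf.set("dfs.nameservices", "mycluster");
            conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
            conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
            conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
            // Failover policy: try the configured NNs until the Active is found.
            conf.set("dfs.client.failover.proxy.provider.mycluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
            FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
            // Idempotent read: safe for the client to retry against the other NN.
            System.out.println(fs.exists(new Path("/")));
        }
    }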

Ops/Admin: Shared Storage
• To share NN state, need shared storage
  − Needs to be HA itself to avoid just shifting the SPOF
    ▪ BookKeeper, etc. will likely take care of this in the future
  − Many come with IP fencing options
  − Recommended mount options:
    ▪ tcp,soft,intr,timeo=60,retrans=10
• Not all edits directories are created equal
  − It used to be that all edits dirs were just a pool of redundant dirs
  − Can now configure some edits directories to be required (sketched below)
  − Can now configure the number of tolerated failures
  − You want at least 2 for durability, 1 remote for HA
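
A minimal sketch of the "required edits directory" setup, using property names as they landed in Hadoop 2.x (treat the exact keys and paths as illustrative; the talk predates the final naming):

    import org.apache.hadoop.conf.Configuration;

    public class EditsDirsExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Two local dirs for durability plus one remote (NFS) dir for HA;
            // the remote mount would use: tcp,soft,intr,timeo=60,retrans=10
            conf.set("dfs.namenode.edits.dir",
                "/data/1/dfs/edits,/data/2/dfs/edits,/mnt/filer/dfs/edits");
            // Mark the shared dir required: the NN aborts rather than quietly
            // continuing without it, instead of treating all dirs as one pool.
            conf.set("dfs.namenode.edits.dir.required", "/mnt/filer/dfs/edits");
            System.out.println(conf.get("dfs.namenode.edits.dir.required"));
        }
    }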



Ops/Admin: NN fencing
• Client failover does not solve the fencing problem
• Out of the box
  − RPC to the active NN to tell it to go to standby (graceful failover)
  − SSH to the active NN and `kill -9` the NN
• Pluggable options
  − Many filers have protocols for IP-based fencing
  − Many PDUs have protocols for IP-based plug-pulling (STONITH)
    ▪ Nuke the node from orbit. It's the only way to be sure.
• Configure extra options if available to you (see the sketch below)
  − They will be tried in order during a failover event
  − Escalate the aggressiveness of the methods
  − Fencing is critical for correctness of NN metadata
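
A sketch of how these escalating fencing methods are expressed in the mechanism that eventually shipped (dfs.ha.fencing.methods, tried in order); the PDU script path is made up:

    import org.apache.hadoop.conf.Configuration;

    public class FencingConfigExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Methods are tried in order until one succeeds:
            // 1) SSH in and kill the NN process; 2) hypothetical PDU script (STONITH).
            conf.set("dfs.ha.fencing.methods",
                "sshfence\n" +
                "shell(/usr/local/bin/pdu-poweroff.sh $target_host)");
            // Key used by sshfence to log in to the active NN's host.
            conf.set("dfs.ha.fencing.ssh.private-key-files", "/home/hdfs/.ssh/id_rsa");
            System.out.println(conf.get("dfs.ha.fencing.methods"));
        }
    }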


Ops/Admin: Monitoring
• New NN metrics
  − Size of pending DN message queues
  − Seconds since the standby NN last read from the shared edit log
  − DN block report lag
  − All measurements of standby NN lag – monitor/alert on all of these
• Monitor the shared storage solution
  − Volumes fill up, disks go bad, etc.
  − Should configure a paranoid edit log retention policy (default is 2)
• Canary-based monitoring of HDFS is a good idea (sketched below)
  − Pinging both NNs is not sufficient
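
A minimal canary sketch: rather than pinging the NNs, exercise a full write/read/delete round trip through whichever NN is active (the canary path is arbitrary; alerting wiring is left out):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCanary {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path canary = new Path("/tmp/.hdfs-canary");  // hypothetical path
            long start = System.currentTimeMillis();
            try (FSDataOutputStream out = fs.create(canary, true)) {
                out.writeLong(start);                     // NN metadata op + DN write
            }
            try (FSDataInputStream in = fs.open(canary)) {
                in.readLong();                            // full read path
            }
            fs.delete(canary, false);
            System.out.println("canary round trip: "
                + (System.currentTimeMillis() - start) + " ms");
        }
    }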



Ops/Admin: Hardware
• Active/Standby NNs should be on separate racks
• The shared storage system should be on a separate rack
• Active/Standby NNs should have close to the same hardware
  − Same amount of RAM – they need to store the same things
  − Same # of processors – they need to serve the same number of clients
• All the usual NN recommendations still apply
  − ECC memory, 48GB
  − Several separate disks for NN metadata directories
  − Redundant disks for OS drives, probably RAID 5 or mirroring
  − Redundant power



Future Work
• Other options to share NN metadata
  − BookKeeper
  − Multiple, potentially non-HA filers
  − Entirely different metadata system
• More advanced client failover/load shedding
  − Serve stale reads from the standby NN
  − Speculative RPC
  − Non-RPC clients (IP failover, DNS failover, proxy, etc.)
• Even Higher HA
  − Multiple standby NNs



Q&A

• Detailed design (HDFS-1623)
  − Community effort
  − Sub-tasks: HDFS-1971, 1972, 1973, 1974, 1975, 2005, 2064, 1073






Editor's notes

1. Data – can I read what I wrote? Availability – is the service up? When I asked one of the original authors of GFS if there were any decisions they would revisit – random writers. Simplicity is key. Raw disk – file systems take time to stabilize; by relying on the OS's file system we can take advantage of ext4, xfs, or zfs.