HBASE SCHEMA DESIGN
and Cluster Sizing Notes
ApacheCon Europe, November 2012

Lars George
Director EMEA Services
About Me

•  Director EMEA Services @ Cloudera
    •  Consulting on Hadoop projects (everywhere)
•  Apache Committer
    •  HBase and Whirr
•  O’Reilly Author
    •  HBase – The Definitive Guide
      •  Now in Japanese!

•  Contact
    •  lars@cloudera.com        (The Japanese edition is out, too!)
    •  @larsgeorge
Agenda

•  HBase Architecture
•  Schema Design
•  Cluster Sizing Notes
HBASE ARCHITECTURE
HBase Tables
HBase Tables and Regions

•  A table is made up of any number of regions
•  Region is specified by its startKey and endKey
    •  Empty table: (Table, NULL, NULL)
    •  Two-region table: (Table, NULL, “com.cloudera.www”)
       and (Table, “com.cloudera.www”, NULL)
•  Each region may live on a different node and is
 made up of several HDFS files and blocks, each
 of which is replicated by Hadoop
Distribution
HBase Tables

•  Tables are sorted by Row in lexicographical order
•  Table schema only defines its column families
    •  Each family consists of any number of columns
    •  Each column consists of any number of versions
    •  Columns only exist when inserted, NULLs are free
    •  Columns within a family are sorted and stored
       together
    •  Everything except the table name is a byte[]


(Table, Row, Family:Column, Timestamp) -> Value
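
As a concrete illustration of this coordinate model, here is a minimal sketch against the 2012-era Java client API; the table name "webtable", family "data", and qualifier "title" are made up for the example.

    // Every value is addressed by (Table, Row, Family:Column, Timestamp).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CoordinateExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "webtable");          // Table
        Put put = new Put(Bytes.toBytes("com.cloudera.www")); // Row
        put.add(Bytes.toBytes("data"),                        // Family
                Bytes.toBytes("title"),                       // Column qualifier
                System.currentTimeMillis(),                   // Timestamp
                Bytes.toBytes("Cloudera"));                   // -> Value
        table.put(put);

        Result result = table.get(new Get(Bytes.toBytes("com.cloudera.www")));
        System.out.println(Bytes.toString(
            result.getValue(Bytes.toBytes("data"), Bytes.toBytes("title"))));
        table.close();
      }
    }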
HBase Architecture
HBase Architecture (cont.)

•  HBase uses HDFS (or similar) as its reliable
 storage layer
  •  Handles checksums, replication, failover
•  Native Java API, Gateway for REST, Thrift, Avro
•  The Master manages the cluster
•  RegionServers manage the data
•  ZooKeeper is used as the “neural network”
    •  Crucial for HBase
    •  Bootstraps and coordinates cluster
HBase Architecture (cont.)

•  Based on Log-Structured Merge-Trees (LSM-Trees)
•  Inserts are done in write-ahead log first
•  Data is stored in memory and flushed to disk on
   regular intervals or based on size
•  Small flushes are merged in the background to keep
   number of files small
•  Reads check the memory stores first and the disk-based
   files second
•  Deletes are handled with “tombstone” markers
•  Atomicity on the row level, no matter how many columns
   •  keeps the locking model simple
MemStores
•  After data is written to the WAL the RegionServer
   saves KeyValues in memory store
•  Flush to disk based on size, see
   hbase.hregion.memstore.flush.size
•  Default size is 64MB
•  Uses snapshot mechanism to write flush to disk
   while still serving from it and accepting new data
   at the same time
•  Snapshots are released when flush has
   succeeded
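
A small sketch of where that threshold lives; in practice the property is set in hbase-site.xml on the RegionServers, and it is shown here through the Hadoop Configuration API only to illustrate the name and units (bytes). The 128MB value is an example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushSizeConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Per-region MemStore flush threshold, in bytes (example: 128MB).
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        System.out.println(conf.getLong("hbase.hregion.memstore.flush.size", -1));
      }
    }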
Compactions
•  General Concepts
    •  Two types: Minor and Major Compactions
    •  Asynchronous and transparent to client
    •  Manage file bloat from MemStore flushes
•  Minor Compactions
    •  Combine last “few” flushes
    •  Triggered by number of storage files
•  Major Compactions
    •  Rewrite all storage files
    •  Drop deleted data and those values exceeding TTL and/or number of
       versions
    •  Triggered by time threshold
    •  Cannot be scheduled automatically starting at a specific time (bummer!)
    •  May (most definitely) tax overall HDFS IO performance

Tip: Disable automatic major compactions and schedule them to run
  manually (e.g. via cron) at off-peak times, as sketched below
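
A hedged sketch of that tip: the automatic interval is controlled by hbase.hregion.majorcompaction (0 disables it, set server-side), and a cron-driven job can then trigger the compaction explicitly through HBaseAdmin; the table name is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class ManualMajorCompaction {
      public static void main(String[] args) throws Exception {
        // Server-side: hbase.hregion.majorcompaction = 0 in hbase-site.xml
        // turns off the periodic major compactions.
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.majorCompact("webtable"); // invoked from cron at off-peak times
        admin.close();
      }
    }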
Block Cache
•  Acts as very large, in-memory distributed cache
•  Assigned a large part of the JVM heap in the RegionServer process,
   see hfile.block.cache.size
•  Optimizes reads on subsequent columns and rows
•  Has priority to keep “in-memory” column families in cache
    if (inMemory) {
      this.priority = BlockPriority.MEMORY;
    } else {
      this.priority = BlockPriority.SINGLE;
    }

•  Cache needs to be used properly to get best read performance
    •  Turn off block cache on operations that cause large churn
    •  Store related data “close” to each other
•  Uses LRU cache with threaded (asynchronous) evictions based on
  priorities
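
Two of the tips above map directly onto the client API: bypass the cache for churn-heavy scans, and mark small, hot families as in-memory so they get the higher priority shown in the snippet. A small sketch; the family name is illustrative.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BlockCacheHints {
      public static void main(String[] args) {
        // Full-table or MapReduce scans would evict useful blocks; skip the cache.
        Scan scan = new Scan();
        scan.setCacheBlocks(false);

        // Small, frequently read family gets the in-memory cache priority.
        HColumnDescriptor meta = new HColumnDescriptor(Bytes.toBytes("meta"));
        meta.setInMemory(true);
      }
    }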
Region Splits
•  Triggered by configured maximum file size of any
 store file
   •  This is checked directly after the compaction call to
     ensure store files are actually approaching the
     threshold
•  Runs as asynchronous thread on RegionServer
•  Splits are fast and nearly instant
    •  Reference files point to original region files and
       represent each half of the split
•  Compactions take care of splitting original files
 into new region directories
Auto Sharding
Auto Sharding and Distribution

•  Unit of scalability in HBase is the Region
•  Sorted, contiguous range of rows
•  Spread “randomly” across RegionServer
•  Moved around for load balancing and failover
•  Split automatically or manually to scale with
   growing data
•  Capacity is solely a factor of cluster nodes vs.
   regions per node
Column Family vs. Column

•  Use only a few column families
    •  Many families cause many files that need to stay open
       per region, plus class overhead per family
•  Best used when logical separation between data
   and meta columns
•  Sorting per family can be used to convey
   application logic or access pattern
Storage Separation

•  Column Families allow for separation of data
    •  Used by Columnar Databases for fast analytical
       queries, but on column level only
    •  Allows different or no compression depending on the
       content type
•  Segregate information based on access pattern
•  Data is stored in one or more storage files, called
 HFiles
Column Families
SCHEMA DESIGN
Key Cardinality

•  The best performance is gained by using the row
   key for lookups
•  Time range bound reads can skip store files
   •  So can Bloom Filters
•  Selecting column families reduces the amount of
   data to be scanned
•  Pure value based filtering is a full table scan
   •  Filters often are too, but reduce network traffic
Fold, Store, and Shift

•  Logical layout does not match physical one
•  All values are stored with the full coordinates,
   including: Row Key, Column Family, Column
   Qualifier, and Timestamp
•  Folds columns into “row per column”
•  NULLs are cost free as nothing is stored
•  Versions are multiple “rows” in folded table
Key/Table Design

•  Crucial to gain best performance
    •  Why do I need to know? Well, you also need to know
       that an RDBMS only works well when its columns are
       indexed and the query plan is OK
•  Absence of secondary indexes forces use of row
   key or column name sorting
•  Transfer multiple indexes into one
   •  Generates a large table -> good, since that fits the
      architecture and spreads across the cluster
DDI

•  Stands for Denormalization, Duplication and
   Intelligent Keys
•  Needed to overcome shortcomings of
   architecture
•  Denormalization -> Replacement for JOINs
•  Duplication -> Design for reads
•  Intelligent Keys -> Implement indexing and
   sorting, optimize reads
Pre-materialize Everything

•  Achieve one read per customer request if
   possible
•  Otherwise keep at lowest number
•  Reads between 10ms (cache miss) and 1ms
   (cache hit)
•  Use MapReduce to compute exacts in batch
•  Store and merge updates live
•  Use incrementColumnValue


            Motto: “Design for Reads”
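
A minimal sketch of such a live-merged update using the incrementColumnValue call mentioned above; the table, family, and qualifier names are made up.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CounterIncrement {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "counters");
        long current = table.incrementColumnValue(
            Bytes.toBytes("com.cloudera.www"), // row: one domain
            Bytes.toBytes("daily"),            // family: time-range bucket
            Bytes.toBytes("hits"),             // qualifier: metric name
            1L);                               // atomic read-modify-write
        System.out.println("hits = " + current);
        table.close();
      }
    }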
Tall-Narrow vs. Flat-Wide Tables

•  Rows do not split
    •  Might end up with one row per region
•  Same storage footprint
•  Put more details into the row key
    •  Sometimes dummy column only
    •  Make use of partial key scans
•  Tall with Scans, Wide with Gets
    •  Atomicity only on row level
•  Example: Large graphs, stored as adjacency
 matrix
Example: Mail Inbox

        <userId> : <colfam> : <messageId> : <timestamp> : <email-message>

12345   :   data   :   5fc38314-e290-ae5da5fc375d       :   1307097848   :   "Hi Lars, ..."
12345   :   data   :   725aae5f-d72e-f90f3f070419       :   1307099848   :   "Welcome, and ..."
12345   :   data   :   cc6775b3-f249-c6dd2b1a7467       :   1307101848   :   "To Whom It ..."
12345   :   data   :   dcbee495-6d5e-6ed48124632c       :   1307103848   :   "Hi, how are ..."


                                               or
12345-5fc38314-e290-ae5da5fc375d         :   data   :   :   1307097848   :   "Hi Lars, ..."
12345-725aae5f-d72e-f90f3f070419         :   data   :   :   1307099848   :   "Welcome, and ..."
12345-cc6775b3-f249-c6dd2b1a7467         :   data   :   :   1307101848   :   "To Whom It ..."
12345-dcbee495-6d5e-6ed48124632c         :   data   :   :   1307103848   :   "Hi, how are ..."


                           ➜   Same Storage Requirements
Partial Key Scans
Key                                          Description

<userId>                                     Scan over all messages
                                             for a given user ID

<userId>-<date>                              Scan over all messages
                                             on a given date for the
                                             given user ID

<userId>-<date>-<messageId>                  Scan over all parts of a
                                             message for a given user
                                             ID and date

<userId>-<date>-<messageId>-<attachmentId>   Scan over all
                                             attachments of a
                                             message for a given user
                                             ID and date

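Because rows sort lexicographically, each of these partial key scans is just a Scan bounded by a start and stop row. A sketch for the tall-narrow inbox layout, assuming a <userId>-<date>-<messageId> key format with the date encoded as YYYYMMDD.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PartialKeyScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "inbox");
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("12345-20121105")); // inclusive
        scan.setStopRow(Bytes.toBytes("12345-20121106"));  // exclusive
        ResultScanner scanner = table.getScanner(scan);
        for (Result row : scanner) {
          System.out.println(Bytes.toString(row.getRow()));
        }
        scanner.close();
        table.close();
      }
    }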
Sequential Keys

  <timestamp><more key>: {CF: {CQ: {TS : Val}}}

•  Hotspotting on Regions: bad!
•  Instead do one of the following:
     •  Salting
      •  Prefix   <timestamp> with distributed value
      •  Binning or bucketing rows across regions
   •  Key field swap/promotion
       •  Move <more key> before the timestamp (see OpenTSDB
          later)
   •  Randomization
       •  Move <timestamp> out of key
Salting

•    Prefix row keys to gain spread
•    Use well known or numbered prefixes
•    Use modulo to spread across servers
•    Ensures related data stays close together for
     subsequent scanning or MapReduce processing
      0_rowkey1, 1_rowkey2, 2_rowkey3
      0_rowkey4, 1_rowkey5, 2_rowkey6

•  Sorted by prefix first
    0_rowkey1
    0_rowkey4
    1_rowkey2
    1_rowkey5
    …
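
A minimal salting sketch along those lines; the bucket count and the hash-modulo choice are assumptions, and any reader has to fan out over all prefixes to see the complete data.

    import org.apache.hadoop.hbase.util.Bytes;

    public class SaltedKey {
      static final int BUCKETS = 3; // roughly the number of RegionServers

      static byte[] salt(String rowKey) {
        int prefix = Math.abs(rowKey.hashCode()) % BUCKETS; // numbered prefix
        return Bytes.toBytes(prefix + "_" + rowKey);
      }

      public static void main(String[] args) {
        System.out.println(Bytes.toString(salt("rowkey1")));
        System.out.println(Bytes.toString(salt("rowkey2")));
      }
    }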
Hashing vs. Sequential Keys

•  Uses hashes for best spread
    •  Use for example MD5 to be able to recreate key
      •  Key = MD5(customerID)
   •  Counterproductive for range scans


•  Use sequential keys for locality
    •  Makes use of block caches
    •  May tax one server overly, may be avoided by salting
       or splitting regions while keeping them small
Key Design
Key Design Summary

•  Based on access pattern, either use sequential or
   random keys
•  Often a combination of both is needed
   •  Overcome architectural limitations
•  Neither is necessarily bad
    •  Use bulk import for sequential keys and reads
    •  Random keys are good for random access patterns
Example: Facebook Insights

•  > 20B Events per Day
•  1M Counter Updates per Second
    •  100 Nodes Cluster
    •  10K OPS per Node
•  ”Like” button triggers AJAX request
•  Event written to log file
•  Counters are at most 30 minutes stale for the website owner


     Web ➜ Scribe ➜ Ptail ➜ Puma ➜ HBase
HBase Counters

•  Store counters per Domain and per URL
    •  Leverage HBase increment (atomic read-modify-
       write) feature
•  Each row is one specific Domain or URL
•  The columns are the counters for specific metrics
•  Column families are used to group counters by
 time range
   •  Set time-to-live on CF level to auto-expire counters by
     age to save space, e.g., 2 weeks on “Daily Counters”
     family
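
A sketch of that TTL idea, assuming a “daily” counter family that expires after two weeks next to an unbounded “monthly” family; the names are illustrative.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CounterFamilies {
      public static void main(String[] args) {
        HTableDescriptor desc = new HTableDescriptor("counters");
        HColumnDescriptor daily = new HColumnDescriptor(Bytes.toBytes("daily"));
        daily.setTimeToLive(14 * 24 * 60 * 60); // TTL in seconds: two weeks
        desc.addFamily(daily);
        desc.addFamily(new HColumnDescriptor(Bytes.toBytes("monthly"))); // no TTL
        // desc would then be passed to HBaseAdmin.createTable(desc)
      }
    }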
Key Design
•  Reversed Domains
    •  Examples: “com.cloudera.www”, “com.cloudera.blog”
    •  Helps keep pages of the same site close together, as HBase
       efficiently scans blocks of sorted keys
•  Domain Row Key =
 MD5(Reversed Domain) + Reversed Domain
   •  Leading MD5 hash spreads keys randomly across all regions
      for load balancing reasons
   •  Only hashing the domain groups per site (and per subdomain
      if needed)
•  URL Row Key =
 MD5(Reversed Domain) + Reversed Domain + URL ID
   •  Unique ID per URL already available, make use of it
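
A hedged sketch of this key construction with plain java.security MD5; it is only the layout described above, not the actual Facebook code.

    import java.security.MessageDigest;

    import org.apache.hadoop.hbase.util.Bytes;

    public class InsightsKeys {
      // MD5(reversed domain) + reversed domain
      static byte[] domainRowKey(String reversedDomain) throws Exception {
        byte[] hash = MessageDigest.getInstance("MD5")
            .digest(Bytes.toBytes(reversedDomain));
        return Bytes.add(hash, Bytes.toBytes(reversedDomain));
      }

      // MD5(reversed domain) + reversed domain + URL ID
      static byte[] urlRowKey(String reversedDomain, long urlId) throws Exception {
        return Bytes.add(domainRowKey(reversedDomain), Bytes.toBytes(urlId));
      }

      public static void main(String[] args) throws Exception {
        System.out.println(Bytes.toStringBinary(urlRowKey("com.cloudera.www", 42L)));
      }
    }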
Insights Schema
Summary

•  Design for Use-Case
    •  Read, Write, or Both?
•  Avoid Hotspotting
•  Consider using IDs instead of full text
•  Leverage Column Family to HFile relation
•  Shift details to appropriate position
    •  Composite Keys
    •  Column Qualifiers
Summary (cont.)

•  Schema design is a combination of
    •  Designing the keys (row and column)
    •  Segregating data into column families
    •  Choosing compression and block sizes
•  Similar techniques are needed to scale most
 systems
   •  Add indexes, partition data, consistent hashing
•  Denormalization, Duplication, and Intelligent
 Keys (DDI)
CLUSTER SIZING
Competing Resources

 •  Reads and Writes compete for the same low-
  level resources
  •  Disk (HDFS) and Network I/O
  •  RPC Handlers and Threads
 •  Otherwise they exercise completely separate
  code paths
Memory Sharing

 •  By default every RegionServer divides its
  memory (i.e. the configured maximum heap) into
  •  40% for in-memory stores (write ops)
  •  20% for block caching (read ops)
  •  remaining space (here 40%) goes towards usual Java
    heap usage (objects etc.)
 •  The shares of memory need to be tweaked
Reads

 •  Locate and route request to appropriate region
  server
  •  Client caches information for faster lookups ➜
    consider prefetching option for fast warmups
 •  Eliminate store files if possible using time
    ranges or Bloom filter
 •  Try block cache, if block is missing then load
    from disk
Block Cache

 •  Use exported metrics to see effectiveness of
  block cache
  •  Check fill and eviction rate, as well as hit
    ratios ➜ random reads are not ideal
 •  Tweak up or down as needed, but watch
  overall heap usage
 •  You absolutely need the block cache

  •  Set to 10% at least for short term benefits
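
The cache share is the hfile.block.cache.size fraction of the RegionServer heap; a small sketch of what a given setting means in bytes (the 0.2 value mirrors the common default of that era).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BlockCacheShare {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setFloat("hfile.block.cache.size", 0.2f);   // fraction of the heap
        long heapBytes = Runtime.getRuntime().maxMemory();
        long cacheBytes = (long) (heapBytes
            * conf.getFloat("hfile.block.cache.size", 0.2f));
        System.out.println("approx. block cache: " + (cacheBytes >> 20) + " MB");
      }
    }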
Writes

 •  The cluster size is often determined by the write
   performance
 •  Log-structured merge trees, in short:
  •  Store mutations in the in-memory store and write-
    ahead log
  •  Flush out aggregated, sorted maps at specified
    threshold - or - when under pressure
  •  Discard logs with no pending edits
  •  Perform regular compactions of store files
Write Performance

 •  There are many factors to the overall write
  performance of a cluster
  •  Key Distribution ➜ Avoid region hotspot
  •  Handlers ➜ Do not pile up too early
  •  Write-ahead log ➜ Bottleneck #1
  •  Compactions ➜ Badly tuned can cause ever
    increasing background noise
Write-Ahead Log

 •  Currently only one per region server
    •  Shared across all stores (i.e. column families)
    •  Synchronized on file append calls
 •  Work being done on mitigating this
    •  WAL Compression
    •  Multiple WAL’s per region server ➜ Start more than
       one region server per node?
Write-Ahead Log (cont.)

 •  Size set to 95% of default block size
    •  64MB or 128MB, but check config!
 •  Keep number low to reduce recovery time
    •  Limit set to 32, but can be increased
 •  Increase size of logs - and/or - increase the
    number of logs before blocking
 •  Compute number based on fill distribution and
    flush frequencies
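
A sketch of the knobs behind these bullets, using the property names common in this HBase generation; verify them against your version and set them in hbase-site.xml on the RegionServers.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.regionserver.maxlogs", 48);                 // default 32
        conf.setLong("hbase.regionserver.hlog.blocksize", 128L << 20); // log base size
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.95f); // roll at 95%
      }
    }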
Write-Ahead Log (cont.)

 •  Writes are synchronized across all stores
  •  A large cell in one family can stop all writes
    of another
  •  In this case the RPC handlers go binary, i.e. they
    either all work or all block
 •  Can be bypassed on writes, but means no
   real durability and no replication
  •  Maybe use coprocessor to restore
    dependent data sets (preWALRestore)
Flushes

 •  Every mutation call (put, delete etc.) causes a
    check for a flush
 •  If threshold is met, flush file to disk and
    schedule a compaction
  •  Try to compact newly flushed files quickly
 •  The compaction returns - if necessary - where
  a region should be split
Compaction Storms

 •  Premature flushing because of # of logs or
  memory pressure
  •  Files will be smaller than the configured flush size
 •  The background compactions are hard at work
  merging small flush files into the existing,
  larger store files
  •  Rewrite hundreds of MB over and over
Dependencies

   •  Flushes happen across all stores/column
      families, even if just one triggers it
   •  The flush size is compared to the size of
      all stores combined
    •  Many column families dilute the size
    •  Example: 55MB + 5MB + 4MB
Some Numbers

     •  Typical write performance of HDFS is
         35-50MB/s

                       Cell Size        OPS
                          0.5MB        70-100
                          100KB        350-500
                           10KB        3500-5000 ??
                            1KB        35000-50000 ????

  This is way too high in practice - Contention!
Some More Numbers

   •  Under real world conditions the rate is
    less, more like 15MB/s or less
    •  Thread contention is cause for massive slow
      down

          Cell Size        OPS
             0.5MB          10
             100KB         100
              10KB         800
               1KB        6000
Notes

 •  Compute memstore sizes based on number of
    regions x flush size
 •  Compute number of logs to keep based on fill
    and flush rate
 •  Ultimately the capacity is driven by
  •  Java Heap
  •  Region Count and Size
  •  Key Distribution
Cheat Sheet #1

 •  Ensure you have enough or large enough
    write-ahead logs
 •  Ensure you do not oversubscribe available
    memstore space
 •  Ensure to set flush size large enough but not
    too large
 •  Check write-ahead log usage carefully
Cheat Sheet #2

 •  Enable compression to store more data per
    node
 •  Tweak compaction algorithm to peg
    background I/O at some level
 •  Consider putting uneven column families in
    separate tables
 •  Check metrics carefully for block cache,
    memstore, and all queues
Example

 •  Java Xmx heap at 10GB
 •  Memstore share at 40% (default)
  •  10GB Heap x 0.4 = 4GB
 •  Desired flush size at 128MB
  •  4GB / 128MB = 32 regions max!
 •  For a WAL size of 128MB x 0.95
  •  4GB / (128MB x 0.95) = ~33 partially uncommitted logs to
     keep around
 •  Region size at 20GB
  •  20GB x 32 regions = 640GB raw storage used
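
The same example as a small calculation sketch, using the slide's numbers.

    public class SizingExample {
      public static void main(String[] args) {
        double heapGB = 10.0;             // -Xmx10g
        double memstoreShare = 0.4;       // default upper limit
        double flushSizeMB = 128.0;       // desired flush size
        double walSizeMB = 128.0 * 0.95;  // effective WAL size

        double memstoreMB = heapGB * 1024 * memstoreShare;  // 4096 MB
        double maxRegions = memstoreMB / flushSizeMB;       // 32 regions
        double logsToKeep = memstoreMB / walSizeMB;         // ~33 logs
        double rawStorageGB = 20.0 * maxRegions;            // 640 GB at 20GB/region

        System.out.printf("regions=%.0f, WALs=%.1f, raw=%.0fGB%n",
            maxRegions, logsToKeep, rawStorageGB);
      }
    }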
Questions?
