Five major tips to maximize performance on
a 200+ SQL HBase/Phoenix cluster
Masayasu “Mas” Suzuki
Shinji Nagasaka
Takanari Tamesue
Sony Corporation
2
Who we are, and why we chose HBase/Phoenix
 We are DevOps members from
Sony’s News Suite team
– http://socialife.sony.net/
 HBase/Phoenix was chosen
because of
– Scalability,
– SQL compatibility, and
– secondary indexing support
3
Our use case
Internet
Sony News Suite Server Architecture
Application Server
HBase
Phoenix
EventHandler
HTTP
SQL (READ)
SQL (WRITE)
Fetcher
HTTP
End user
Outside content
providers
Main use case is caching content temporarily
4
Basic test design
 Query response time is measured as shown in red
 Query read/write ratio is 6 to 1
 12 different types of queries using eight separate indexes
Application Server
HBase
Phoenix
EventHandler
SQL (READ)
SQL (WRITE)
Fetcher
5
Table schema
 A table with 1.2 billion records was created
 Each record is around 1.0 KB
– Raw data is around 1.7 KB each
– Gzip is used to compress column pt, hence the total comes out to around 1.0 KB
 id is the primary key
– Two MD5 hashed values are concatenated to create id
• Example: df461a2bda4002aaaa8117d4e43ee737_cfcd208495d565ef66e7dff9f98764da
id CHAR(65) | ai VARCHAR | ao VARCHAR | b DECIMAL | c DECIMAL | cl CHAR(5) | lg CHAR(2) | lw DECIMAL | u DECIMAL | pt VARBINARY
1adf… | TR | DSATE... | 82122... | 9071.9 | true | es | 823.199 | 0.1243 | (binary)
9d0a… | FB | Adad... | 54011… | 122114.5 | true | ja | 23.632 | 5.22 | (binary)
c5ae... | KW | 4 of … | 20011… | 3253.55 | false | fr | 0.343 | 2.77 | (binary)
ea4a... | AB | p7mj… | 67691… | 8901.0 | true | en | 76.21 | 23.11 | (binary)
(1.2 billion records)
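The length arithmetic of the id column can be sketched as follows. This is a minimal illustration, not the team's actual code: `make_id` and its inputs are hypothetical, chosen only to show that two MD5 hex digests joined by an underscore yield exactly 65 characters.

```python
import hashlib

def make_id(part_a: str, part_b: str) -> str:
    # Two 32-char MD5 hex digests joined by "_" -> 32 + 1 + 32 = 65 chars,
    # matching the CHAR(65) id column above.
    h1 = hashlib.md5(part_a.encode("utf-8")).hexdigest()
    h2 = hashlib.md5(part_b.encode("utf-8")).hexdigest()
    return h1 + "_" + h2

rid = make_id("some-user-key", "0")
print(len(rid))  # 65
```

Note that the second half of the deck's example id is the MD5 digest of the string "0".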
6
Split points
 Because it was impossible to store all 1.2 billion records on one single node, we
manually split the tables by defining the split points
 Split points were set so that each divided block, or region file, would be nearly equal
in size
– This was possible because we knew
a. the exact range of our primary keys, and
b. the hashed values of our primary keys would be uniformly distributed
CREATE TABLE IF NOT EXISTS TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC (
id CHAR(65) NOT NULL,
ai VARCHAR, ao VARCHAR,
b DECIMAL, c DECIMAL,
cl CHAR(5), lg CHAR(2),
lw DECIMAL, u DECIMAL,
p_t VARBINARY,
CONSTRAINT my_pk PRIMARY KEY ( id )
) COMPRESSION='LZ4', VERSIONS='1', MAX_FILESIZE=26843545600 SPLIT ON
( '0148','0290','03d8','0520','0668','07b0','08f8','0a40', …,'fef8' );
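Because the hashed keys are uniformly distributed, the split points above can be generated by dividing the 4-hex-digit prefix space into equal steps. A sketch (our reconstruction of how such a list could be computed, not the authors' script) reproduces the published values:

```python
def split_points(num_regions: int, width: int = 4) -> list:
    # Evenly spaced hex prefixes over the 16**width key space; valid only
    # when keys (here, MD5-hashed ids) are uniformly distributed.
    space = 16 ** width                      # 65,536 for 4 hex digits
    step = round(space / num_regions)        # 328 for 200 regions
    return [format(i * step, "0{}x".format(width))
            for i in range(1, num_regions)]  # 199 points -> 200 regions

pts = split_points(200)
print(pts[0], pts[1], pts[2], pts[-1])  # 0148 0290 03d8 fef8
```

The first and last values match the DDL above ('0148', '0290', '03d8', …, 'fef8').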
7
Distribution of region files per RegionServer
 If split points can be evenly set, then data allocation can be evened out
[Chart: total data size per node across the 200 RegionServers; different colors denote different tables]
8
Queries
 Ratio of R/W queries is 6 to 1
 Sample READ queries
SELECT id FROM TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC WHERE b=228343239 AND
cl='false';
SELECT id FROM TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC WHERE ai='AB' AND cl='false'
AND c>0 AND c<1417648603068;
 Sample WRITE query
/* Written as a Java PreparedStatement */
UPSERT INTO TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC (id,p_t,c,lw,u) VALUES (?,?,?,?,?)
 Constants (ex. 228343239, the value of b in the first READ query) were randomly
generated to simulate the current production environment
9
Queries – Details
Query No. | Name | Read/Write | Percentage | Description | Randomly generated part
1 | Id | READ | 25% | Search using primary key | Id (primary key)
2 | IdCnt | READ | 10% | Count using primary key | Id (primary key)
3 | IdOr | READ | 10% | Search using “OR” of ten primary keys | Id (primary key)
4 | AiAoU | READ | 5% | Search using columns Ai, Ao, and U | Ai, Ao, U
5 | AiCCl | READ | 5% | Search using columns Ai, C, and Cl | Ai, C, Cl
6 | AiLwCl | READ | 5% | Search using columns Ai, Lw, and Cl | Ai, Lw, Cl
7 | AiULg | READ | 5% | Search using columns Ai, U, and Lg | Ai, U, Lg
8 | BCl | READ | 5% | Search using columns B and Cl | B, Cl
9 | BLg | READ | 5% | Search using columns B and Lg | B, Lg
10 | CLg | READ | 5% | Search using columns C and Lg | C, Lg
11 | LwLg | READ | 5% | Search using columns Lw and Lg | Lw, Lg
12 | PtCLwU | WRITE | 15% | Upsert binary data Pt and upsert columns C, Lw, and U | Id (primary key), Pt, C, Lw, U
10
Secondary indexes
 The following eight indexes were created
 The eight indexes are designed to be orthogonal
 Split points were manually set for the index tables so that each region file would be
similar in size
Index No. | Name | Index type | Description
1 | AiAoU | CHAR/CHAR/DECIMAL | For use in search using columns Ai, Ao, and U
2 | AiCCl | CHAR/DECIMAL/CHAR | For use in search using columns Ai, C, and Cl
3 | AiLwCl | CHAR/DECIMAL/CHAR | For use in search using columns Ai, Lw, and Cl
4 | AiULg | CHAR/DECIMAL/CHAR | For use in search using columns Ai, U, and Lg
5 | BCl | DECIMAL/CHAR | For use in search using columns B and Cl
6 | BLg | DECIMAL/CHAR | For use in search using columns B and Lg
7 | CLg | DECIMAL/CHAR | For use in search using columns C and Lg
8 | LwLg | DECIMAL/CHAR | For use in search using columns Lw and Lg
11
Test environment
HBase cluster layout:
 100 clients (100 x c4.xlarge): Client 1 … Client 100
 3 Zookeepers (3 x m3.xlarge): Zookeeper 1, 2, 3
 3 HMasters (3 x m3.xlarge): Main, Secondary, and Secondary Backup
 200 RegionServers (199 x r3.xlarge, 1 x c4.8xlarge), each with its own disk
– The c4.8xlarge node houses SYSTEM.CATALOG (metadata for the Phoenix plug-in)
12
Tools used
 Tools were especially useful for
– Pinpointing the bottlenecks in resource usage
– Determining when and where an error occurred within the cluster
– Verifying the effect of solutions applied
– Managing multiple nodes seamlessly without having to handle them separately
Tools used | Purpose
(logo) | Analysis of resource usage per AWS instance (ex. CPU usage, network traffic, disk utilization, Java stats)
(logo) | Analysis of status of HBase and Hadoop layers (ex. number of regions, store files, requests)
(logo) | Analysis of distribution of each HBase table over the cluster (ex. number and size of region files per node)
Fabric | Remotely control multiple nodes via SSH
13
Performance test apparatus & results
 Test apparatus
Number of records | 1.2 billion records (1KB each)
Number of indexes | 8 orthogonal indexes
Servers | 3 Zookeepers (Zookeeper 3.4.5, m3.xlarge x 3); 3 HMaster servers (hadoop 2.5.0, hbase 0.98.6, Phoenix 4.3.0, m3.xlarge x 3); 200 RegionServers (hadoop 2.5.0, hbase 0.98.6, Phoenix 4.3.0, r3.xlarge x 199, c4.8xlarge x 1)
Clients | 100 x c4.xlarge
 Test results
Throughput | 51,053 queries/sec
Response time (average) | 46 ms
14
Cost
 Total: $325,236 (per year, “All Upfront” pricing)
 This is a preliminary setup!
– There is room for further spec/cost optimization
Node Type | Instance Type | Quantity | Cost (per year)
HBase: ZooKeeper | m3.xlarge | 3 | $4,284
Hadoop: Name Node / HBase: HMaster | m3.xlarge | 3 | $4,284
Hadoop: Data Node / HBase: RegionServer | r3.xlarge | 199 | $307,455
HBase: RegionServer (for housing meta table SYSTEM.CATALOG) | c4.8xlarge | 1 | $9,213
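The per-node figures sum exactly to the headline number, which a quick consistency check confirms:

```python
# Yearly "All Upfront" costs per node group, taken from the table above.
yearly_costs = {
    "ZooKeeper (m3.xlarge x 3)": 4284,
    "NameNode/HMaster (m3.xlarge x 3)": 4284,
    "RegionServer (r3.xlarge x 199)": 307455,
    "RegionServer for SYSTEM.CATALOG (c4.8xlarge x 1)": 9213,
}
total = sum(yearly_costs.values())
print(total)  # 325236 -> matches the $325,236 yearly total
```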
15
Five major tips to maximize performance
using HBase/Phoenix
Ordered by effectiveness
16
Tips 1 – Use SQL hint clause when using an index
 Response without hint clause vs. response with hint clause
[Charts: response time [ms] per query type (Id, IdCnt, IdOr, AiAoU, AiCCl, AiLwCl, AiULg, BCl, BLg, CLg, LwLg, PtCLwU) over elapsed time [hours], grouped as queries using the primary key, queries using an index, and the write query. With the hint clause, performance of queries using an index improved by 6 times.]
17
Tips 1 – Use SQL hint clause when using an index
 Major possible cause (yet to be verified)
– When an index is used, an extra RPC is issued to verify the latest metadata/statistics
– Using a hint clause may avoid this RPC (still a hypothesis)
 Other possible solutions
– Changing “UPDATE_CACHE_FREQUENCY” (available from Phoenix 4.7) may
resolve this issue (we have not tried this yet)
From Phoenix website …
https://phoenix.apache.org/#Altering
“When a SQL statement is run which references a table, Phoenix will by default check with the server to
ensure it has the most up to date table metadata and statistics. This RPC may not be necessary when you
know in advance that the structure of a table may never change.”
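For reference, Phoenix's index hint goes immediately after the SELECT keyword, in the form /*+ INDEX(table index) */. A sketch of a hinted version of the BCl query (the index name "BCL" is an assumption based on the secondary-index list on slide 10):

```python
# Build a hinted Phoenix query string. The hint forces the named index
# instead of letting the optimizer choose; "BCL" as the index name is an
# assumption, not confirmed by the deck.
table = "TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC"
hinted = (
    "SELECT /*+ INDEX({t} BCL) */ id "
    "FROM {t} WHERE b = ? AND cl = 'false'"
).format(t=table)
print(hinted)
```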
18
Tips 2 – Use memory aggressively
 In early stages of our testing, disk utilization and iowait of RegionServers
were extremely high
[Charts: RegionServer disk utilization and iowait, both spiking during the test period]
19
Tips 2 – Use memory aggressively
 The issue was most critical during major compaction and index creation
 Initially, we thought we had enough memory
– Total size of data (including all tables/indexes and mirrored data in the Hadoop layer)
• More than 1,360 GB
– Total available memory combined on RegionServers (at the time)
• Around 1,500 GB (m3.2xlarge (30 GiB) x 50 nodes)
 But this left very little margin for computation-intensive tasks
 We decided to allocate memory of at least 3 times the data size for added
protection and performance (this has worked thus far)
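The 3x rule of thumb can be checked against both cluster configurations (treating GiB ≈ GB for a rough comparison; the node counts and sizes are from the slides above):

```python
# Rule of thumb from the deck: provision total RegionServer memory of at
# least 3x the data size (data includes all tables/indexes and the
# HDFS-mirrored copies).
data_gb = 1360                  # "more than 1,360 GB" of data
needed_gb = 3 * data_gb         # 4,080 GB target
initial_gb = 30 * 50            # m3.2xlarge (30 GiB) x 50 nodes = 1,500
final_gb = 30.5 * 200           # r3.xlarge (30.5 GiB) x 200 nodes = 6,100
print(initial_gb >= needed_gb, final_gb >= needed_gb)  # False True
```

The initial 50-node setup barely covered the data once, while the final 200-node setup clears the 3x target with headroom.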
20
Tips 3 – Manually split the region files but don't over-split them
 A single table is too big to be placed and managed by one single node
 We wanted to know whether we should split in a finer or a coarser way
21
Tips 3 – Manually split the region files but don't over-split them
 Comparison between 200 and 4002 split points
– 200 RegionServers were used in both cases
[Charts: throughput [queries/sec] and response time [ms] over elapsed time [h], SplitPoint = 200 vs. SplitPoint = 4002. Takeaway: don't over-split region files.]
22
Tips 4 – Scale out instead of scaling up
 Comparison of RegionServers running c3.4xlarge and c3.8xlarge
– c3.8xlarge has twice the spec of c3.4xlarge
– Combined computing power of “100 nodes of c3.4xlarge” equals that of “50 nodes
of c3.8xlarge”, but the former scores better
[Charts: throughput [queries/sec] and response time [ms] over elapsed time [h], c3.4xlarge x 100 vs. c3.8xlarge x 50. Takeaway: scale out!]
23
Tips 5 – Avoid running power-intensive tasks simultaneously
 For example, do not run major compaction together with index creation
 Also, the performance impact of major compaction can be lessened by
running it in smaller units
[Charts: major compaction for nine tables done simultaneously vs. separately: throughput rose from 26,142 to 29,980 queries/sec (13% increase in volume processed); average response time fell from 91 ms to 80 ms (9% faster).]
24
Items of very limited or no success
25
First and foremost
 Please understand that these are lessons learned through our tests in
our environment
 Any one or all of these items may still prove useful in your environment
26
Items of limited success – Changing GC algorithm
 RegionServers' GC algorithm was changed and tested
 Performance is more even with G1
 Performance of G1 is, on average, 2% lower than CMS
[Charts: throughput [queries/sec] and response time [ms] over elapsed time [h], CMS vs. G1.]
27
Items of limited success – Changing Java heap size
 RegionServers' Java heap size was changed and tested
 Maximum physical memory is 30.5 GiB (r3.xlarge)
 When the heap was set to 26.0 GB, the system crashed after five hours
[Charts: throughput [queries/sec] and response time [ms] over elapsed time [h], JavaHeap = 20.5 GB vs. 23.0 GB vs. 26.0 GB.]
28
Items of limited success – Changing the disk filesystem
 RegionServers' disk filesystem was changed and tested
 The newer xfs tends to score slightly better when compared at its highs
[Charts: throughput [queries/sec] and response time [ms] over elapsed time [h], ext4 vs. xfs.]
29
Closing comments
30
Five major tips to maximize performance on HBase/Phoenix
Ordered by effectiveness (most effective at the very top)
Tips 1. Use a SQL hint clause when using a secondary index
– An extra RPC is issued when the client runs a SQL statement that uses a secondary index
– Using a SQL hint clause can mitigate this
– From Ver. 4.7, changing “UPDATE_CACHE_FREQUENCY” may also work (we have yet to test this)
Tips 2. Use memory aggressively
– A memory-rich node should be selected for use in RegionServers so as to minimize disk access
Tips 3. Manually split the region files if you can, but never over-split them
Tips 4. Scale out instead of scaling up
– More nodes running in parallel yield better results than fewer but more powerful nodes
Tips 5. Avoid running power-intensive tasks simultaneously
– As an example, running major compaction and index creation simultaneously should be avoided
31
Special Thanks
 Takafumi Suzuki
– Thank you very much for the countless and invaluable discussions
– We owe the success of this project to you!
 Thank you very much!
“Sony” is a registered trademark of Sony Corporation.
Names of Sony products and services are the registered trademarks and/or trademarks of Sony Corporation or its Group companies.
Other company names and product names are the registered trademarks and/or trademarks of the respective companies.
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...OnePlan Solutions
 
Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...aditisharan08
 
5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdfWave PLM
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Modelsaagamshah0812
 
Introduction to Decentralized Applications (dApps)
Introduction to Decentralized Applications (dApps)Introduction to Decentralized Applications (dApps)
Introduction to Decentralized Applications (dApps)Intelisync
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxComplianceQuest1
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)OPEN KNOWLEDGE GmbH
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...soniya singh
 

Último (20)

The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdfThe Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - Infographic
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.ppt
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...
 
Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...
 
5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf5 Signs You Need a Fashion PLM Software.pdf
5 Signs You Need a Fashion PLM Software.pdf
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
Introduction to Decentralized Applications (dApps)
Introduction to Decentralized Applications (dApps)Introduction to Decentralized Applications (dApps)
Introduction to Decentralized Applications (dApps)
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docx
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 

Five major tips to maximize performance on a 200+ SQL HBase/Phoenix cluster

  • 1. Five major tips to maximize performance on a 200+ SQL HBase/Phoenix cluster Masayasu “Mas” Suzuki Shinji Nagasaka Takanari Tamesue Sony Corporation
  • 2. 2 Who we are, and why we chose HBase/Phoenix  We are DevOps members from Sony’s News Suite team – http://socialife.sony.net/  HBase/Phoenix was chosen because of – Scalability, – SQL compatibility, and – secondary indexing support
  • 3. 3 Our use case Internet Sony News Suite Server Architecture Application Server HBase Phoenix EventHandler HTTP SQL (READ) SQL (WRITE) Fetcher HTTP End user Outside content providers Main use case is caching contents temporarily
  • 4. 4 Basic test design  Query response time is measured as shown in red  Query read/write ratio is 6 to 1  12 different types of queries using eight separate indexes Application Server HBase Phoenix EventHandler SQL (READ) SQL (WRITE) Fetcher
  • 5. 5 Table schema — A table with 1.2 billion records was created. Each record is around 1.0 KB (raw data is around 1.7 KB each; gzip is used to compress column pt, which brings each record down to around 1.0 KB). id is the primary key: two MD5-hashed values are concatenated to create it, e.g. df461a2bda4002aaaa8117d4e43ee737_cfcd208495d565ef66e7dff9f98764da. Columns: id CHAR(65), ai VARCHAR, ao VARCHAR, b DECIMAL, c DECIMAL, cl CHAR(5), lg CHAR(2), lw DECIMAL, u DECIMAL, pt VARBINARY. (The slide also shows four sample rows of the 1.2 billion records.)
  • 6. 6 Split points — Because it was impossible to store all 1.2 billion records on one single node, we manually split the tables by defining split points. Split points were set so that each divided block, or region file, would be nearly equal in size; this was possible because we knew (a) the exact range of our primary keys, and (b) that the hashed values of our primary keys would be uniformly distributed. CREATE TABLE IF NOT EXISTS TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC ( id CHAR(65) NOT NULL, ai VARCHAR, ao VARCHAR, b DECIMAL, c DECIMAL, cl CHAR(5), lg CHAR(2), lw DECIMAL, u DECIMAL, p_t VARBINARY, CONSTRAINT my_pk PRIMARY KEY ( id ) ) COMPRESSION='LZ4', VERSIONS='1', MAX_FILESIZE=26843545600 SPLIT ON ( '0148','0290','03d8','0520','0668','07b0','08f8','0a40', …,'fef8' );
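Because the MD5-hashed keys are uniformly distributed, split points can simply be computed by dividing the hex prefix space into equal intervals. The sketch below illustrates the idea; the function name is ours, and the exact rounding differs slightly from the values on the slide ('0148', '0290', …), which appear to have been rounded up.

```python
# Sketch: generate evenly spaced split points over an MD5-hashed key space.
# Uniformly distributed hashes mean equal hex-prefix intervals yield
# near-equal region sizes. 200-way split / 2-byte prefixes as on the slide.

def make_split_points(num_regions: int, prefix_bytes: int = 2) -> list[str]:
    """Return num_regions - 1 hex-prefix split points."""
    space = 16 ** (prefix_bytes * 2)          # e.g. 0x0000 .. 0xFFFF
    width = prefix_bytes * 2
    return [format(space * i // num_regions, f"0{width}x")
            for i in range(1, num_regions)]

points = make_split_points(200)
print(points[:3], points[-1])   # first few and last split points
```

Fixed-width lowercase hex keeps the points lexicographically ordered, which is what SPLIT ON expects.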
  • 7. 7 Distribution of region files per RegionServer — If split points can be evenly set, then data allocation can be evened out. (Chart: total size per node across the 200 RegionServers; different colors denote different tables.)
  • 8. 8 Queries — The ratio of READ to WRITE queries is 6 to 1. Sample READ queries: SELECT id FROM TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC WHERE b=228343239 AND cl='false'; SELECT id FROM TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC WHERE ai='AB' AND cl='false' AND c>0 AND c<1417648603068; Sample WRITE query (written as a Java PreparedStatement): UPSERT INTO TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC (id,p_t,c,lw,u) VALUES (?,?,?,?,?). Constants (e.g. 228343239, the value of b, in the first example) were randomly generated to simulate the current production environment.
  • 9. 9 Queries – Details (query no. / name / read-write / percentage generated / description; the randomly generated part follows in parentheses): 1 Id READ 25% search using primary key (Id); 2 IdCnt READ 10% count using primary key (Id); 3 IdOr READ 10% search using "OR" of ten primary keys (Id); 4 AiAoU READ 5% search using columns Ai, Ao, and U (Ai, Ao, U); 5 AiCCl READ 5% search using columns Ai, C, and Cl (Ai, C, Cl); 6 AiLwCl READ 5% search using columns Ai, Lw, and Cl (Ai, Lw, Cl); 7 AiULg READ 5% search using columns Ai, U, and Lg (Ai, U, Lg); 8 BCl READ 5% search using columns B and Cl (B, Cl); 9 BLg READ 5% search using columns B and Lg (B, Lg); 10 CLg READ 5% search using columns C and Lg (C, Lg); 11 LwLg READ 5% search using columns Lw and Lg (Lw, Lg); 12 PtCLwU WRITE 15% upsert binary data Pt and upsert columns C, Lw, and U (Id, Pt, C, Lw, U).
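The percentages above can be reproduced with a simple weighted random choice; the read share (85%) over the write share (15%) gives roughly the stated 6-to-1 read/write ratio. A minimal sketch of such a load generator (names are ours):

```python
import random

# Sketch: reproduce the slide's query mix via weighted random choice.
# Weights are the "percentage generated" column from slide 9.
QUERY_MIX = {
    "Id": 25, "IdCnt": 10, "IdOr": 10,
    "AiAoU": 5, "AiCCl": 5, "AiLwCl": 5, "AiULg": 5,
    "BCl": 5, "BLg": 5, "CLg": 5, "LwLg": 5,
    "PtCLwU": 15,   # the only WRITE query
}

def next_query(rng: random.Random) -> str:
    names, weights = zip(*QUERY_MIX.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
sample = [next_query(rng) for _ in range(10_000)]
writes = sample.count("PtCLwU")
print(f"read/write ratio ~ {(len(sample) - writes) / writes:.1f} : 1")
```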
  • 10. 10 Secondary indexes  Following eight indexes were created  Eight indexes are designed to be orthogonal indexes  Split points were manually set for index tables so that each region file would be similar in size Index No. Name Index type Description 1 AiAoU CHAR/CHAR/DECIMAL For use in search using columns Ai, Ao, and U 2 AiCCl CHAR/DECIMAL/CHAR For use in search using columns Ai, C, and Cl 3 AiLwCl CHAR/DECIMAL/CHAR For use in search using columns Ai, Lw, and Cl 4 AiULg CHAR/DECIMAL/CHAR For use in search using columns Ai, U, and Lg 5 BCl DECIMAL/CHAR For use in search using columns B and Cl 6 BLg DECIMAL/CHAR For use in search using columns B and Lg 7 CLg DECIMAL/CHAR For use in search using columns C and Lg 8 LwLg DECIMAL/CHAR For use in search using columns Lw and Lg
  • 11. 11 Test environment — HBase cluster: 3 Zookeepers (3 x m3.xlarge), 3 HMasters (main, secondary, secondary backup; 3 x m3.xlarge), 200 RegionServers with local disks (199 x r3.xlarge, plus 1 x c4.8xlarge housing SYSTEM.CATALOG, the metadata table for the Phoenix plug-in), and 100 clients (100 x c4.xlarge).
  • 12. 12 Tools used — Tools were especially useful for pinpointing bottlenecks in resource usage, determining when and where an error occurred within the cluster, verifying the effect of solutions applied, and managing multiple nodes seamlessly without having to manage them separately. Tools (names appear as logos on the slide) and their purposes: analysis of resource usage per AWS instance (e.g. CPU usage, network traffic, disk utilization, Java stats); analysis of the status of the HBase and Hadoop layers (e.g. number of regions, store files, requests); analysis of the distribution of each HBase table over the cluster (e.g. number and size of region files per node); and Fabric, for remotely controlling multiple nodes via SSH.
  • 13. 13 Performance test apparatus & results — Apparatus: 1.2 billion records (1 KB each), 8 orthogonal indexes; servers: 3 Zookeepers (Zookeeper 3.4.5, m3.xlarge x 3), 3 HMaster servers (Hadoop 2.5.0, HBase 0.98.6, Phoenix 4.3.0, m3.xlarge x 3), 200 RegionServers (Hadoop 2.5.0, HBase 0.98.6, Phoenix 4.3.0, r3.xlarge x 199, c4.8xlarge x 1); clients: 100 x c4.xlarge. Results: 51,053 queries/sec with an average response time of 46 ms.
  • 14. 14 Cost — Total: $325,236 per year ("All Upfront" pricing). This is a preliminary setup; there is room for further spec/cost optimization. Breakdown (node type / instance type / quantity / cost per year): HBase ZooKeeper, m3.xlarge x 3, $4,284; Hadoop NameNode + HBase HMaster, m3.xlarge x 3, $4,284; Hadoop DataNode + HBase RegionServer, r3.xlarge x 199, $307,455; HBase RegionServer housing the meta table SYSTEM.CATALOG, c4.8xlarge x 1, $9,213.
  • 15. 15 Five major tips to maximize performance using HBase/Phoenix Ordered by effectiveness
  • 16. 16 Tips 1 – Use a SQL hint clause when using an index. (Charts: per-query response time in ms — queries using the primary key, the write query, and queries using an index — over 2.5 hours of elapsed time, without and with the hint clause.) Performance improved by 6 times.
  • 17. 17 Tips 1 – Use SQL hint clause when using an index  Major possible cause (yet to be verified) – When the index is used, an extra RPC is issued to verify latest meta/statistics – Using hint clause may reduce this RPC (still hypothesis)  Other possible solutions – Changing “UPDATE_CACHE_FREQUENCY” (available from Phoenix 4.7) may resolve this issue (we have not tried this yet) From Phoenix website … https://phoenix.apache.org/#Altering “When a SQL statement is run which references a table, Phoenix will by default check with the server to ensure it has the most up to date table metadata and statistics. This RPC may not be necessary when you know in advance that the structure of a table may never change.”
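For reference, a Phoenix hint is written in a /*+ ... */ comment immediately after SELECT. The sketch below builds a hinted version of the sample query from slide 8; the index name BCL is illustrative (use whatever name your CREATE INDEX statement declared), and the helper function is ours.

```python
# Sketch: add a Phoenix INDEX hint to the sample query from slide 8.
# The hint goes right after SELECT: /*+ INDEX(<table> <index>) */
TABLE = "TBL_1200M_IDX_LZ4_VER1_SPLT200_PTBIN_INT2DEC"

def with_index_hint(table: str, index: str, body: str) -> str:
    return f"SELECT /*+ INDEX({table} {index}) */ {body}"

sql = with_index_hint(
    TABLE, "BCL",
    f"id FROM {TABLE} WHERE b=228343239 AND cl='false'")
print(sql)
```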
  • 18. 18 Tips 2 – Use memory aggressively. In the early stages of our testing, disk utilization and iowait on the RegionServers were extremely high. (Charts: disk utilization and iowait over the test period.)
  • 19. 19 Tips 2 – Use memory aggressively. The issue was most critical during major compaction and index creation. Initially, we thought we had enough memory: the total size of data (including all tables/indexes and mirrored data in the Hadoop layer) was more than 1,360 GB, while the total available memory combined on the RegionServers (then) was around 1,500 GB (m3.2xlarge (30 GiB) x 50 nodes). But this left very little margin for computation-intensive tasks, so we decided to allocate memory at least 3 times the size of data for added protection and performance (this has worked thus far).
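The sizing rule above reduces to simple arithmetic — aggregate RegionServer memory should be at least 3x total data size. A small sketch using the slide's numbers (the function is ours):

```python
import math

# Sketch: slide 19's sizing rule — provision at least `headroom` times
# the total data size in aggregate RegionServer memory.
def required_nodes(data_gb: float, mem_per_node_gb: float,
                   headroom: float = 3.0) -> int:
    return math.ceil(data_gb * headroom / mem_per_node_gb)

# 1,360 GB of data on nodes with ~30.5 GiB each (r3.xlarge-class):
print(required_nodes(1360, 30.5))
```

With 3x headroom this calls for roughly 130+ such nodes, which is consistent with the 200-RegionServer cluster the team ended up running.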
  • 20. 20 Tips 3 – Manually split the region file, but don't over-split. A single table is too big to be placed and managed by one single node, and we wanted to know whether we should split in a finer or a coarser way.
  • 21. 21 Tips 3 – Manually split the region file, but don't over-split. Comparison between 200 and 4002 split points, with 200 RegionServers used in both cases. (Charts: volume processed in queries/sec and response time in ms over 16 hours of elapsed time.) Don't over-split region files.
  • 22. 22 Tips 4 – Scale out instead of scaling up. Comparison of RegionServers running c3.4xlarge and c3.8xlarge: c3.8xlarge is twice the spec of c3.4xlarge, so the combined computing power of 100 c3.4xlarge nodes equals that of 50 c3.8xlarge nodes, yet the former scores better. (Charts: volume processed in queries/sec and response time in ms over 8 hours of elapsed time.) Scale out!
  • 23. 23 Tips 5 – Avoid running power-intensive tasks simultaneously. For example, do not run major compaction together with index creation. Also, the performance impact of major compaction can be lessened by running it in smaller units: running major compaction for nine tables separately rather than simultaneously gave a 13% increase in volume processed (26,142 → 29,980 queries/sec) and 9% faster response times (91 ms → 80 ms).
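"Smaller units" here means one major_compact per table, executed in sequence. A minimal dry-run sketch of the idea — the table names are hypothetical, and in production each command line would be piped into `hbase shell`, waiting for the compaction queue to drain before the next table:

```python
# Sketch: run major compaction one table at a time instead of all at once
# (slide 23: sequential compaction gave +13% throughput, 9% faster reads).
TABLES = [f"TBL_{i}" for i in range(1, 10)]   # nine hypothetical tables

def compact_commands(tables):
    # One hbase-shell `major_compact` command per table; issuing them
    # sequentially keeps at most one compaction's I/O load on the cluster.
    return [f"major_compact '{t}'" for t in tables]

for cmd in compact_commands(TABLES):
    print(cmd)
```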
  • 24. 24 Items of very limited or no success
  • 25. 25 First and foremost  Please understand that these are lessons learned through our tests on our environment  Any one or all of these items may prove useful in your environment
  • 26. 26 Items of limited success – Changing the GC algorithm. The RegionServers' GC algorithm was changed and tested: performance is more even with G1, but on average 2% lower than with CMS. (Charts: volume processed in queries/sec and response time in ms over 10 hours of elapsed time, CMS vs. G1.)
  • 27. 27 Items of limited success – Changing the Java heap size. The RegionServers' Java heap size was changed and tested; maximum physical memory is 30.5 GiB (r3.xlarge). When the heap was set to 26.0 GB, the system crashed after five hours. (Charts: volume processed and response time over 16 hours for Java heaps of 20.5 GB, 23.0 GB, and 26.0 GB.)
  • 28. 28 Items of limited success – Changing the disk file system. The RegionServers' disk file system was changed and tested; the newer xfs tends to score slightly better than ext4 when compared at its highs. (Charts: volume processed and response time over 16 hours, ext4 vs. xfs.)
  • 30. 30 Five major tips to maximize performance on HBase/Phoenix, ordered by effectiveness (most effective at the top). Tips 1. Use a SQL hint clause when using a secondary index — an extra RPC is issued when the client runs a SQL statement that uses a secondary index; a SQL hint clause can mitigate this, and from Ver. 4.7, changing "UPDATE_CACHE_FREQUENCY" may also work (we have yet to test this). Tips 2. Use memory aggressively — memory-rich nodes should be selected for the RegionServers so as to minimize disk access. Tips 3. Manually split the region file if you can, but never over-split. Tips 4. Scale out instead of scaling up — more nodes running in parallel yield better results than fewer but more powerful nodes. Tips 5. Avoid running power-intensive tasks simultaneously — as an example, running major compaction and index creation at the same time should be avoided.
  • 31. 31 Special Thanks  Takafumi Suzuki – Thank you very much for the countless and invaluable discussions – We owe the success of this project to you!  Thank you very much!
  • 32. “Sony” is a registered trademark of Sony Corporation. Names of Sony products and services are the registered trademarks and/or trademarks of Sony Corporation or its Group companies. Other company names and product names are the registered trademarks and/or trademarks of the respective companies.