Vinay Raj Malla
Oracle DBA
Miracle Software Systems
Oracle Automatic Data Optimization
using Heat maps
Our Agenda
• Automatic Data Optimization: Why Bother?
• ADO: Space-Based vs. Time-Based
• Compression: Saving Space and I/O
• How Hot is My Data: Heat Maps
• ADO: How It All Works
• ADO Policies: Results
• Q&A
Questions for You!
• What do you do when your laptop's storage is full?
• What do you do when you need to access some data regularly and retrieve it very quickly?
• What do you do with data you never access but still need to keep?
• What do you do when one of your disk drives is full?
Automatic Data Optimization: Why Bother?
• Flash storage is everywhere and getting cheaper
• Disk storage has never been cheaper
• Abundant CPU resources make compression cheaper
• Compressed data is accessed in fewer physical I/Os
• Proper compression usage takes planning …
• Just because we can compress doesn’t mean we should
• For now, flash-based storage is still somewhat precious
• Improper compression negatively affects application performance
• … but compression can mean extreme performance!
Automatic Data Optimization (ADO)
• Moves or compresses data based on observed usage
patterns.
• Leverages heat maps to determine how often data has been
accessed
• Tracks exactly how data has been utilized
• DML versus query
• Random access versus table scan
• Usage patterns can be tracked at tablespace, segment, and
even row level
Covering the Space-Time Continuum
An ADO policy can be based on either space or time
• Space-based:
• Moves segment between tablespaces in different storage
tiers
• Movement is based on “fullness” of initial tablespace
• Time-based:
• Compresses data more tightly within same object
• Data compressible at three levels:
• ROW STORAGE (row level within a segment)
• SEGMENT (one database object at segment level)
• GROUP (multiple database objects)
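These three scopes map directly onto the ILM policy clause. As a rough sketch (the SALES table here is hypothetical, and exact syntax may vary by release):

ALTER TABLE sales ILM ADD POLICY
ROW STORE COMPRESS ADVANCED          -- row level within a segment
ROW AFTER 30 DAYS OF NO MODIFICATION;

ALTER TABLE sales ILM ADD POLICY
COMPRESS FOR QUERY HIGH              -- one object at segment level
SEGMENT AFTER 90 DAYS OF NO ACCESS;

ALTER TABLE sales ILM ADD POLICY
COMPRESS FOR ARCHIVE HIGH            -- table plus dependent objects
GROUP AFTER 180 DAYS OF NO ACCESS;

Note that row-level policies support only the NO MODIFICATION condition, since row-level heat tracking is based on modification times.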
Space-Based Migration: An Example
Segment resides initially in a tablespace (+FLASH) on Tier 0 storage:
• Server-based flash (e.g. a FusionIO card in a PCIe slot)
• Exadata flash-based grid disk
• SSD
Tier 0 storage space monitoring detects either:
• Tablespace space in use exceeds 90%, or
• Tablespace free space drops below 25%
ADO then automatically moves the entire segment to another tablespace (+COLD) on Tier 1 / 2 storage.
Time-Based Compression: An Example
Initially, heavy DML and query activity:
Leave data uncompressed
After 3 days of more limited access:
Enable ADVANCED compression
After 30 days of no modifications:
Enable HCC QUERY LOW* compression
After 90 days of no access:
Enable HCC ARCHIVE HIGH* compression
* Requires Exadata, ZFS Appliance, or Pillar Axiom storage
ADO: A Simple Example
• Data source: 5M row fact table, 10 years of history
• Data will be loaded into two tables:
• AP.ROLLON_PARTED contains most recent three months’ data
• AP.RANDOMIZED_PARTED partitioned for historical data storage:
P1_HOT : 12/2013 and later
P2_WARM: 01/2012 – 11/2013
P3_COOL: 2009 – 2011
P4_COLD: Before 2009
• ADO Goals:
• Limit usage of tablespace ADO_HOT_DATA on flash storage
• Apply appropriate compression as data grows “colder” over
time
Leveraging Storage Tiers
+FLASH (Tier 0: SSD or flash): ADO_HOT_DATA, ADO_HOT_IDX
+WARM (Tier 1: fast HDD; SAS, outer cylinders): ADO_WARM_DATA, ADO_WARM_IDX
+COOL (Tier 1: slower HDD; SAS, inner cylinders): ADO_COOL_DATA, ADO_COOL_IDX
+COLD (Tier 2: slow HDD; SATA): ADO_COLD_DATA, ADO_COLD_IDX
Activating Heat Mapping
Heat mapping must be activated before any ADO policies are created:
SQL> ALTER SYSTEM SET heat_map = ON;
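Once heat mapping is enabled, the tracked access statistics can be queried directly; for example (the AP schema is the one used in the examples that follow):

SELECT object_name, subobject_name,
       segment_write_time, full_scan, lookup_scan
FROM   dba_heat_map_segment
WHERE  owner = 'AP';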
Hottest Tablespaces (from DBA_HEATMAP_TOP_TABLESPACES)
Alloc
Tablespace Segment Space Earliest Earliest Earliest
Name Count (MB) Write Time FTS Time Lookup Time
--------------- ------- ------- ------------------- ------------------- -------------------
ADO_COLD_DATA 1 57 2014-02-07.12:14:13 2014-02-07.00:07:55
ADO_COLD_IDX 0 1
ADO_COOL_DATA 1 41 2014-02-07.12:14:13 2014-02-07.00:07:55
ADO_COOL_IDX 0 1
ADO_HOT_DATA 1 9 2014-02-07.00:07:55 2014-02-07.12:14:13 2014-02-07.00:07:55
ADO_HOT_IDX 0 1 2014-02-07.18:03:35 2014-02-07.18:03:35
ADO_WARM_DATA 3 50 2014-02-07.12:14:13 2014-02-07.00:07:55
ADO_WARM_IDX 0 1
AP_DATA 4 131 2014-02-07.12:14:13
AP_IDX 7 130 2014-02-07.00:07:55 2014-02-07.12:14:13
Recently-Touched Segments (from DBA_HEAT_MAP_SEG_HISTOGRAM)
Object Segment Segment Segment
Owner Object Name Subobject Name Last Touched Wrtn To? FTS? LKP?
------ -------------------- -------------------- ------------------- -------- -------- --------
AP RANDOMIZED_PARTED P1_HOT 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P2_WARM 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P2_WARM 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P3_COOL 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P3_COOL 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P4_COLD 2014-02-07 11:27:05 NO YES NO
AP RANDOMIZED_PARTED P4_COLD 2014-02-07 11:27:05 NO YES NO
AP ROLLON_PARTED 2014-02-07 11:27:05 NO YES NO
AP ROLLON_PARTED_PK 2014-02-07 11:27:05 NO NO YES
Sample Objects
Non-Partitioned:
CREATE TABLE ap.rollon_parted (
key_id NUMBER(8)
,key_date DATE
,key_desc VARCHAR2(32)
,key_sts NUMBER(2) NOT NULL
)
TABLESPACE ado_hot_data
NOLOGGING
PARALLEL 4
ILM ADD POLICY
TIER TO ado_warm_data;
Partitioned:
CREATE TABLE ap.randomized_parted (
key_id NUMBER(8)
,key_date DATE
,key_desc VARCHAR2(32)
,key_sts NUMBER(2) NOT NULL
)
PARTITION BY RANGE(key_date) (
PARTITION P4_COLD
VALUES LESS THAN (TO_DATE('2009-01-01','yyyy-mm-dd'))
TABLESPACE ado_cold_data
,PARTITION P3_COOL
VALUES LESS THAN (TO_DATE('2012-01-01','yyyy-mm-dd'))
TABLESPACE ado_cool_data
,PARTITION P2_WARM
VALUES LESS THAN (TO_DATE('2013-12-01','yyyy-mm-dd'))
TABLESPACE ado_warm_data
,PARTITION P1_HOT
VALUES LESS THAN (MAXVALUE)
TABLESPACE ado_hot_data NOCOMPRESS)
NOLOGGING PARALLEL 4;
Implementing ADO Time-Based Policies
ALTER TABLE ap.randomized_parted
MODIFY PARTITION p1_hot
ILM ADD POLICY
ROW STORE
COMPRESS ADVANCED
ROW
AFTER 180 DAYS OF NO MODIFICATION;
ALTER TABLE ap.randomized_parted
MODIFY PARTITION p2_warm
ILM ADD POLICY
COMPRESS FOR QUERY LOW
SEGMENT
AFTER 300 DAYS OF NO ACCESS;
ALTER TABLE ap.randomized_parted
MODIFY PARTITION p3_cool
ILM ADD POLICY
COMPRESS FOR QUERY HIGH
SEGMENT
AFTER 600 DAYS OF NO ACCESS;
ALTER TABLE ap.randomized_parted
MODIFY PARTITION p4_cold
ILM ADD POLICY
COMPRESS FOR ARCHIVE HIGH
SEGMENT
AFTER 900 DAYS OF NO ACCESS;
Row-level compression policy
Segment-level compression policies,
but leveraging HCC compression
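Once created, the policies can be verified against the ILM data dictionary views; for example:

SELECT policy_name, policy_type, enabled
FROM   dba_ilmpolicies;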
Testing ADO Policies: Global Policy Attributes
BEGIN
DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
parameter => DBMS_ILM_ADMIN.POLICY_TIME
,value =>
DBMS_ILM_ADMIN.ILM_POLICY_IN_SECONDS);
DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
parameter => DBMS_ILM_ADMIN.EXECUTION_INTERVAL
,value => 3);
DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
parameter => DBMS_ILM_ADMIN.TBS_PERCENT_USED
,value => 90);
DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
parameter => DBMS_ILM_ADMIN.TBS_PERCENT_FREE
,value => 25);
END;
/
Adjusting ADO policy attributes for faster test cycles:
• POLICY_TIME set to ILM_POLICY_IN_SECONDS treats days like seconds
• EXECUTION_INTERVAL sets the ILM execution interval
• TBS_PERCENT_USED and TBS_PERCENT_FREE override the tablespace fullness defaults
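The resulting ILM settings can be confirmed afterwards with a quick query:

SELECT name, value
FROM   dba_ilmparameters;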
Testing ADO Policies: Executing Workloads
DECLARE
cur_max NUMBER := 0;
new_max NUMBER := 0;
ctr NUMBER := 0;
BEGIN
SELECT MAX(key_id)
INTO cur_max
FROM ap.rollon_parted;
cur_max := cur_max + 1;
new_max := cur_max + 50000;
FOR ctr IN cur_max..new_max
LOOP
INSERT INTO ap.rollon_parted
VALUES(ctr
,(TO_DATE('12/31/2013','mm/dd/yyyy')
+ DBMS_RANDOM.VALUE(1,90))
,'NEWHOTROW' ,'N');
IF MOD(ctr, 5000) = 0 THEN
COMMIT;
END IF;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
Inserts 50,000 new rows
into AP.ROLLON_PARTED
to test space-based ADO
policy
Testing ADO Policies: Executing Workloads
SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
FROM ap.randomized_parted
WHERE key_date BETWEEN TO_DATE('2011-03-15','YYYY-MM-DD')
AND TO_DATE('2011-04-14','YYYY-MM-DD')
AND ROWNUM < 10001;
Touches 10,000 rows in
“colder” table partition
SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
FROM ap.randomized_parted
WHERE key_date BETWEEN TO_DATE('2012-03-15','YYYY-MM-DD')
AND TO_DATE('2013-11-30','YYYY-MM-DD')
AND ROWNUM < 10001;
Touches 10,000 rows in
“warmer” table partition
UPDATE ap.randomized_parted
SET key_desc = 'Modified *** MODIFIED!!!'
WHERE key_desc <> 'Modified *** MODIFIED!!!'
AND key_date BETWEEN TO_DATE('2013-12-01','YYYY-MM-DD')
AND TO_DATE('2013-12-31','YYYY-MM-DD')
AND ROWNUM < 50001;
COMMIT;
Updates 50,000 rows in
“hottest” table partition
Testing ADO Policies: Forcing ILM Evaluation
DECLARE
tid NUMBER;
BEGIN
DBMS_ILM.EXECUTE_ILM(owner => 'AP', object_name => 'ROLLON_PARTED', task_id => tid);
DBMS_ILM.EXECUTE_ILM(owner => 'AP', object_name => 'RANDOMIZED_PARTED', task_id => tid);
END;
/
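The status of the submitted ILM tasks, and of the jobs they spawn, can then be tracked from the ILM views:

SELECT task_id, state, start_time, completion_time
FROM   dba_ilmtasks
ORDER BY start_time DESC;

SELECT task_id, job_name, job_state
FROM   dba_ilmresults;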
ILM Objects Most Recently Evaluated
(from DBA_ILMEVALUATIONDETAILS)
Task ILM Object Subobject
ID Policy Owner Object Name Name Object Type Reason Chosen Job Name
----- ------ ------ -------------------- ---------- --------------- ------------------------------ ----------
2905 P150 AP RANDOMIZED_PARTED P4_COLD TABLE PARTITION SELECTED FOR EXECUTION ILMJOB582
2905 P149 AP RANDOMIZED_PARTED P3_COOL TABLE PARTITION SELECTED FOR EXECUTION ILMJOB580
2905 P147 AP RANDOMIZED_PARTED P1_HOT TABLE PARTITION SELECTED FOR EXECUTION ILMJOB576
2905 P148 AP RANDOMIZED_PARTED P2_WARM TABLE PARTITION SELECTED FOR EXECUTION ILMJOB578
. . . . .
2895 P146 AP ROLLON_PARTED TABLE POLICY DISABLED
2889 P150 AP RANDOMIZED_PARTED P4_COLD TABLE PARTITION PRECONDITION NOT SATISFIED
2889 P148 AP RANDOMIZED_PARTED P2_WARM TABLE PARTITION PRECONDITION NOT SATISFIED
2889 P149 AP RANDOMIZED_PARTED P3_COOL TABLE PARTITION PRECONDITION NOT SATISFIED
2889 P147 AP RANDOMIZED_PARTED P1_HOT TABLE PARTITION SELECTED FOR EXECUTION ILMJOB564
2888 P146 AP ROLLON_PARTED TABLE SELECTED FOR EXECUTION ILMJOB562
ADO Space-Based Policies: Results
Initially, tablespace ADO_HOT_DATA (sized at just 25MB) is almost 50% empty:
                   Free
Tablespace        Space
Name               (MB)
--------------- -------
ADO_COLD_DATA        87
ADO_COOL_DATA       135
ADO_HOT_DATA         14
ADO_WARM_DATA       151
Data grows in table AP.ROLLON_PARTED, so ADO_HOT_DATA falls below the 25% ILM free space limit:
                   Free
Tablespace        Space
Name               (MB)
--------------- -------
ADO_COLD_DATA        87
ADO_COOL_DATA       135
ADO_HOT_DATA          2
ADO_WARM_DATA       151
The next ILM evaluation detects the change, so AP.ROLLON_PARTED moves automatically to ADO_WARM_DATA:
                   Free
Tablespace        Space
Name               (MB)
--------------- -------
ADO_COLD_DATA        87
ADO_COOL_DATA       135
ADO_HOT_DATA         16
ADO_WARM_DATA       127
ADO Time-Based Policies: Results
Table partition sizes and compression before the ADO policies went into effect (from DBA_TAB_PARTITIONS):
                                                          Avg
Partition  Compression  Compress                  # of    Row
Name       Level        For           Row Count  Blocks   Len
---------- ------------ ------------ ----------- -------- -----
P1_HOT     DISABLED                       39,801      232    34
P2_WARM    DISABLED                      958,717    5,255    34
P3_COOL    DISABLED                    1,500,592    8,185    34
P4_COLD    DISABLED                    2,500,890   13,587    34
… and a few moments after the test workloads were applied and an ILM refresh was requested:
                                                          Avg
Partition  Compression  Compress                  # of    Row
Name       Level        For           Row Count  Blocks   Len
---------- ------------ ------------ ----------- -------- -----
P1_HOT     DISABLED                       39,801      232    34
P2_WARM    ENABLED                       958,717    4,837    34
P3_COOL    ENABLED      QUERY HIGH     1,500,592    1,603    34
P4_COLD    ENABLED      ARCHIVE HIGH   2,500,890    1,879    34
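The partition-level compression status shown above can be reproduced with a query along these lines:

SELECT partition_name, compression, compress_for, num_rows, blocks
FROM   dba_tab_partitions
WHERE  table_owner = 'AP'
AND    table_name  = 'RANDOMIZED_PARTED';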
ADO Compression: Performance Impacts
Sample Queries:
Query 1 (oldest, coldest data):
SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
FROM ap.randomized_parted
WHERE key_date BETWEEN TO_DATE('2005-03-15','YYYY-MM-DD')
AND TO_DATE('2008-04-14','YYYY-MM-DD');
Query 2 (cool data):
SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
FROM ap.randomized_parted
WHERE key_date BETWEEN TO_DATE('2010-01-15','YYYY-MM-DD')
AND TO_DATE('2011-10-15','YYYY-MM-DD');
Query 3 (warm data):
SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
FROM ap.randomized_parted
WHERE key_date BETWEEN TO_DATE('2012-01-15','YYYY-MM-DD')
AND TO_DATE('2013-10-15','YYYY-MM-DD');
                 Before        After
Statistic        Compression   Compression
---------------  -----------   -----------
Query 1:
Execution Time   3.61s         3.29s
Physical Reads   5,185         4,783
Optimizer Cost   398           366
Query 2:
Execution Time   4.46s         1.88s
Physical Reads   8,074         1,605
Optimizer Cost   619           124
Query 3:
Execution Time   5.77s         4.23s
Physical Reads   13,451        2,116
Optimizer Cost   1,027         147
Over To You …
How Hot Is My Data? Leveraging Automatic Data Optimization (ADO) Features in Oracle Database 12c for Dramatic Performance Improvements
If you have any questions or comments, feel free to:
 E-mail me at vnairaj@gmail.com or vmalla1@miraclesoft.com
 Connect with me on LinkedIn (Vinay Raj Malla)
Thank You For Your Kind Attention
References
• http://www.oracle.com/technetwork/database/automatic-data-optimization-wp-12c-1896120.pdf
• http://gavinsoorma.com/2013/09/oracle-12c-new-feature-heat-map-and-automatic-data-optimization/
• https://jimczuprynski.files.wordpress.com
Best Angular 17 Classroom & Online training - Naresh ITmanoharjgpsolutions
 
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesAmazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesKrzysztofKkol1
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsJean Silva
 
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...kalichargn70th171
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...OnePlan Solutions
 
eSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolseSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolsosttopstonverter
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slidesvaideheekore1
 
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4j
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4jGraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4j
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4jNeo4j
 
Zer0con 2024 final share short version.pdf
Zer0con 2024 final share short version.pdfZer0con 2024 final share short version.pdf
Zer0con 2024 final share short version.pdfmaor17
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...OnePlan Solutions
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptxVinzoCenzo
 
SAM Training Session - How to use EXCEL ?
SAM Training Session - How to use EXCEL ?SAM Training Session - How to use EXCEL ?
SAM Training Session - How to use EXCEL ?Alexandre Beguel
 
Understanding Plagiarism: Causes, Consequences and Prevention.pptx
Understanding Plagiarism: Causes, Consequences and Prevention.pptxUnderstanding Plagiarism: Causes, Consequences and Prevention.pptx
Understanding Plagiarism: Causes, Consequences and Prevention.pptxSasikiranMarri
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingShane Coughlan
 
Mastering Project Planning with Microsoft Project 2016.pptx
Mastering Project Planning with Microsoft Project 2016.pptxMastering Project Planning with Microsoft Project 2016.pptx
Mastering Project Planning with Microsoft Project 2016.pptxAS Design & AST.
 
Advantages of Cargo Cloud Solutions.pptx
Advantages of Cargo Cloud Solutions.pptxAdvantages of Cargo Cloud Solutions.pptx
Advantages of Cargo Cloud Solutions.pptxRTS corp
 
Understanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM ArchitectureUnderstanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM Architecturerahul_net
 

Último (20)

VictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News UpdateVictoriaMetrics Q1 Meet Up '24 - Community & News Update
VictoriaMetrics Q1 Meet Up '24 - Community & News Update
 
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
2024-04-09 - From Complexity to Clarity - AWS Summit AMS.pdf
 
What’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 UpdatesWhat’s New in VictoriaMetrics: Q1 2024 Updates
What’s New in VictoriaMetrics: Q1 2024 Updates
 
Best Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh ITBest Angular 17 Classroom & Online training - Naresh IT
Best Angular 17 Classroom & Online training - Naresh IT
 
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilitiesAmazon Bedrock in Action - presentation of the Bedrock's capabilities
Amazon Bedrock in Action - presentation of the Bedrock's capabilities
 
Strategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero resultsStrategies for using alternative queries to mitigate zero results
Strategies for using alternative queries to mitigate zero results
 
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...
The Ultimate Guide to Performance Testing in Low-Code, No-Code Environments (...
 
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
Tech Tuesday Slides - Introduction to Project Management with OnePlan's Work ...
 
eSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration toolseSoftTools IMAP Backup Software and migration tools
eSoftTools IMAP Backup Software and migration tools
 
Introduction to Firebase Workshop Slides
Introduction to Firebase Workshop SlidesIntroduction to Firebase Workshop Slides
Introduction to Firebase Workshop Slides
 
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4j
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4jGraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4j
GraphSummit Madrid - Product Vision and Roadmap - Luis Salvador Neo4j
 
Zer0con 2024 final share short version.pdf
Zer0con 2024 final share short version.pdfZer0con 2024 final share short version.pdf
Zer0con 2024 final share short version.pdf
 
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
Revolutionizing the Digital Transformation Office - Leveraging OnePlan’s AI a...
 
Osi security architecture in network.pptx
Osi security architecture in network.pptxOsi security architecture in network.pptx
Osi security architecture in network.pptx
 
SAM Training Session - How to use EXCEL ?
SAM Training Session - How to use EXCEL ?SAM Training Session - How to use EXCEL ?
SAM Training Session - How to use EXCEL ?
 
Understanding Plagiarism: Causes, Consequences and Prevention.pptx
Understanding Plagiarism: Causes, Consequences and Prevention.pptxUnderstanding Plagiarism: Causes, Consequences and Prevention.pptx
Understanding Plagiarism: Causes, Consequences and Prevention.pptx
 
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full RecordingOpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
OpenChain AI Study Group - Europe and Asia Recap - 2024-04-11 - Full Recording
 
Mastering Project Planning with Microsoft Project 2016.pptx
Mastering Project Planning with Microsoft Project 2016.pptxMastering Project Planning with Microsoft Project 2016.pptx
Mastering Project Planning with Microsoft Project 2016.pptx
 
Advantages of Cargo Cloud Solutions.pptx
Advantages of Cargo Cloud Solutions.pptxAdvantages of Cargo Cloud Solutions.pptx
Advantages of Cargo Cloud Solutions.pptx
 
Understanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM ArchitectureUnderstanding Flamingo - DeepMind's VLM Architecture
Understanding Flamingo - DeepMind's VLM Architecture
 

Aioug vizag ado_12c_aug20

Space-Based Migration: An Example
A segment initially resides in a tablespace on Tier 0 storage:
• Server-based flash (e.g. a FusionIO card in a PCIe slot)
• Exadata flash-based grid disk
• SSD
Tier 0 storage space monitoring then detects either:
• Tablespace space in use exceeds 90%, or
• Tablespace free space drops below 25%
ADO automatically moves the entire segment to another tablespace on Tier 1 / Tier 2 storage (in the slide's diagram, from the +FLASH disk group to +COLD).
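The movement described above is driven by a segment-level tiering policy; a minimal sketch using the same `ILM ADD POLICY TIER TO` clause that appears later in the deck (the table name here is the deck's demo table):

```sql
-- Sketch: tier a segment off flash when its tablespace crosses the
-- database-wide fullness thresholds (90% used / 25% free by default).
ALTER TABLE ap.rollon_parted
  ILM ADD POLICY TIER TO ado_warm_data;
```

No time condition is needed: space-based tiering fires on tablespace fullness, not on access history.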
Time-Based Compression: An Example
• Initially, under heavy DML and query activity: leave data uncompressed
• After 3 days of more limited access: enable ADVANCED compression
• After 30 days of no modifications: enable HCC QUERY LOW* compression
• After 90 days of no access: enable HCC ARCHIVE HIGH* compression
* Requires Exadata, ZFS Appliance, or Pillar Axiom storage
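The lifecycle above maps directly onto ILM policy clauses. A sketch using the policy syntax shown later in the deck, with the day counts from this example; `ap.sales_hist` is a hypothetical table, and the HCC levels assume Exadata/ZFS/Pillar storage:

```sql
-- Sketch: one ILM policy per lifecycle stage (illustrative only).
ALTER TABLE ap.sales_hist ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 3 DAYS OF NO MODIFICATION;

ALTER TABLE ap.sales_hist ILM ADD POLICY
  COMPRESS FOR QUERY LOW SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;

ALTER TABLE ap.sales_hist ILM ADD POLICY
  COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 90 DAYS OF NO ACCESS;
```

Each later stage compresses more tightly, so the policies form an escalating ladder as the data cools.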
ADO: A Simple Example
• Data source: 5M-row fact table, 10 years of history
• Data will be loaded into two tables:
  • AP.ROLLON_PARTED contains the most recent three months' data
  • AP.RANDOMIZED_PARTED is partitioned for historical data storage:
    P1_HOT:  12/2013 and later
    P2_WARM: 01/2012 - 11/2013
    P3_COOL: 2009 - 2011
    P4_COLD: before 2009
• ADO goals:
  • Limit usage of tablespace ADO_HOT_DATA on flash storage
  • Apply appropriate compression as data grows "colder" over time
Leveraging Storage Tiers

Disk Group  Tablespaces                     Storage Tier
----------  ------------------------------  ----------------------------------------
+FLASH      ADO_HOT_DATA / ADO_HOT_IDX      Tier 0: SSD or flash
+WARM       ADO_WARM_DATA / ADO_WARM_IDX    Tier 1: fast HDD (SAS, outer cylinders)
+COOL       ADO_COOL_DATA / ADO_COOL_IDX    Tier 1: slower HDD (SAS, inner cylinders)
+COLD       ADO_COLD_DATA / ADO_COLD_IDX    Tier 2: slow HDD (SATA)
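Each tier can be backed by a tablespace created in the corresponding ASM disk group. A minimal sketch; the datafile sizes are illustrative except for ADO_HOT_DATA, which a later slide says was sized at just 25 MB:

```sql
-- Sketch: one data tablespace per storage tier, each in its ASM disk group.
CREATE TABLESPACE ado_hot_data  DATAFILE '+FLASH' SIZE 25M;
CREATE TABLESPACE ado_warm_data DATAFILE '+WARM'  SIZE 250M;
CREATE TABLESPACE ado_cool_data DATAFILE '+COOL'  SIZE 250M;
CREATE TABLESPACE ado_cold_data DATAFILE '+COLD'  SIZE 250M;
```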
Activating Heat Mapping

SQL> ALTER SYSTEM SET heat_map = ON;

Must be activated before any ADO policies are created!

Hottest Tablespaces (from DBA_HEATMAP_TOP_TABLESPACES)

Tablespace      Segment  Alloc Space  Earliest             Earliest             Earliest
Name            Count    (MB)         Write Time           FTS Time             Lookup Time
--------------- -------  -----------  -------------------  -------------------  -------------------
ADO_COLD_DATA         1           57                       2014-02-07.12:14:13  2014-02-07.00:07:55
ADO_COLD_IDX          0            1
ADO_COOL_DATA         1           41                       2014-02-07.12:14:13  2014-02-07.00:07:55
ADO_COOL_IDX          0            1
ADO_HOT_DATA          1            9  2014-02-07.00:07:55  2014-02-07.12:14:13  2014-02-07.00:07:55
ADO_HOT_IDX           0            1                       2014-02-07.18:03:35  2014-02-07.18:03:35
ADO_WARM_DATA         3           50                       2014-02-07.12:14:13  2014-02-07.00:07:55
ADO_WARM_IDX          0            1
AP_DATA               4          131                       2014-02-07.12:14:13
AP_IDX                7          130  2014-02-07.00:07:55  2014-02-07.12:14:13

Recently-Touched Segments (from DBA_HEAT_MAP_SEG_HISTOGRAM)

Owner  Object Name        Subobject Name  Last Touched         Wrtn To?  FTS?  LKP?
-----  -----------------  --------------  -------------------  --------  ----  ----
AP     RANDOMIZED_PARTED  P1_HOT          2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P2_WARM         2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P2_WARM         2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P3_COOL         2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P3_COOL         2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P4_COLD         2014-02-07 11:27:05  NO        YES   NO
AP     RANDOMIZED_PARTED  P4_COLD         2014-02-07 11:27:05  NO        YES   NO
AP     ROLLON_PARTED                      2014-02-07 11:27:05  NO        YES   NO
AP     ROLLON_PARTED_PK                   2014-02-07 11:27:05  NO        NO    YES
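Live, not-yet-flushed heat-map statistics can also be read directly. A sketch against the 12c dictionary views; the column list is assumed from the standard definitions of V$HEAT_MAP_SEGMENT and DBA_HEAT_MAP_SEG_HISTOGRAM:

```sql
-- Sketch: in-memory heat-map tracking for one schema, plus the
-- persisted per-day histogram that ADO policy evaluation relies on.
SELECT object_name, subobject_name, track_time,
       segment_write, full_scan, lookup_scan
  FROM v$heat_map_segment
 WHERE owner = 'AP';

SELECT object_name, subobject_name, track_time,
       segment_write, full_scan, lookup_scan
  FROM dba_heat_map_seg_histogram
 WHERE owner = 'AP'
 ORDER BY track_time DESC;
```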
Sample Objects

Non-Partitioned:

CREATE TABLE ap.rollon_parted (
   key_id    NUMBER(8)
  ,key_date  DATE
  ,key_desc  VARCHAR2(32)
  ,key_sts   NUMBER(2) NOT NULL
)
TABLESPACE ado_hot_data
NOLOGGING PARALLEL 4
ILM ADD POLICY TIER TO ado_warm_data;

Partitioned:

CREATE TABLE ap.randomized_parted (
   key_id    NUMBER(8)
  ,key_date  DATE
  ,key_desc  VARCHAR2(32)
  ,key_sts   NUMBER(2) NOT NULL
)
PARTITION BY RANGE(key_date) (
   PARTITION p4_cold VALUES LESS THAN (TO_DATE('2009-01-01','yyyy-mm-dd')) TABLESPACE ado_cold_data
  ,PARTITION p3_cool VALUES LESS THAN (TO_DATE('2012-01-01','yyyy-mm-dd')) TABLESPACE ado_cool_data
  ,PARTITION p2_warm VALUES LESS THAN (TO_DATE('2013-12-01','yyyy-mm-dd')) TABLESPACE ado_warm_data
  ,PARTITION p1_hot  VALUES LESS THAN (MAXVALUE) TABLESPACE ado_hot_data NOCOMPRESS
)
NOLOGGING PARALLEL 4;
Implementing ADO Time-Based Policies

Row-level compression policy:

ALTER TABLE ap.randomized_parted MODIFY PARTITION p1_hot
  ILM ADD POLICY ROW STORE COMPRESS ADVANCED
  ROW AFTER 180 DAYS OF NO MODIFICATION;

Segment-level compression policies, leveraging HCC compression:

ALTER TABLE ap.randomized_parted MODIFY PARTITION p2_warm
  ILM ADD POLICY COMPRESS FOR QUERY LOW
  SEGMENT AFTER 300 DAYS OF NO ACCESS;

ALTER TABLE ap.randomized_parted MODIFY PARTITION p3_cool
  ILM ADD POLICY COMPRESS FOR QUERY HIGH
  SEGMENT AFTER 600 DAYS OF NO ACCESS;

ALTER TABLE ap.randomized_parted MODIFY PARTITION p4_cold
  ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH
  SEGMENT AFTER 900 DAYS OF NO ACCESS;
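Once added, the policies can be verified in the dictionary. A sketch; the view and column names are assumed from the standard 12c ILM dictionary views (DBA_ILMOBJECTS, DBA_ILMDATAMOVEMENTPOLICIES):

```sql
-- Sketch: confirm the policies were registered and are enabled.
SELECT policy_name, object_name, subobject_name, enabled
  FROM dba_ilmobjects
 WHERE object_owner = 'AP';

-- Sketch: inspect what each data-movement policy will actually do.
SELECT policy_name, action_type, scope,
       compression_level, condition_type, condition_days
  FROM dba_ilmdatamovementpolicies;
```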
Testing ADO Policies: Global Policy Attributes

Adjusting ADO policy attributes for faster test cycles:

BEGIN
  -- Treats days like seconds
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
     parameter => DBMS_ILM_ADMIN.POLICY_TIME
    ,value     => DBMS_ILM_ADMIN.ILM_POLICY_IN_SECONDS);
  -- Sets the ILM execution interval
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
     parameter => DBMS_ILM_ADMIN.EXECUTION_INTERVAL
    ,value     => 3);
  -- Overrides tablespace fullness defaults
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
     parameter => DBMS_ILM_ADMIN.TBS_PERCENT_USED
    ,value     => 90);
  DBMS_ILM_ADMIN.CUSTOMIZE_ILM(
     parameter => DBMS_ILM_ADMIN.TBS_PERCENT_FREE
    ,value     => 25);
END;
/
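The effective settings can be checked afterwards. A sketch against DBA_ILMPARAMETERS (a standard 12c view exposing NAME/VALUE pairs for these ILM parameters):

```sql
-- Sketch: verify the customized ILM settings took effect.
SELECT name, value
  FROM dba_ilmparameters
 ORDER BY name;
```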
Testing ADO Policies: Executing Workloads

Inserts 50,000 new rows into AP.ROLLON_PARTED to test the space-based ADO policy:

DECLARE
  cur_max NUMBER := 0;
  new_max NUMBER := 0;
  ctr     NUMBER := 0;
BEGIN
  SELECT MAX(key_id) INTO cur_max FROM ap.rollon_parted;
  cur_max := cur_max + 1;
  new_max := cur_max + 50000;
  FOR ctr IN cur_max..new_max LOOP
    INSERT INTO ap.rollon_parted
    VALUES(ctr
          ,(TO_DATE('12/31/2013','mm/dd/yyyy') + DBMS_RANDOM.VALUE(1,90))
          ,'NEWHOTROW'
          ,0);  -- key_sts is NUMBER(2), so use a numeric status value
    IF MOD(ctr, 5000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN NULL;  -- demo-only: swallows errors, never do this in production
END;
/
Testing ADO Policies: Executing Workloads

Touches 10,000 rows in the "colder" table partition:

SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
  FROM ap.randomized_parted
 WHERE key_date BETWEEN TO_DATE('2011-03-15','YYYY-MM-DD')
                    AND TO_DATE('2011-04-14','YYYY-MM-DD')
   AND ROWNUM < 10001;

Touches 10,000 rows in the "warmer" table partition:

SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
  FROM ap.randomized_parted
 WHERE key_date BETWEEN TO_DATE('2012-03-15','YYYY-MM-DD')
                    AND TO_DATE('2013-11-30','YYYY-MM-DD')
   AND ROWNUM < 10001;

Updates 50,000 rows in the "hottest" table partition:

UPDATE ap.randomized_parted
   SET key_desc = 'Modified *** MODIFIED!!!'
 WHERE key_desc <> 'Modified *** MODIFIED!!!'
   AND key_date BETWEEN TO_DATE('2013-12-01','YYYY-MM-DD')
                    AND TO_DATE('2013-12-31','YYYY-MM-DD')
   AND ROWNUM < 50001;
COMMIT;
Testing ADO Policies: Forcing ILM Evaluation

DECLARE
  tid NUMBER;
BEGIN
  DBMS_ILM.EXECUTE_ILM(owner => 'AP', object_name => 'ROLLON_PARTED',     task_id => tid);
  DBMS_ILM.EXECUTE_ILM(owner => 'AP', object_name => 'RANDOMIZED_PARTED', task_id => tid);
END;
/

ILM Objects Most Recently Evaluated (from DBA_ILMEVALUATIONDETAILS)

Task  ILM                                  Subobject
ID    Policy  Owner  Object Name           Name      Object Type      Reason Chosen               Job Name
----  ------  -----  --------------------  --------  ---------------  --------------------------  ---------
2905  P150    AP     RANDOMIZED_PARTED     P4_COLD   TABLE PARTITION  SELECTED FOR EXECUTION      ILMJOB582
2905  P149    AP     RANDOMIZED_PARTED     P3_COOL   TABLE PARTITION  SELECTED FOR EXECUTION      ILMJOB580
2905  P147    AP     RANDOMIZED_PARTED     P1_HOT    TABLE PARTITION  SELECTED FOR EXECUTION      ILMJOB576
2905  P148    AP     RANDOMIZED_PARTED     P2_WARM   TABLE PARTITION  SELECTED FOR EXECUTION      ILMJOB578
. . . . .
2895  P146    AP     ROLLON_PARTED                   TABLE            POLICY DISABLED
2889  P150    AP     RANDOMIZED_PARTED     P4_COLD   TABLE PARTITION  PRECONDITION NOT SATISFIED
2889  P148    AP     RANDOMIZED_PARTED     P2_WARM   TABLE PARTITION  PRECONDITION NOT SATISFIED
2889  P149    AP     RANDOMIZED_PARTED     P3_COOL   TABLE PARTITION  PRECONDITION NOT SATISFIED
2889  P147    AP     RANDOMIZED_PARTED     P1_HOT    TABLE PARTITION  SELECTED FOR EXECUTION      ILMJOB564
2888  P146    AP     ROLLON_PARTED                   TABLE            SELECTED FOR EXECUTION      ILMJOB562
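The tasks and jobs spawned by the evaluation can then be tracked. A sketch; the view and column names are assumed from the standard 12c ILM dictionary views (DBA_ILMTASKS, DBA_ILMRESULTS):

```sql
-- Sketch: follow up on ILM task states and the jobs they launched.
SELECT task_id, state, creation_time
  FROM dba_ilmtasks
 ORDER BY task_id DESC;

SELECT task_id, job_name, job_state
  FROM dba_ilmresults
 ORDER BY task_id DESC;
```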
ADO Space-Based Policies: Results

Initially, tablespace ADO_HOT_DATA (sized at just 25MB) is almost 50% empty:

Tablespace       Free Space
Name             (MB)
---------------  ----------
ADO_COLD_DATA    87
ADO_COOL_DATA    135
ADO_HOT_DATA     14
ADO_WARM_DATA    151

Data grows in table AP.ROLLON_PARTED, so ADO_HOT_DATA falls below the 25% ILM free-space limit:

Tablespace       Free Space
Name             (MB)
---------------  ----------
ADO_COLD_DATA    87
ADO_COOL_DATA    135
ADO_HOT_DATA     2
ADO_WARM_DATA    151

The next ILM evaluation detects the change, so AP.ROLLON_PARTED moves automatically to ADO_WARM_DATA:

Tablespace       Free Space
Name             (MB)
---------------  ----------
ADO_COLD_DATA    87
ADO_COOL_DATA    135
ADO_HOT_DATA     16
ADO_WARM_DATA    127
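Snapshots like the ones above can be produced with two standard dictionary queries; a sketch:

```sql
-- Sketch: free space per ADO tablespace, and the segment's current home.
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name LIKE 'ADO%'
 GROUP BY tablespace_name
 ORDER BY tablespace_name;

SELECT segment_name, tablespace_name
  FROM dba_segments
 WHERE owner = 'AP' AND segment_name = 'ROLLON_PARTED';
```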
ADO Time-Based Policies: Results

Table partition sizes and compression before ADO policies went into effect
(from DBA_TAB_PARTITIONS):

Partition  Compression  Compress      Row        # of    Avg Row
Name       Level        For           Count      Blocks  Len
---------  -----------  ------------  ---------  ------  -------
P1_HOT     DISABLED                      39,801     232  34
P2_WARM    DISABLED                     958,717   5,255  34
P3_COOL    DISABLED                   1,500,592   8,185  34
P4_COLD    DISABLED                   2,500,890  13,587  34

... and a few moments after the test workloads were applied and an ILM refresh requested:

Partition  Compression  Compress      Row        # of    Avg Row
Name       Level        For           Count      Blocks  Len
---------  -----------  ------------  ---------  ------  -------
P1_HOT     DISABLED                      39,801     232  34
P2_WARM    ENABLED                      958,717   4,837  34
P3_COOL    ENABLED      QUERY HIGH    1,500,592   1,603  34
P4_COLD    ENABLED      ARCHIVE HIGH  2,500,890   1,879  34
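The reports above come straight from DBA_TAB_PARTITIONS; a sketch of the underlying query (statistics should be gathered first so row/block counts are current):

```sql
-- Sketch: per-partition compression status after the ILM jobs complete.
SELECT partition_name, compression, compress_for,
       num_rows, blocks, avg_row_len
  FROM dba_tab_partitions
 WHERE table_owner = 'AP'
   AND table_name  = 'RANDOMIZED_PARTED'
 ORDER BY partition_position;
```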
ADO Compression: Performance Impacts

Sample queries, with statistics before and after compression:

SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
  FROM ap.randomized_parted
 WHERE key_date BETWEEN TO_DATE('2005-03-15','YYYY-MM-DD')
                    AND TO_DATE('2008-04-14','YYYY-MM-DD');

Statistic       Before Compression  After Compression
Execution Time  3.61s               3.29s
Physical Reads  5,185               4,783
Optimizer Cost  398                 366

SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
  FROM ap.randomized_parted
 WHERE key_date BETWEEN TO_DATE('2010-01-15','YYYY-MM-DD')
                    AND TO_DATE('2011-10-15','YYYY-MM-DD');

Statistic       Before Compression  After Compression
Execution Time  4.46s               1.88s
Physical Reads  8,074               1,605
Optimizer Cost  619                 124

SELECT MAX(LENGTH(key_desc)), COUNT(key_sts)
  FROM ap.randomized_parted
 WHERE key_date BETWEEN TO_DATE('2012-01-15','YYYY-MM-DD')
                    AND TO_DATE('2013-10-15','YYYY-MM-DD');

Statistic       Before Compression  After Compression
Execution Time  5.77s               4.23s
Physical Reads  13,451              2,116
Optimizer Cost  1,027               147
Over To You
How Hot Is My Data?
Leveraging Automatic Data Optimization (ADO) Features in Oracle Database 12c for Dramatic Performance Improvements

If you have any questions or comments, feel free to:
• E-mail me at vnairaj@gmail.com or vmalla1@miraclesoft.com
• Connect with me on LinkedIn (Vinay Raj Malla)

Thank You For Your Kind Attention

Editor's notes

1. Explain:
   • Heat maps
   • ILM policies: heat-map-related and space-related
   • ILM policy default settings
2. Demonstrate:
   • Setting ILM policy default settings
   • Activating heat mapping (ORA-38342: WTF??)
   • Setting ILM policies for segments
   • Setting ILM policies for rows
   • Setting ILM policies for space usage
   • Enabling ILM policies
   • Setting ILM policies for << others? Enumerate! >>
   • Forcing ILM segment compression
   • Forcing ILM row movement
   • Forcing ILM space reduction/compression
   • Disabling ILM policies
   • Deactivating heat mapping
   • Removing ILM policies