The Oracle SQLT Utility
Presented to The Midwest Oracle Users Group – October 24, 2017
Kevin Gilpin, Advizex
kgilpin@advizex.com
Agenda
2
• Background/Overview
• How SQLT Works
• SQLT installation
• Using SQLT
• SQLT Maintenance
• Recommendations
• References
3
Background/Overview
4
• The utility is used to troubleshoot SQL performance as a
supplement to the Oracle SQL Tuning Advisor (STA) and
provides a superset of the information produced by the STA.
• SQLT is a free utility developed by the Oracle Server
Technologies Center of Expertise (CoE). It is downloadable
from My Oracle Support Note 215187.1.
• It consists of some database schema objects and some
associated SQL scripts and also includes the CoE Trace
Analyzer utility.
Background/Overview
5
• Studying how the SQLT and TRCA utilities work leads to an
appreciation of how far Oracle has come in providing
automatic SQL tuning functionality and of what the
database is capable of doing to manage SQL performance.
SQL baselines, profiles and plan management have
become impressively robust, and it is worth the effort to
master how these structures work.
• The PL/SQL code is unwrapped, allowing DBAs to learn
how the utility works and how Oracle manages
execution plans, SQL Profiles and related items.
Background/Overview
6
• The SQL Tuning Advisor is one of several advisors available
if the Tuning Pack is licensed. The advisor performs
in-depth analysis of SQL and derives pertinent configuration
findings and performance recommendations that can be
implemented with provided syntax, usually in the form of a
SQL profile.
• The Trace Analyzer (TRCA) utility is closely related to the
SQLT utility and is installed as part of the SQLT installation.
This utility can also be downloaded, installed and used
separately from SQLT. This utility creates an HTML report
describing the pertinent information in a SQL trace file
produced with the 10046 trace event.
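A 10046 trace file suitable for TRCA input can be produced from the session that runs the SQL; a minimal sketch (level 12 captures both bind values and wait events; the tracefile identifier is a hypothetical tag):

```sql
-- Tag the trace file so it is easy to find in the trace directory
ALTER SESSION SET tracefile_identifier = 'trca_demo';

-- Enable extended SQL trace: level 12 = binds + waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... execute the SQL statement of interest here ...

-- Disable tracing; the file is written to the diagnostic trace directory
ALTER SESSION SET EVENTS '10046 trace name context off';
```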
Background/Overview
7
• SQL Plan Management is a framework that Oracle Database
uses to prevent performance regression of SQL statements
as the data evolves and other conditions change such as
index changes and database patches and upgrades are
installed.
• A SQL Plan Baseline is a set of SQL execution plans that
have been used at various times for a given SQL statement.
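The plans a baseline holds can be listed from the data dictionary; a quick sketch (the sql_text filter is a hypothetical example):

```sql
-- List the plans in a statement's baseline and whether each is accepted
SELECT sql_handle, plan_name, origin, enabled, accepted, fixed
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%ORDERS%'
ORDER  BY created;
```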
Background/Overview
8
• A SQL profile is a set of performance data and corrections to
execution plans that have been derived by the Oracle
optimizer. SQL profiles are created by invoking the SQL
Tuning Advisor to perform an in-depth analysis of the data,
the statement construct and the objects that the SQL
statement uses.
• A SQL plan hash value is an identifier for an execution plan
that the Oracle optimizer has derived for a given SQL
statement. This identifier is used by SQL Plan Management
to manage plans and metadata related to those plans.
Background/Overview
9
• All of these features are used directly or indirectly by SQLT.
• The SQLT and TRCA versions have been synchronized since 2012.
Background/Overview
10
• Can be used with or without licenses for the Tuning and Diagnostics
Packs.
• Answer these script prompts during the install:
• "T" if you have license for Diagnostic and Tuning
• "D" if you have license only for Oracle Diagnostic
• "N" if you do not have these two licenses
• Gives the most comprehensive metadata report about your SQL.
• Derived information presented in an impressive and useful HTML
report.
• The utility is easy to use and saves the time and effort of manually
collecting configuration and metadata related to a SQL statement.
Background/Overview
11
• Can be used with RDBMS 9.2 and higher.
• Be sure to download the proper version of SQLT for the database
version you intend to use it with.
• The 12.1 version works with RDBMS 12.2.
• Hot off the press: SQLT v12.2.171004 now available!
• First 12.2 release (12.2.170914) released 9/14/17.
• See MOS Note 215187.1 as the main SQLT information source.
• SQLT is now bundled with TFA (see note 1513912.1).
How SQLT works
12
• SQLT works by calling STA and TRCA and its own PL/SQL
packages and scripts to perform various analyses and
manipulations on SQL execution plans, SQL profiles and
trace files.
• dbms_spm, dbms_sqltune, various dictionary queries.
The code of these scripts is in <SQLT_HOME>/install/
and <SQLT_HOME>/run/.
• The tool outputs reports of its configuration findings and
tuning recommendations and of the metadata about the
relevant database objects and SQL statements as well as
scripts to implement configuration changes and SQL profiles.
How SQLT works
13
• In performing your own SQL performance troubleshooting
(now called SQL Plan Management), have you found yourself
hunting down information related to the statement by
executing many dictionary queries, poking around OEM,
deriving DDL for relevant objects, hunting down execution
plans, etc.?
• That takes a LOT of time and effort!
• OEM is great and the tuning advisors are great and ASH
reports and the awrsqrpt.sql script report are great and should
be used, but …
How SQLT works
14
• SQLT supplements all of those and saves you all of that
research time that you used to spend when investigating SQL
performance.
• SQLT gives you ALL of that information in one place – in
one phenomenally detailed html report. It is phenomenal
because it gives you everything you would have thought of
researching related to the SQL statement, plus a lot of stuff
you would not have thought of.
• We will examine an example of this report.
15
SQLT Installation
16
• Download the proper zip file from MOS Note 215187.1 or
1513912.1.
• Unzip the file to any appropriate working directory.
• Navigate to the <SQLT_HOME>/install/ directory, invoke
SQL*Plus and connect to the database that runs the relevant SQL
and execute the sqcreate.sql script. Execute the script while
connected as “sys as sysdba”.
• The SQLTXPLAIN schema (v12.1) requires less than 10MB of
space for the installation. Allocate 50-100MB of space for this
schema to allow for data creation in the schema as the utility is
used.
SQLT Installation
17
• The installation will create a database user named SQLTXPLAIN and these
objects in its schema.
OBJECT_TYPE COUNT(*)
------------------------------ ----------
PROCEDURE 1
SEQUENCE 12
SYNONYM 17
TABLE PARTITION 32
LOB 110
TABLE 215
INDEX 246
SQLT Installation
18
Installation prompts:
1. Connect identifier for TNS connections (optional).
2. SQLTXPLAIN database user password.
3. Default and temp tablespaces for SQLTXPLAIN user.
4. Main application user of the SQLT utility. This user will be the one to execute
the SQLT scripts. Note that additional users can be added to the
configuration later.
5. Management Pack license, if any:
• "T" if you have license for Diagnostic and Tuning
• "D" if you have license only for Oracle Diagnostic
• "N" if you do not have these two licenses
• After the last prompt, the remainder of the installation should take just a few
minutes to complete.
SQLT Installation
19
• The latter part of the installation steps could take much longer than a few
minutes if the database has a large number of objects, such as in an Oracle
EBS installation.
• At the end of the installation, review the *.log files in the
<SQLT_HOME>/install/log directory.
• To enable the use of the SQLT utility by additional database users, grant the
SQLT_USER_ROLE role to those users.
• This role has the system privileges
ADMINISTER_SQL_MANAGEMENT_OBJECT and ADVISOR granted to it in
RDBMS 12.1 and only ADVISOR in RDBMS 12.2. The SQLTXPLAIN user has
the SELECT_CATALOG_ROLE role granted to it.
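Granting access to an additional user is a one-line operation; a sketch with a hypothetical user name:

```sql
-- Let a hypothetical user APPUSER run the SQLT scripts
GRANT SQLT_USER_ROLE TO appuser;
```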
SQLT Installation
20
• If you wish to “start over” and just purge the entire SQLT schema and re-install
it, execute the <SQLT_HOME>/install/sqdrop.sql in SQL*Plus as “sys as
sysdba”.
• The installation will create six database directory objects that reference the
database user_dump_dest or diagnostic_dest directory (for trace file access).
• SQLT installs and uses the Oracle CoE Trace Analyzer (TRCA) utility. See
MOS Note 224270.1 for a description of Trace Analyzer.
• If you wish, you can install and use TRCA separately in the same database as
SQLT, although there is no need to.
21
Using SQLT
22
• Determine the appropriate mode (method) in which to run SQLT.
• Execute the proper script with the proper syntax as the database
user that executes the particular SQL under normal application
conditions.
• Some execution modes may take a couple of minutes or less and
some may take much longer, depending on the SQL to be evaluated.
• Find the zip file, unzip it and open the *main.html file in a browser.
• Read the full report.
Using SQLT
23
• Key things to look for – Observations, STA Report, STA Script,
Session Events, the subsections in the Plans section of the report, “hot
blocks” section of the TRCANLZR report.
• Read the recommendations report and script, if they exist.
• Implement the recommendations in test environment and evaluate
the effects. Migrate recommendations to production if appropriate.
• A prudent usage scenario is to install it with all new databases.
Using SQLT
24
• Sample SQLT report discussion.
Using SQLT
25
• Test the syntax to “un-implement” (back out) the recommendations in
the test environment.
• Decide whether to implement the recommendations in production.
• Prepare to “un-implement” the recommendations from production if
the results are detrimental.
Using SQLT
26
XTRACT Method
• Probably the most common SQLT usage method.
• This method captures information about the particular SQL from the
library cache.
• Invoke SQL*Plus from the <SQLT_HOME>/ directory and connect as
the database user that would normally execute a particular SQL
statement or set of statements and execute this command:
SQL> start run/sqltxtract <SQLID>
• The sqltxtract.sql script will produce a zip file in the <SQLT_HOME>/
directory. Unzip this zip file and open the *main.html file in a browser.
Using SQLT
27
XECUTE Method
• Takes as its input a text file containing a SQL statement or STS.
• Invoke SQL*Plus from the <SQLT_HOME>/ directory and connect as
the database user that would normally execute a particular SQL
statement or set of statements and execute this command:
SQL> start run/sqltxecute <full_pathname_of_sql_script>
Using SQLT
28
XECUTE Method
• The sqltxecute.sql script executes the SQL statement or statements
in the STS.
• Therefore, the length of time that the sqltxecute.sql script takes is
bounded by how long those SQL statements take to execute.
• The product of executing the script is a zipfile produced in the
<SQLT_HOME>/ directory. To use this file, unzip it in any directory
and open the *main.html file.
Using SQLT
29
XTRXEC Method
• The XTRXEC method is a hybrid of the XTRACT and XECUTE
methods. It performs its analysis on a currently running or recently
run SQL statement and then produces a script that is used to execute
the given statement.
• The literal values substituted for bind variables in the statement are
those derived from bind peeking at the time of executing the
statement with the most expensive plans derived by the optimizer for
the statement.
Using SQLT
30
XTRXEC Method
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltxtrxec <SQLID>
where <SQLID> is the SQL ID of a currently running or recently run
SQL statement. To find this SQLID, use Enterprise Manager’s Top
Activity page or query the v$sql dynamic performance view.
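The v$sql lookup mentioned above can be as simple as the following sketch (the text filter is a hypothetical example):

```sql
-- Locate the SQL ID of a recently run statement by a fragment of its text
SELECT sql_id, plan_hash_value, executions,
       ROUND(elapsed_time / 1e6, 2) AS elapsed_secs
FROM   v$sql
WHERE  sql_text LIKE '%ORDERS%'
ORDER  BY last_active_time DESC;
```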
Using SQLT
31
XPLAIN Method
• The sqltxplain.sql script executes explain plan on the SQL
statement or statements in the STS.
• The sqltxplain.sql script does not execute the statement.
• It is recommended that the XTRACT, XECUTE or XTRXEC methods
be used before considering the XPLAIN method because they
are more accurate. They are more accurate because XPLAIN cannot
utilize bind peeking.
Using SQLT
32
XPLAIN Method
• The XPLAIN script will be least accurate if the SQL statement has
bind variables. If it does not, then it should be fairly accurate.
• Consider optimizer effects from adaptive cursor sharing,
cursor_sharing!
• The product of executing the script is a zipfile produced in the
<SQLT_HOME>/ directory. To use this file, unzip it in any directory
and open the *main.html file.
Using SQLT
33
COMPARE Method
• The COMPARE method uses the data in the SQLTXPLAIN database
schema from two different databases in which a given SQL
statement or STS has inexplicably produced different execution
plans and the goal is to find out why.
• The sqltcompare.sql script takes two SQLID values as input and
assumes that the SQLTXPLAIN schema data has been exported from
one of the databases and imported into the other.
Using SQLT
34
COMPARE Method
• Alternatively, a third database could be used into which the
SQLTXPLAIN data from both databases has been imported.
• The output file is similar to the other SQLT methods and should
contain the reason(s) for why the execution plans used by each
database are different.
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltcompare <SQLID1> <SQLID2>
Using SQLT
35
TRCAXTR Method
• The TRCAXTR method extends the functionality of the TRCANLZR
method by first executing the TRCANLZR method on the given SQL
trace file and then calling the XTRACT method on the top SQL that
is found in the file.
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltrcaxtr <full_pathname_of_trace_file>
Using SQLT
36
TRCASET Method
• The TRCASET method is automatically executed by the TRCAXTR
method.
• In standalone execution mode, this method allows the user to select
particular SQLIDs upon which prior analyses have been performed.
• The user picks one prior analysis to have the XTRACT method
performed on.
Using SQLT
37
TRCASET Method
• This method can be executed in standalone mode by navigating to the
<SQLT_HOME>/ directory, invoking SQL*Plus, connecting to the
main application user that executes the SQL in question, and
executing this command:
SQL> start run/sqltrcaset [list of SQLIDs and PHVs]
Using SQLT
38
TRCANLZR Method
• This method invokes the Trace Analyzer (TRCA) utility.
• This method negates the need to separately download and install the
TRCA utility because it is the same utility.
Using SQLT
39
TRCANLZR Method
• Syntax:
Invoke SQL*Plus from the <SQLT_HOME>/ directory and connect
as the database user that would normally execute a particular SQL
statement or set of statements and execute this command:
SQL> start run/sqltrcanlzr <full pathname of an existing trace file
generated with event 10046>
• The sqltrcanlzr.sql script will produce a zip file in the <SQLT_HOME>/
directory. Unzip this zip file and open the *.html file in a browser.
Using SQLT
40
TRCASPLIT Method
• The TRCASPLIT method takes a SQL trace file that was produced
with events 10046 and 10053 and splits the trace file into two
separate trace files – one for the 10046 data and one for the 10053
data.
• The two separate trace files produced could then be separately used
with the TRCANLZR method, if desired.
Using SQLT
41
TRCASPLIT Method
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltrcasplit <full_pathname_of_trace_file>
Using SQLT
42
XTRSET Method
• The XTRSET method is an extension of the XTRACT method in
which a set of SQLIDs is passed to the script; the script derives
the data about these SQLIDs from the library cache and/or the AWR
and combines the output for all of them into one report.
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltxtrset <SQLID1> <SQLID2> … <SQLIDn>
Using SQLT
43
PROFILE Method
• The PROFILE method requires that you first execute the XTRACT or
XECUTE method on a particular SQLID.
• The script accepts a SQLID as input. When the sqltprofile script is
executed, it will prompt for a plan hash value from a list of plan hash
values captured from previous executions of XTRACT or XECUTE on
the indicated SQLID.
Using SQLT
44
PROFILE Method
• The user can choose a particular one of these plan hash values and
the sqltprofile.sql script will output a script to install that plan hash
value as the one to be used for that SQLID by use of a SQL profile.
• To execute this method, navigate to the <SQLT_HOME>/ directory,
invoke SQL*Plus and connect to the main application user that
executes the SQL in question, and execute this command:
SQL> start run/sqltprofile <SQLID>
Using SQLT
45
XTRSBY Method
• Like XTRACT, XECUTE, XTRXEC, but will analyze the statement in
the instance of a read-only standby database (Active Data Guard).
• Invoke the methods while connected to the read-write database.
• Create a database link to the standby database.
SQL> start run/sqltxtrsby <SQLID> <dblink>
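The database link can be any link from the primary to the standby that resolves to the read-only instance; a minimal sketch with hypothetical names (stby_link, the password and the TNS alias are assumptions):

```sql
-- Hypothetical link pointing at the read-only standby
CREATE DATABASE LINK stby_link
  CONNECT TO sqltxplain IDENTIFIED BY "secret_password"
  USING 'STBY_TNS_ALIAS';
```

With the link in place, pass its name as the second argument to sqltxtrsby.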
Using SQLT
46
XGRAM Method
• The XGRAM method involves the usage of several scripts in the
<SQLT_HOME>/utl/xgram/ directory.
• These scripts can be used to make modifications to stored
optimizer histograms related to particular SQL.
• Before using these, read all of the scripts in the
<SQLT_HOME>/utl/xgram/ directory to determine which is appropriate
to execute in a given case.
Using SQLT
47
XGRAM Method
• The <SQLT_HOME>/utl/xgram/sqlt_display_column_stats.sql script is
a handy script to execute.
• See sample output on the next two slides.
48
SQL> @sqlt_display_column_stats.sql
Table Owner [SOE]: SOE
Table Name [UNKNOWN]: ORDERS
Partition Name [NULL]: NULL
Column Name [UNKNOWN]: ORDER_ID
sqlt_display_column_stats_20170721031245.log
p_ownname : 'SOE'
p_tabname : 'ORDERS'
p_partname: ''
p_colname : 'ORDER_ID'
Using SQLT
49
SELECT * FROM TABLE(SQLTXADMIN.sqlt$s.display_column_stats('SOE',
'ORDERS', 'ORDER_ID', ''));
ownname:SOE tabname:ORDERS colname:ORDER_ID partname:<partname>
distcnt:1432690 density:6.979877e-07 nullcnt:0 avgclen:6
type:"NUMBER" hgrm:"NONE" buckets:1 analyzed:"2017/07/20 03:09:46"
sample:1432690 global:"YES" user:"NO" rows:1432690
minval:"1"(C102)
maxval:"1432691"(C4022C1B5C)
epc:2
ep:1 bkvals:0 novals:1 value:"1" size:0
ep:2 bkvals:1 novals:1432691 value:"1432691" size:1
SQLT_DISPLAY_COLUMN_STATS completed.
Using SQLT
50
XPLORE Method
• The conditions to consider using the XPLORE method are rare.
• The scripts to use this method are in the <SQLT_HOME>/utl/xplore/
directory.
• Read the readme.txt file in that directory before doing anything else
with this method.
Using SQLT
51
XPLORE Method
• This method is appropriate to use in rare cases in which the
optimizer is producing invalid results following an upgrade.
• The scripts used by this method will “explore” variations in the
optimizer_features_enabled and fix parameters that may have led
to the problematic behavior.
• You must install this script separately using the
<SQLT_HOME>/utl/xplore/install.sql script. Read the README.txt file
for it first.
Using SQLT
53
XPLORE Method
• This method will query the v$system_fix_control view (1,301 rows on
a new, empty RDBMS 12.2 database).
• This method will examine execution plans derived using various
combinations of the entries found in v$system_fix_control. If one is
found that has a lower cost than a plan that belongs to that sql_id's
baseline, then the value corresponding to that entry (bug) in
v$system_fix_control will be displayed.
Using SQLT
54
XPLORE Method
• You can then test executing that sql_id using that fix_control value, as
follows:
SELECT /*+ opt_param('_fix_control', 'bugno:eventno') */ …
• You can also just go to MOS and look up anything useful related to
that bug number.
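Before testing the hint, the entry itself can be inspected; a sketch against the view named above (the bug number is supplied at the substitution prompt):

```sql
-- Inspect a single fix-control entry by its bug number
SELECT bugno, value, description, optimizer_feature_enable
FROM   v$system_fix_control
WHERE  bugno = &bug_number;
```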
55
SQLT Maintenance and Recommendations
56
• Remove old, orphaned SQL profiles.
• Delete old zip files produced by SQLT executions.
• Space consumption by the SQLTXPLAIN database schema. Purge it
occasionally.
• To add additional database users to the configuration so that you can
run SQLT as those users, grant the role SQLT_USER_ROLE to those
users.
• Upgrading SQLT – instructions in the sqlt_instructions.html file (in zip
distribution bundle).
SQLT Maintenance and Recommendations
57
• If you wish to “start over” and just purge the entire SQLT schema and
re-install it, connect as “sys as sysdba” and execute the
<SQLT_HOME>/install/sqdrop.sql.
• Be patient when installing and running the utility on databases that
have a *lot* of objects (Oracle EBS, BAAN, etc.). The utility executes
queries on dictionary tables like all_objects, etc.
• You will get the best results from SQLTXTRACT and SQLTXECUTE if
you execute these scripts as soon as possible after the SQL of
interest has started, if not at the moment it starts executing.
SQLT Maintenance and Recommendations
58
• Optimizer statistics on the SQLTXPLAIN schema are locked after
the SQLT installation. To gather stats on them, you must first unlock
them with
dbms_stats.unlock_schema_stats(ownname=>'SQLTXPLAIN');
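A full refresh of the repository statistics can follow the unlock; a sketch that re-locks afterward (re-locking to restore the post-install state is an assumption, not an SQLT requirement):

```sql
-- Unlock, gather and re-lock optimizer stats for the SQLT repository schema
BEGIN
  dbms_stats.unlock_schema_stats(ownname => 'SQLTXPLAIN');
  dbms_stats.gather_schema_stats(ownname => 'SQLTXPLAIN');
  dbms_stats.lock_schema_stats(ownname => 'SQLTXPLAIN');
END;
/
```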
• If you have a database with a *lot* of extents (a lot of rows in
sys.fet$), the installation may take a long time. Be patient with the
install because subsequent executions of the SQLT scripts should
then not suffer some of the performance issues caused by bugs in
some database versions, like bugs 2948717, 5029334, 5259025.
Some MOS notes like 422730.1 suggest gathering stats on
sys.x$ktfbue. Otherwise, upgrading/patching may be required to
resolve these.
SQLT Maintenance and Recommendations
59
• Due to bugs like this, it might be helpful, for example, to create and
schedule a batch script that drops and re-installs SQLT periodically
so that you have the freshest data in tables like
SQLTXPLAIN.TRCA$_EXTENTS.
• It is highly prudent to run frequently used and high importance SQL
through SQLT on a regular basis and review the findings and
recommendations. This can be scripted and scheduled (cron SQLT to
run during/after their important runs?).
60
SQL Health Check (SQLHC)
61
• Your boss: “Yeah, that SQLT thing is great, but we have these
restrictions, you see, on installing things in production and doing this
kind of SQL testing and research in production.”
• You: “Point well taken. Behold, I give you SQLHC.”
• SQLHC implements a subset of the functionality of SQLT and gives
you a nice html report of its findings.
• SQLHC does not execute anything. It simply queries the dictionary
and performance views pertaining to a SQL ID and gives
you a report.
SQL Health Check (SQLHC)
62
• You don’t have to install anything to use SQLHC. Simply download
the script from Oracle Support Note 1366133.1 and execute it (yes, in
production). It is non-intrusive.
• Discuss sample report.
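Per MOS Note 1366133.1, the script takes the Management Pack license level and the SQL ID as its two arguments; a sketch in the deck's notation:

```sql
-- Run SQLHC with the Tuning Pack license level against a given statement
SQL> start sqlhc.sql T <SQLID>
```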
63
TFA
64
• See Oracle Support Note 1513912.1.
• Oracle has created a bundle of tuning tools that seems likely to
evolve as a distribution of related tools.
• SQLT
• TFA
• oswatcher
• procwatcher
• oratop
• Others
SQLd360
65
• What is it? Who created it?
• A tool similar to SQLT created by the creator of SQLT.
• Comparison to SQLT:
• No installation required.
• Runs faster, but *may* be less comprehensive.
• How to install and use it.
• Download script(s) and execute them.
eAdam
66
• What is it? Who created it?
• A tool similar to SQLT created by the creator of SQLT.
• Comparison to SQLT:
• Same direction of evolution as SQLd360, but eAdam is intended to
be an “AWR repository” of AWR data from potentially multiple
databases.
• How to install and use it.
• Download scripts and execute.
67
References
68
My Oracle Support Notes
• 215187.1 – SQLT main note.
• 1454160.1 – SQLT FAQ.
• 1513912.1 - TFA.
• 224270.1 – TRCANLZR – Tool for interpreting raw SQL traces.
• 1229904.1 – Real time SQL monitoring in 11g.
• 1401111.1 – SQLT webcast.
• 199083.1 – SQL query performance overview.
• 376442.1 – How to collect 10046 trace diagnostics for performance
issues.
• 398838.1 – FAQ: SQL query performance.
References
69
My Oracle Support Notes
• 276103.1 – Performance tuning using Advisors and manageability
features
• 1366133.1 – SQL Tuning Health Check Script (SQLHC)
• 1417774.1 – FAQ: SQL Health Check (SQLHC) Frequently Asked
Questions
Thank You!
Kevin Gilpin, Advizex
kgilpin@advizex.com

Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.
Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.Geir Høydalsvik
 
Upgrading to my sql 8.0
Upgrading to my sql 8.0Upgrading to my sql 8.0
Upgrading to my sql 8.0Ståle Deraas
 
Oracle to Netezza Migration Casestudy
Oracle to Netezza Migration CasestudyOracle to Netezza Migration Casestudy
Oracle to Netezza Migration CasestudyAsis Mohanty
 
Flink and Hive integration - unifying enterprise data processing systems
Flink and Hive integration - unifying enterprise data processing systemsFlink and Hive integration - unifying enterprise data processing systems
Flink and Hive integration - unifying enterprise data processing systemsBowen Li
 
Unify Enterprise Data Processing System Platform Level Integration of Flink a...
Unify Enterprise Data Processing System Platform Level Integration of Flink a...Unify Enterprise Data Processing System Platform Level Integration of Flink a...
Unify Enterprise Data Processing System Platform Level Integration of Flink a...Flink Forward
 
collab2011-tuning-ebusiness-421966.pdf
collab2011-tuning-ebusiness-421966.pdfcollab2011-tuning-ebusiness-421966.pdf
collab2011-tuning-ebusiness-421966.pdfElboulmaniMohamed
 
E business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsE business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsSrinivasa Pavan Marti
 
E business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsE business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsSrinivasa Pavan Marti
 
Remote DBA Experts 11g Features
Remote DBA Experts 11g FeaturesRemote DBA Experts 11g Features
Remote DBA Experts 11g FeaturesRemote DBA Experts
 

Similar a MOUG17: DB Security; Secure your Data (20)

SQL PPT.pptx
SQL PPT.pptxSQL PPT.pptx
SQL PPT.pptx
 
Liquibase få kontroll på dina databasförändringar
Liquibase   få kontroll på dina databasförändringarLiquibase   få kontroll på dina databasförändringar
Liquibase få kontroll på dina databasförändringar
 
01 oracle architecture
01 oracle architecture01 oracle architecture
01 oracle architecture
 
Migration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQLMigration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQL
 
COUG_AAbate_Oracle_Database_12c_New_Features
COUG_AAbate_Oracle_Database_12c_New_FeaturesCOUG_AAbate_Oracle_Database_12c_New_Features
COUG_AAbate_Oracle_Database_12c_New_Features
 
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_DatabaseNoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
 
Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.
Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.
Simplifying MySQL, Pre-FOSDEM MySQL Days, Brussels, January 30, 2020.
 
Upgrading to my sql 8.0
Upgrading to my sql 8.0Upgrading to my sql 8.0
Upgrading to my sql 8.0
 
Oracle to Netezza Migration Casestudy
Oracle to Netezza Migration CasestudyOracle to Netezza Migration Casestudy
Oracle to Netezza Migration Casestudy
 
Free oracle performance tools
Free oracle performance toolsFree oracle performance tools
Free oracle performance tools
 
Flink and Hive integration - unifying enterprise data processing systems
Flink and Hive integration - unifying enterprise data processing systemsFlink and Hive integration - unifying enterprise data processing systems
Flink and Hive integration - unifying enterprise data processing systems
 
Unify Enterprise Data Processing System Platform Level Integration of Flink a...
Unify Enterprise Data Processing System Platform Level Integration of Flink a...Unify Enterprise Data Processing System Platform Level Integration of Flink a...
Unify Enterprise Data Processing System Platform Level Integration of Flink a...
 
collab2011-tuning-ebusiness-421966.pdf
collab2011-tuning-ebusiness-421966.pdfcollab2011-tuning-ebusiness-421966.pdf
collab2011-tuning-ebusiness-421966.pdf
 
ow.ppt
ow.pptow.ppt
ow.ppt
 
ow.ppt
ow.pptow.ppt
ow.ppt
 
Ow
OwOw
Ow
 
E business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsE business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administrators
 
E business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administratorsE business suite r12.2 changes for database administrators
E business suite r12.2 changes for database administrators
 
Remote DBA Experts 11g Features
Remote DBA Experts 11g FeaturesRemote DBA Experts 11g Features
Remote DBA Experts 11g Features
 
Plantilla oracle
Plantilla oraclePlantilla oracle
Plantilla oracle
 

Más de Monica Li

MOUG17: SQLT Utility for Tuning - Practical Examples
MOUG17: SQLT Utility for Tuning - Practical ExamplesMOUG17: SQLT Utility for Tuning - Practical Examples
MOUG17: SQLT Utility for Tuning - Practical ExamplesMonica Li
 
MOUG17: How to Build Multi-Client APEX Applications
MOUG17: How to Build Multi-Client APEX ApplicationsMOUG17: How to Build Multi-Client APEX Applications
MOUG17: How to Build Multi-Client APEX ApplicationsMonica Li
 
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PI
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PIMOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PI
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PIMonica Li
 
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEX
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEXMOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEX
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEXMonica Li
 
MOUG17: Three Types of Table Compression
MOUG17: Three Types of Table CompressionMOUG17: Three Types of Table Compression
MOUG17: Three Types of Table CompressionMonica Li
 
MOUG17 Keynote: Coding Therapy for Database Developers
MOUG17 Keynote: Coding Therapy for Database DevelopersMOUG17 Keynote: Coding Therapy for Database Developers
MOUG17 Keynote: Coding Therapy for Database DevelopersMonica Li
 
MOUG17 Keynote: What's New from Oracle Database Development
MOUG17 Keynote: What's New from Oracle Database DevelopmentMOUG17 Keynote: What's New from Oracle Database Development
MOUG17 Keynote: What's New from Oracle Database DevelopmentMonica Li
 
VIscosity-WhoWeAre
VIscosity-WhoWeAreVIscosity-WhoWeAre
VIscosity-WhoWeAreMonica Li
 

Más de Monica Li (8)

MOUG17: SQLT Utility for Tuning - Practical Examples
MOUG17: SQLT Utility for Tuning - Practical ExamplesMOUG17: SQLT Utility for Tuning - Practical Examples
MOUG17: SQLT Utility for Tuning - Practical Examples
 
MOUG17: How to Build Multi-Client APEX Applications
MOUG17: How to Build Multi-Client APEX ApplicationsMOUG17: How to Build Multi-Client APEX Applications
MOUG17: How to Build Multi-Client APEX Applications
 
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PI
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PIMOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PI
MOUG17: Visualizing Air Traffic with Oracle APEX and Raspberry PI
 
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEX
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEXMOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEX
MOUG17: Oracle APEX - Tame IT Backlog Low Code Micro Apps in APEX
 
MOUG17: Three Types of Table Compression
MOUG17: Three Types of Table CompressionMOUG17: Three Types of Table Compression
MOUG17: Three Types of Table Compression
 
MOUG17 Keynote: Coding Therapy for Database Developers
MOUG17 Keynote: Coding Therapy for Database DevelopersMOUG17 Keynote: Coding Therapy for Database Developers
MOUG17 Keynote: Coding Therapy for Database Developers
 
MOUG17 Keynote: What's New from Oracle Database Development
MOUG17 Keynote: What's New from Oracle Database DevelopmentMOUG17 Keynote: What's New from Oracle Database Development
MOUG17 Keynote: What's New from Oracle Database Development
 
VIscosity-WhoWeAre
VIscosity-WhoWeAreVIscosity-WhoWeAre
VIscosity-WhoWeAre
 

Último

Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfOverkill Security
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelDeepika Singh
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Zilliz
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Jeffrey Haguewood
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024The Digital Insurer
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxRustici Software
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 

Último (20)

Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native Applications
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 

MOUG17: DB Security; Secure your Data

produced with the 10046 trace event.
Background/Overview
7
• SQL Plan Management is a framework that Oracle Database uses to prevent performance regression of SQL statements as data evolves and other conditions change, such as index changes or the installation of database patches and upgrades.
• A SQL Plan Baseline is a set of SQL execution plans that have been used at various times for a given SQL statement.
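As a sketch of how a baseline can be seeded manually, the plans currently in the cursor cache for one statement can be loaded with DBMS_SPM (the SQL_ID value here is a placeholder, not taken from this presentation):

```sql
-- Load the cursor-cache plans for one statement into a SQL plan baseline.
-- 'abcd1234wxyz5678' is a hypothetical SQL_ID.
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234wxyz5678');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
END;
/

-- Verify what was captured.
SELECT sql_handle, plan_name, enabled, accepted
  FROM dba_sql_plan_baselines;
```

Plans loaded this way start out as accepted baseline plans, which is the state SQL Plan Management uses to decide what the optimizer is allowed to reuse.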
Background/Overview
8
• A SQL profile is a set of performance data and corrections to execution plans that have been derived by the Oracle optimizer. SQL profiles are derived by invoking the SQL Tuning Advisor to perform an in-depth analysis of the objects that a SQL statement uses, the underlying data, and the statement construct.
• A SQL plan hash value is an identifier for an execution plan that the Oracle optimizer has derived for a given SQL statement. This identifier is used by SQL Plan Management to manage plans and metadata related to those plans.
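A minimal sketch of the advisor flow that produces such a profile, using DBMS_SQLTUNE (the SQL_ID, task name and profile name are all hypothetical):

```sql
-- Create and run a tuning task for one statement.
DECLARE
  tname VARCHAR2(64);
BEGIN
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id    => 'abcd1234wxyz5678',
                                           task_name => 'DEMO_TASK');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => tname);
END;
/

-- Review the findings and recommendations.
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('DEMO_TASK') FROM dual;

-- If the advisor recommends a profile, accept it.
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'DEMO_TASK', name => 'DEMO_PROFILE');
```

SQLT's reports include the advisor output and, when applicable, the generated acceptance script, so this flow does not have to be typed by hand.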
Background/Overview
9
• All of these features are used directly or indirectly by SQLT.
• The SQLT and TRCA version numbers have been synchronized since 2012.
Background/Overview
10
• Can be used with or without licenses for the Tuning and Diagnostics Packs.
• Answer these script prompts during the install:
• "T" if you have license for Diagnostic and Tuning
• "D" if you have license only for Oracle Diagnostic
• "N" if you do not have these two licenses
• Gives the most comprehensive metadata report about your SQL.
• Derived information is presented in an impressive and useful HTML report.
• The utility is easy to use and saves the time and effort of manually collecting configuration and metadata related to a SQL statement.
Background/Overview
11
• Can be used with RDBMS 9.2 and higher.
• Be sure to download the proper version of SQLT for the database version you intend to use it with.
• The 12.1 version works with RDBMS 12.2.
• Hot off the press: SQLT v12.2.171004 is now available!
• The first 12.2 release (12.2.170914) was released 9/14/17.
• See MOS Note 215187.1 as the main SQLT information source.
• SQLT is now bundled with TFA (see Note 1513912.1).
How SQLT works
12
• SQLT works by calling STA, TRCA and its own PL/SQL packages and scripts to perform various analyses and manipulations on SQL execution plans, SQL profiles and trace files.
• It uses dbms_spm, dbms_sqltune and various dictionary queries. The code of these scripts is in <SQLT_HOME>/install/ and <SQLT_HOME>/run/.
• The tool outputs reports of its configuration findings and tuning recommendations, metadata about the relevant database objects and SQL statements, and scripts to implement configuration changes and SQL profiles.
How SQLT works
13
• In performing your own SQL performance troubleshooting (now called SQL Plan Management), have you found yourself hunting down information related to the statement by executing many dictionary queries, poking around OEM, deriving DDL for relevant objects, hunting down execution plans, etc.?
• That takes a LOT of time and effort!
• OEM is great, the tuning advisors are great, and ASH reports and the awrsqrpt.sql report are great and should be used, but …
How SQLT works
14
• SQLT supplements all of those and saves you all of the research time that you used to spend when investigating SQL performance.
• SQLT gives you ALL of that information in one place – in one phenomenally detailed HTML report. It is phenomenal because it gives you everything you would have thought of researching related to the SQL statement, plus a lot of stuff you would not have thought of.
• We will examine an example of this report.
15
SQLT Installation
16
• Download the proper zip file from MOS Note 215187.1 or 1513912.1.
• Unzip the file to any appropriate working directory.
• Navigate to the <SQLT_HOME>/install/ directory, invoke SQL*Plus, connect to the database that runs the relevant SQL, and execute the sqcreate.sql script while connected as “sys as sysdba”.
• The SQLTXPLAIN schema (v12.1) requires less than 10MB of space for the installation. Allocate 50-100MB of space for this schema to allow for data creation in the schema as the utility is used.
SQLT Installation
17
• The installation will create a database user named SQLTXPLAIN and these objects in its schema:

OBJECT_TYPE                      COUNT(*)
------------------------------ ----------
PROCEDURE                               1
SEQUENCE                               12
SYNONYM                                17
TABLE PARTITION                        32
LOB                                   110
TABLE                                 215
INDEX                                 246
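A listing like the one above can be reproduced after the install with a dictionary query along these lines (run as a suitably privileged user):

```sql
-- Count the objects created in the SQLTXPLAIN schema, smallest group first.
SELECT object_type, COUNT(*)
  FROM dba_objects
 WHERE owner = 'SQLTXPLAIN'
 GROUP BY object_type
 ORDER BY COUNT(*);
```

Exact counts vary by SQLT version, so treat the numbers shown as representative of v12.1 rather than fixed.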
SQLT Installation
18
Installation prompts:
1. Connect identifier for TNS connections (optional).
2. SQLTXPLAIN database user password.
3. Default and temp tablespaces for the SQLTXPLAIN user.
4. Main application user of the SQLT utility. This user will be the one to execute the SQLT scripts. Note that additional users can be added to the configuration later.
5. Management Pack license, if any:
• "T" if you have license for Diagnostic and Tuning
• "D" if you have license only for Oracle Diagnostic
• "N" if you do not have these two licenses
• After the last prompt, the remainder of the installation should take just a few minutes to complete.
SQLT Installation
19
• The latter part of the installation steps could take much longer than a few minutes if the database has a large number of objects, such as in an Oracle EBS installation.
• At the end of the installation, review the *.log files in the <SQLT_HOME>/install/log directory.
• To enable the use of the SQLT utility by additional database users, grant the SQLT_USER_ROLE role to those users.
• This role has the system privileges ADMINISTER SQL MANAGEMENT OBJECT and ADVISOR granted to it in RDBMS 12.1 and only ADVISOR in RDBMS 12.2. The SQLTXPLAIN user has the SELECT_CATALOG_ROLE role granted to it.
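For example, to let a hypothetical application user APPUSER run the SQLT scripts, and to confirm what the role carries (the user name is a placeholder):

```sql
-- Grant the SQLT role to an additional user.
GRANT SQLT_USER_ROLE TO appuser;

-- Inspect the system privileges attached to the role.
SELECT privilege
  FROM dba_sys_privs
 WHERE grantee = 'SQLT_USER_ROLE';
```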
SQLT Installation
20
• If you wish to “start over” and just purge the entire SQLT schema and re-install it, execute <SQLT_HOME>/install/sqdrop.sql in SQL*Plus as “sys as sysdba”.
• The installation will create six database directory objects that reference the database user_dump_dest or diagnostic_dest directory (for trace file access).
• SQLT installs and uses the Oracle CoE Trace Analyzer (TRCA) utility. See MOS Note 224270.1 for a description of Trace Analyzer.
• If you wish, you can install and use TRCA separately in the same database as SQLT, although there is no need to.
21
Using SQLT
22
• Determine the appropriate mode (method) in which to run SQLT.
• Execute the proper script with the proper syntax as the database user that executes the particular SQL under normal application conditions.
• Some execution modes may take a couple of minutes or less and some may take much longer, depending on the SQL to be evaluated.
• Find the zip file, unzip it and open the *main.html file in a browser.
• Read the full report.
Using SQLT
23
• Key things to look for – Observations, STA Report, STA Script, Session Events, the subsections in the Plans section of the report, and the “hot blocks” section of the TRCANLZR report.
• Read the recommendations report and script, if they exist.
• Implement the recommendations in a test environment and evaluate the effects. Migrate recommendations to production if appropriate.
• A prudent usage scenario is to install it with all new databases.
Using SQLT
24
• Sample SQLT report discussion.
Using SQLT
25
• Test the syntax to “un-implement” (back out) the recommendations in the test environment.
• Decide whether to implement the recommendations in production.
• Prepare to “un-implement” the recommendations from production if the results are detrimental.
Using SQLT
26
XTRACT Method
• Probably the most common SQLT usage method.
• This method captures information about the particular SQL from the library cache.
• Invoke SQL*Plus from the <SQLT_HOME>/ directory, connect as the database user that would normally execute a particular SQL statement or set of statements, and execute this command:
SQL> start run/sqltxtract <SQLID>
• The sqltxtract.sql script will produce a zip file in the <SQLT_HOME>/ directory. Unzip this zip file and open the *main.html file in a browser.
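A sketch of a typical XTRACT session; the statement text and the resulting SQL_ID below are hypothetical:

```sql
-- Locate the SQL_ID of the problem statement in the library cache.
SELECT sql_id, child_number, plan_hash_value,
       SUBSTR(sql_text, 1, 60) AS sql_text
  FROM v$sql
 WHERE sql_text LIKE '%FROM orders%'      -- placeholder predicate
   AND sql_text NOT LIKE '%v$sql%';       -- exclude this query itself

-- Then, connected as the application user from <SQLT_HOME>/:
-- SQL> start run/sqltxtract abcd1234wxyz5678
```

Because XTRACT reads from the library cache, the statement must still be cached (or present in AWR) for the capture to succeed.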
Using SQLT
27
XECUTE Method
• Takes as its input a text file containing a SQL statement or STS.
• Invoke SQL*Plus from the <SQLT_HOME>/ directory, connect as the database user that would normally execute a particular SQL statement or set of statements, and execute this command:
SQL> start run/sqltxecute <full_pathname_of_sql_script>
Using SQLT
28
XECUTE Method
• The sqltxecute.sql script executes the SQL statement or statements in the STS.
• Therefore, the length of time that the sqltxecute.sql script takes is bounded by what those SQL statements take to execute.
• The product of executing the script is a zip file produced in the <SQLT_HOME>/ directory. To use this file, unzip it in any directory and open the *main.html file.
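For instance, a minimal input script might look like the following; the file path, table, bind names and values are all illustrative, and the bind-definition style shown is plain SQL*Plus rather than anything SQLT-specific:

```sql
-- /tmp/problem_query.sql : the statement to be analyzed, with its bind
-- defined so the script can be executed. All names and values are placeholders.
VAR cust_id NUMBER
EXEC :cust_id := 12345;

SELECT o.order_id, o.order_date, o.total
  FROM orders o
 WHERE o.customer_id = :cust_id;
```

It would then be run from <SQLT_HOME>/ with: SQL> start run/sqltxecute /tmp/problem_query.sql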
Using SQLT
29
XTRXEC Method
• The XTRXEC method is a hybrid of the XTRACT and XECUTE methods. It performs its analysis on a currently running or recently run SQL statement and then produces a script that is used to execute the given statement.
• The literal values substituted for bind variables in the statement are those derived from bind peeking at the time of executing the statement with the most expensive plans derived by the optimizer for the statement.
  • 30. Using SQLT 30 XTRXEC Method • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltxtrxec <SQLID> where <SQLID> is the SQL ID of a currently running or recently run SQL statement. To find this SQL ID, use Enterprise Manager's Top Activity page or query the v$sql dynamic performance view.
  • 31. Using SQLT 31 XPLAIN Method • The sqltxplain.sql script runs EXPLAIN PLAN on the SQL statement or statements in the STS. • The sqltxplain.sql script does not execute the statement. • It is recommended that the XTRACT, XECUTE or XTRXEC methods be used before considering the XPLAIN method because they are more accurate: XPLAIN cannot utilize bind peeking.
  • 32. Using SQLT 32 XPLAIN Method • The XPLAIN script will be least accurate if the SQL statement has bind variables; if it does not, it should be fairly accurate. • Consider optimizer effects from adaptive cursor sharing and the cursor_sharing parameter! • Executing the script produces a zip file in the <SQLT_HOME>/ directory. To use this file, unzip it in any directory and open the *main.html file.
  • 33. Using SQLT 33 COMPARE Method • The COMPARE method uses the data in the SQLTXPLAIN database schema from two different databases in which a given SQL statement or STS has inexplicably produced different execution plans and the goal is to find out why. • The sqltcompare.sql script takes two SQLID values as input and assumes that the SQLTXPLAIN schema data has been exported from one of the databases and imported into the other.
  • 34. Using SQLT 34 COMPARE Method • Alternatively, a third database could be used into which the SQLTXPLAIN data from both databases has been imported. • The output file is similar to the other SQLT methods and should explain why the execution plans used by each database are different. • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltcompare <SQLID1> <SQLID2>
  • 35. Using SQLT 35 TRCAXTR Method • The TRCAXTR method extends the functionality of the TRCANLZR method by first executing the TRCANLZR method on the given SQL trace file and then calling the XTRACT method on the top SQL that is found in the file. • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltrcaxtr <full_pathname_of_trace_file>
  • 36. Using SQLT 36 TRCASET Method • The TRCASET method is automatically executed by the TRCAXTR method. • In standalone execution mode, this method allows the user to select particular SQLID’s upon which prior analyses have been performed. • The user picks one prior analysis to have the XTRACT method performed on.
  • 37. Using SQLT 37 TRCASET Method • This method can be executed in standalone mode by navigating to the <SQLT_HOME>/ directory, invoking SQL*Plus, connecting to the main application user that executes the SQL in question, and executing this command: SQL> start run/sqltrcaset [list of SQLIDs and PHVs]
  • 38. Using SQLT 38 TRCANLZR Method • This method invokes the Trace Analyzer (TRCA) utility. • This method removes the need to separately download and install the TRCA utility, because SQLT bundles the same utility.
  • 39. Using SQLT 39 TRCANLZR Method • Syntax: Invoke SQL*Plus from the <SQLT_HOME>/ directory and connect as the database user that would normally execute a particular SQL statement or set of statements and execute this command: SQL> start run/sqltrcanlzr <full pathname of an existing trace file generated with event 10046> • The sqltrcanlzr.sql script will produce a zip file in the <SQLT_HOME>/ directory. Unzip this zip file and open the *.html file in a browser.
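To produce the event 10046 trace file that TRCANLZR consumes, a common sketch is the following (the tracefile identifier is an arbitrary label):

```sql
-- Enable extended SQL trace: level 12 captures binds and waits
ALTER SESSION SET tracefile_identifier = 'sqlt_demo';
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the SQL of interest here ...

ALTER SESSION SET events '10046 trace name context off';

-- Locate the trace file to pass to sqltrcanlzr
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
```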
  • 40. Using SQLT 40 TRCASPLIT Method • The TRCASPLIT method takes a SQL trace file that was produced with events 10046 and 10053 and splits the trace file into two separate trace files – one for the 10046 data and one for the 10053 data. • The two separate trace files produced could then be separately used with the TRCANLZR method, if desired.
  • 41. Using SQLT 41 TRCASPLIT Method • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltrcasplit <full_pathname_of_trace_file>
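A combined 10046/10053 trace of the kind TRCASPLIT expects is typically produced by enabling both events in the same session before running the statement, for example:

```sql
-- 10046 level 12: binds and waits; 10053: optimizer (CBO) trace
ALTER SESSION SET events '10046 trace name context forever, level 12';
ALTER SESSION SET events '10053 trace name context forever, level 1';

-- ... run the SQL of interest here ...

ALTER SESSION SET events '10053 trace name context off';
ALTER SESSION SET events '10046 trace name context off';
```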
  • 42. Using SQLT 42 XTRSET Method • The XTRSET method is an extension of the XTRACT method in which a set of SQL IDs is passed to the script; the script derives the data about these SQL IDs from the library cache and/or the AWR and combines the output of all of them into one report. • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltxtrset <SQLID1> <SQLID2> … <SQLIDn>
  • 43. Using SQLT 43 PROFILE Method • The PROFILE method requires that you first execute the XTRACT or XECUTE method on a particular SQLID. • The script accepts a SQLID as input. When the sqltprofile script is executed, it will prompt for a plan hash value from a list of plan hash values captured from previous executions of XTRACT or XECUTE on the indicated SQLID.
  • 44. Using SQLT 44 PROFILE Method • The user can choose a particular one of these plan hash values and the sqltprofile.sql script will output a script to install that plan hash value as the one to be used for that SQLID by use of a SQL profile. • To execute this method, navigate to the <SQLT_HOME>/ directory, invoke SQL*Plus and connect to the main application user that executes the SQL in question, and execute this command: SQL> start run/sqltprofile <SQLID>
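After accepting the profile, it can be verified, and backed out if the results are detrimental. The profile name below is a placeholder (SQLT generates names along the lines of sqlt_s<statement_id>_p<plan_hash>):

```sql
-- Verify that the profile exists and is enabled
SELECT name, status, created
  FROM dba_sql_profiles
 WHERE name LIKE 'sqlt%';

-- Back out the profile if needed (hypothetical name)
EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'sqlt_s12345_p1292775157');
```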
  • 45. Using SQLT 45 XTRSBY Method • Like XTRACT, XECUTE, XTRXEC, but will analyze the statement in the instance of a read-only standby database (Active Data Guard). • Invoke the methods while connected to the read-write database. • Create a database link to the standby database. SQL> start run/sqltxtrsby <SQLID> <dblink>
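A sketch of the link setup for XTRSBY (the link name, credentials, TNS alias, and SQL ID are placeholders):

```sql
-- On the primary (read-write) database, create a link that XTRSBY
-- will use to read the standby's dynamic performance views
CREATE DATABASE LINK stby_link
  CONNECT TO sqltxplain IDENTIFIED BY "secret"
  USING 'standby_tns_alias';

SQL> start run/sqltxtrsby 0w2qpuc6u2zsp stby_link
```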
  • 46. Using SQLT 46 XGRAM Method • The XGRAM method uses several scripts in the <SQLT_HOME>/utl/xgram/ directory. • These scripts can be used to modify stored optimizer histograms related to particular SQL. • Before using them, read all of the scripts in the <SQLT_HOME>/utl/xgram/ directory to determine which is appropriate to execute in a given case.
  • 47. Using SQLT 47 XGRAM Method • The <SQLT_HOME>/utl/xgram/sqlt_display_column_stats.sql script is a handy script to execute. • See sample output on the next two slides.
  • 48. 48 SQL> @sqlt_display_column_stats.sql Table Owner [SOE]: SOE Table Name [UNKNOWN]: ORDERS Partition Name [NULL]: NULL Column Name [UNKNOWN]: ORDER_ID sqlt_display_column_stats_20170721031245.log p_ownname : 'SOE' p_tabname : 'ORDERS' p_partname: '' p_colname : 'ORDER_ID' Using SQLT
  • 49. 49 SELECT * FROM TABLE(SQLTXADMIN.sqlt$s.display_column_stats('SOE', 'ORDERS', 'ORDER_ID', '')); ownname:SOE tabname:ORDERS colname:ORDER_ID partname:<partname> distcnt:1432690 density:6.979877e-07 nullcnt:0 avgclen:6 type:"NUMBER" hgrm:"NONE" buckets:1 analyzed:"2017/07/20 03:09:46" sample:1432690 global:"YES" user:"NO" rows:1432690 minval:"1"(C102) maxval:"1432691"(C4022C1B5C) epc:2 ep:1 bkvals:0 novals:1 value:"1" size:0 ep:2 bkvals:1 novals:1432691 value:"1432691" size:1 SQLT_DISPLAY_COLUMN_STATS completed. Using SQLT
  • 50. Using SQLT 50 XPLORE Method • The conditions to consider using the XPLORE method are rare. • The scripts to use this method are in the <SQLT_HOME>/utl/xplore/ directory. • Read the readme.txt file in that directory before doing anything else with this method.
  • 51. Using SQLT 51 XPLORE Method • This method is appropriate to use in rare cases in which the optimizer is producing invalid results following an upgrade. • The scripts used by this method will “explore” variations in the optimizer_features_enabled and fix parameters that may have led to the problematic behavior. • You must install this script separately using the <SQLT_HOME>/utl/xplore/install.sql script. Read the README.txt file for it first.
  • 53. Using SQLT 53 XPLORE Method • This method queries the v$system_fix_control view (1,301 rows on a new, empty RDBMS 12.2 database). • It examines execution plans derived using various combinations of the entries found in v$system_fix_control. If a plan is found with a lower cost than one belonging to that SQL ID's baseline, the value corresponding to that entry (bug) in v$system_fix_control will be displayed.
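The view itself can also be inspected directly; for example (the description filter is arbitrary):

```sql
-- Each row maps a bug fix to an on/off (or numeric) value and the
-- optimizer version that introduced it
SELECT bugno, value, optimizer_feature_enable, description
  FROM v$system_fix_control
 WHERE description LIKE '%cardinality%';
```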
  • 54. Using SQLT 54 XPLORE Method • You can then test executing that sql_id with that fix control setting, as follows: Select /*+ opt_param('_fix_control','bugno:value') */ … • You can also look up that bug number on MOS for anything useful related to it.
  • 56. SQLT Maintenance and Recommendations 56 • Remove old, orphaned SQL profiles. • Delete old zip files produced by SQLT executions. • Space consumption by the SQLTXPLAIN database schema. Purge it occasionally. • To add additional database users to the configuration so that you can run SQLT as those users, grant the role SQLT_USER_ROLE to those users. • Upgrading SQLT – instructions in the sqlt_instructions.html file (in zip distribution bundle).
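Two of these housekeeping items can be sketched in SQL*Plus (the grantee name is a placeholder):

```sql
-- Allow an additional database user to run the SQLT scripts
GRANT SQLT_USER_ROLE TO app_user;

-- List SQLT-created profiles so old, orphaned ones can be identified
SELECT name, created, status
  FROM dba_sql_profiles
 WHERE name LIKE 'sqlt%'
 ORDER BY created;
```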
  • 57. SQLT Maintenance and Recommendations 57 • If you wish to “start over” and just purge the entire SQLT schema and re-install it, connect as “sys as sysdba” and execute the <SQLT_HOME>/install/sqdrop.sql script. • Be patient when installing and running the utility on databases that have a *lot* of objects (Oracle EBS, BAAN, etc.). The utility runs queries against dictionary views like all_objects. • You will get the best results from SQLTXTRACT and SQLTXECUTE if you execute these scripts as soon as possible after the SQL of interest has started executing, if not at the moment it starts.
  • 58. SQLT Maintenance and Recommendations 58 • Optimizer statistics on the SQLTXPLAIN schema are locked after the SQLT installation. To gather stats on them, you must first unlock them with dbms_stats.unlock_schema_stats(ownname=>'SQLTXPLAIN'); • If you have a database with a *lot* of extents (a lot of rows in sys.fet$), the installation may take a long time. Be patient with the install; subsequent executions of the SQLT scripts should then not suffer the performance issues caused by bugs in some database versions, such as bugs 2948717, 5029334 and 5259025. Some MOS notes, like 422730.1, suggest gathering stats on sys.x$ktfbue; otherwise, upgrading/patching may be required to resolve these.
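The unlock/gather sequence from the bullet above, sketched end to end:

```sql
-- Unlock, gather, and (optionally) re-lock SQLTXPLAIN schema stats
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS(ownname => 'SQLTXPLAIN');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SQLTXPLAIN');
EXEC DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'SQLTXPLAIN');
```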
  • 59. SQLT Maintenance and Recommendations 59 • Due to bugs like these, it can help, for example, to create and schedule a batch script that drops and re-installs SQLT periodically so that tables like SQLTXPLAIN.TRCA$_EXTENTS contain fresh data. • It is highly prudent to run frequently used, high-importance SQL through SQLT on a regular basis and review the findings and recommendations. This can be scripted and scheduled (e.g., cron SQLT to run during or after important batch runs).
  • 61. SQL Health Check (SQLHC) 61 • Your boss: “Yeah, that SQLT thing is great, but we have these restrictions, you see, on installing things in production and doing this kind of SQL testing and research in production.” • You: “Point well taken. Behold, I give you SQLHC.” • SQLHC implements a subset of the functionality of SQLT and gives you a nice html report of its findings. • SQLHC does not execute anything. It simply queries what is there in the dictionary and performance views pertaining to a sqlid and gives you a report.
  • 62. SQL Health Check (SQLHC) 62 • You don’t have to install anything to use SQLHC. Simply download the script from Oracle Support Note 1366133.1 and execute it (yes, in production). It is non-intrusive. • Discuss sample report.
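Per MOS note 1366133.1, the script takes a license flag (T = Tuning Pack, D = Diagnostics Pack, N = none) and a SQL ID; the SQL ID below is a placeholder:

```sql
-- Run SQLHC against one SQL ID; it writes a zip of HTML reports
SQL> @sqlhc.sql T 0w2qpuc6u2zsp
```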
  • 64. TFA 64 • See Oracle Support Note 1513912.1. • Oracle has created a bundle of tuning tools that is likely to keep evolving as a distribution: • SQLT • TFA • oswatcher • procwatcher • oratop • Others
  • 65. SQLd360 65 • What is it? Who created it? • A tool similar to SQLT created by the creator of SQLT. • Comparison to SQLT: • No installation required. • Runs faster, but *may* be less comprehensive. • How to install and use it. • Download script(s) and execute them.
  • 66. eAdam 66 • What is it? Who created it? • A tool similar to SQLT created by the creator of SQLT. • Comparison to SQLT: • Same direction of evolution as SQLd360, but eAdam is intended to be an “AWR repository” of AWR data from potentially multiple databases. • How to install and use it. • Download scripts and execute.
  • 68. References 68 My Oracle Support Notes • 215187.1 – SQLT main note. • 1454160.1 – SQLT FAQ. • 1513912.1 – TFA. • 224270.1 – TRCANLZR – Tool for interpreting raw SQL traces. • 1229904.1 – Real time SQL monitoring in 11g. • 1401111.1 – SQLT webcast. • 199083.1 – SQL query performance overview. • 376442.1 – How to collect 10046 trace diagnostics for performance issues. • 398838.1 – FAQ: SQL query performance.
  • 69. References 69 My Oracle Support Notes • 276103.1 – Performance tuning using Advisors and manageability features • 1366133.1 – SQL Tuning Health Check Script (SQLHC) • 1417774.1 – FAQ: SQL Health Check (SQLHC) Frequently Asked Questions
  • 70. Thank You! Kevin Gilpin, Advizex kgilpin@advizex.com