T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India)
E : info@rittmanmead.com
W : www.rittmanmead.com
Rittman Mead BI Forum 2015 Masterclass
Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Architecture
Part 1
Designing the Data Reservoir & Data Factory
The Oracle IM + Big Data Reference Architecture
[Diagram: Event Engine, Data Reservoir, Data Factory, Enterprise Information Store, Reporting and Discovery Lab components, linked by input events, actionable events / information / insights, and by structured enterprise data and other data flowing across the Execution, Innovation and Discovery areas]
The Next-Gen BI Environment from this Architecture
•Traditional RDBMS DW now complemented by a Hadoop/NoSQL-based data reservoir
•“Data Factory” term used for ETL and loading processes that provide conduit between them
•Some data may be loaded into the data reservoir and only exist there
•Some will be further processed and loaded into the DW (“Enterprise Information Store”)
•Some may get loaded directly into the RDBMS
•Use best option to support business needs
Introducing … The “Data Reservoir”?
•A reservoir is a lake that can also process and refine (your data)
•Wide-ranging source of low-density, lower-value data to complement the DW
Today’s Layered Data Warehouse Architecture
[Diagram: layered architecture with a Raw Data Reservoir, Foundation Data Layer and Access & Performance Layer, fed by Data Ingestion from structured data sources (operational data, COTS data, master & reference data, streaming & BAM) and from data engines & poly-structured sources (content, docs, web & social media, SMS); surfaced through Information Interpretation, Information Services, pre-built & ad-hoc BI assets, virtualization & query federation, enterprise performance management and data science tools]
•Raw Data Reservoir : immutable raw data; raw data at rest is not interpreted
•Foundation Data Layer : immutable modelled data in a business-process-neutral form, abstracted from business process changes
•Access & Performance Layer : past, current and future interpretations of enterprise data, structured to support agile access & navigation
•Discovery Lab Sandboxes : project-based data stores to support specific discovery objectives
•Rapid Development Sandboxes : project-based data stores to facilitate rapid content / presentation delivery
Combining Oracle RDBMS with Hadoop + NoSQL
•High-value, high-density data goes into Oracle RDBMS
•Better support for fast queries, summaries, referential integrity etc
•Lower-value, lower-density data goes into Hadoop + NoSQL
‣Also provides flexible schema, more agile development
•Successful next-generation BI+DW projects combine both - neither on their own is sufficient
Options for Implementing a Data Reservoir
•Can add a Hadoop cluster, on commodity/existing server hardware, and link to Oracle DB
‣Use ODI etc for data transfer between Hadoop + Oracle
•Can implement using VMs etc for prototyping exercise
‣But beware of shared/virtualized storage for real production usage
•Approach taken by most of our “starter” customers, and by us in development
Oracle’s Engineered System Data Reservoir Platform
Hadoop Distribution Options
•Cloudera CDH
‣Used in Oracle Big Data Appliance, typically first to be supported with ODI etc
•Hortonworks HDP
‣Usually second to be supported; supports Tez, but late with Spark etc
•MapR
‣Some prefer this, but rarely certified with Oracle products
•Pivotal / ODP
‣Sometimes found in use with banks etc, but also rarely certified
•..etc
Oracle’s Big Data Products
•Oracle Big Data Appliance
‣Optimized hardware for Hadoop processing
‣Cloudera Distribution incl. Hadoop
‣Oracle Big Data Connectors, ODI etc
•Oracle Big Data Connectors
•Oracle Big Data SQL
•Oracle NoSQL Database
•Oracle Data Integrator
•Oracle R Distribution
•OBIEE, BI Publisher and Endeca Info Discovery
Oracle Big Data Appliance
•Engineered system for big data processing and analysis
•Optimized for enterprise Hadoop workloads
•288 Intel® Xeon® E5 Processors
•1152 GB total memory
•648TB total raw storage capacity
‣Cloudera Distribution of Hadoop
‣Cloudera Manager
‣Open-source R
‣Oracle NoSQL Database Community Edition
‣Oracle Enterprise Linux + Oracle JVM
‣New - Oracle Big Data SQL
Working with Oracle Big Data Appliance
•Don’t underestimate the value of “pre-integrated” - massive time-saver for client
‣No need to integrate Big Data Connectors, ODI Agent etc with HDFS, Hive etc etc
•Single support route - raise SR with Oracle, they will route to Cloudera if needed
•Single patch process for whole cluster - OS, CDH etc etc
•Full access to Cloudera Enterprise features
•Otherwise … just another CDH cluster in terms of SSH access etc
•We like it ;-)
Working with Cloudera Hadoop (CDH) - Observations
•Very good product stack, enterprise-friendly, big community, can do lots with free edition
•Cloudera have their favoured Hadoop technologies - Spark, Kafka
•Also makes use of Cloudera-specific tools - Impala, Cloudera Manager etc
•But ignores some tools that have value - Apache Tez for example
•Easy for an Oracle developer to get productive with the CDH stack
•But beware of some immature technologies / products
‣Hive != Oracle SQL
‣Spark is very much an “alpha” product
‣Limitations in things like LDAP integration, end-to-end security
‣Lots of products in stack = lots of places to go to diagnose issues
CDH : Things That Work Well
•HDFS as a low-cost, flexible data store / reservoir; Hive for SQL access to structured + semi-structured HDFS data
•Pig, Spark, Python, R for data analysis and munging
•Cloudera Manager and Hue for web-based admin + dev access
[Diagram: real-time logs / events, RDBMS imports and file / unstructured imports landing in the HDFS cluster filesystem, catalogued through the Hive Metastore / HCatalog]
Oracle Big Data Connectors
•Oracle-licensed utilities to connect Hadoop to Oracle RDBMS
‣Bulk-extract data from Hadoop to Oracle, or expose HDFS / Hive data as external tables
‣Run R analysis and processing on Hadoop
‣Leverage Hadoop compute resources to offload ETL and other work from Oracle RDBMS
‣Enable Oracle SQL to access and load Hadoop data
Working with the Oracle Big Data Connectors
•Oracle Loader for Hadoop, Oracle SQL Connector for HDFS - rarely used
‣Sqoop works both ways (Oracle>Hadoop, Hadoop>Oracle) and is “good enough”
‣OSCH replaced by Oracle Big Data SQL for direct Oracle>Hive access
•Oracle R Advanced Analytics for Hadoop has been very useful though
‣Run MapReduce jobs from R
‣Run R functions across Hive tables
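•For illustration, a minimal Sqoop invocation of the kind referred to above - a sketch only, with hostname, schema and table names invented for the example:
# import an Oracle reference table into Hive
sqoop import --connect jdbc:oracle:thin:@//ora-db.example.com:1521/ORCL \
  --username BLOG_REFDATA --password welcome1 \
  --table POST_LOOKUP \
  --hive-import --hive-table default.post_lookup --num-mappers 4
# and back the other way, exporting a Hive-produced HDFS directory into Oracle
sqoop export --connect jdbc:oracle:thin:@//ora-db.example.com:1521/ORCL \
  --username BLOG_ANALYTICS --password welcome1 \
  --table PAGE_HITS --export-dir /user/hive/warehouse/page_hits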
Oracle R Advanced Analytics for Hadoop Key Features
•Run R functions on Hive dataframes
•Write MapReduce functions in R
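•As a rough sketch of those two features (assuming ORAAH / ORCH is installed and configured; path and column names are illustrative, and exact function usage should be checked against the ORAAH documentation):
library(ORCH)                      # ORAAH client packages
ore.connect(type="HIVE")           # Hive "transparency layer" connection
ore.sync()                         # expose Hive tables as ore.frame objects in R
# write a MapReduce job in R : count log entries per HTTP status code
input <- hdfs.attach("/user/oracle/rm_logs_parsed")
res <- hadoop.run(input,
        mapper  = function(key, val) { orch.keyval(val$status, 1) },
        reducer = function(key, vals) { orch.keyval(key, sum(unlist(vals))) })
hdfs.get(res)                      # pull the result back into the R session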
Oracle Big Data SQL
•Part of Oracle Big Data Appliance 4.0 (BDA-only)
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends Oracle Data Dictionary to cover Hive
•Extends Oracle SQL and SmartScan to Hadoop
•Extends Oracle Security Model over Hadoop
‣Fine-grained access control
‣Data redaction, data masking
‣Uses fast C-based readers where possible (vs. Hive MapReduce generation)
‣Map Hadoop parallelism to Oracle PQ
‣Big Data SQL engine works on top of YARN
‣Like Spark, Tez, MR2
[Diagram: SQL queries span the Exadata Database Server, Exadata Storage Servers and the Hadoop cluster, with Oracle Big Data SQL applying SmartScan on both storage tiers]
Still a Key Role for Data Integration, and BI Tools
•Fast, scalable, low-cost / flexible-schema data capture using Hadoop + NoSQL (BDA)
•Long-term storage of the most important downstream data - Oracle RDBMS (Exadata)
•Fast analysis + business-friendly interface : OBIEE, Endeca (Exalytics), RTD etc
Productising the Next-Generation IM Architecture
OBIEE for Enterprise Analysis Across all Data Sources
•Dashboards, analyses, OLAP analytics, scorecards, published reporting, mobile
•Presented as an integrated business semantic model
•Optional mid-tier query acceleration using Oracle Exalytics In-Memory Machine
•Access data from RDBMS, applications, Hadoop, OLAP, ADF BCs etc
[Diagram: application, Hadoop / NoSQL and DW / OLAP sources feeding an enterprise semantic business model, with an in-memory caching layer beneath the business presentation layer (reports, dashboards)]
Adding Search / Discovery Tools
•For searching and cataloging data in the data reservoir
•Typically use concepts of faceted search, and reading from Hive metastore
•Options include Elasticsearch, Cloudera Search / Hue, Oracle Big Data Discovery
Bringing it All Together : Oracle Data Integrator 12c
•ODI provides an excellent framework for running Hadoop ETL jobs
‣ELT approach pushes transformations down to Hadoop - leveraging power of cluster
•Hive, HBase, Sqoop and OLH/ODCH KMs provide native Hadoop loading / transformation
‣Whilst still preserving RDBMS push-down
‣Extensible to cover Pig, Spark etc
•Process orchestration
•Data quality / error handling
•Metadata and model-driven
•New in 12.1.3.0.1 - ability to generate Pig and Spark jobs too
How This Differs from the Discovery Lab
•We’re still loading and storing into Hadoop and NoSQL, but…
‣There’s governance and change control
‣Data is secured
‣Data loading and pipelines are resilient and “industrialized”
‣We use ETL tools, BI tools and search tools to enable access by end-users
‣We think about design standards, file and directory layouts, metadata etc
•Build on insights and models created in the Discovery Lab
•Put them into production so the business can rely on them
Part 2
Building the Data Reservoir & Data Factory
Typical RM Project BDA Topology
•Starter BDA rack, or full rack
•Kerberos-secured using included KDC server
•Integration with corporate LDAP for Cloudera Manager, Hue etc
•Developer access through Hue, Beeline, R Studio
•End-user access through OBIEE, Endeca and other tools
‣With final datasets usually exported to Exadata or Exalytics
Typical RM Hadoop + BDD Development Environment
•Development takes place on workstations, not directly on Hadoop / BDA nodes
•ODI agent needs to be installed on a Hadoop node, or just use the Oozie scheduler
•BDD typically runs on dedicated servers, can also be clustered
•CDH5.3 is a good place to start in terms of compatibility, being supported etc
•Can usually use CDH Express, but the full version can be trialled for 60 days
‣Useful for Cloudera Navigator, testing LDAP integration with CM
Components Required for Typical Production Environment
•Hadoop cluster - typically 6-20 nodes, CDH or Hortonworks HDP with YARN / Hadoop 2.0
‣Can deploy on-premise, or in cloud (AWS etc) using Cloudera Director
•Oracle Database, ideally Exadata for Big Data SQL capabilities
•ODI12c 12.1.3.0.1 with Big Data Options (additional license required over ODI EE)
•Oracle Big Data Discovery
‣Currently only certified on CDH5.3, no Kerberos support yet
•Oracle Business Intelligence 11g
‣Limited Hive compatibility with 11.1.1.7; 11.1.1.9 promises HiveServer2 + Impala support
Complete Oracle Big Data Product Stack
Typical Configuration Tasks Post-Install
•Configure BDA directory structure, user access, LDAP integration etc
•Connect ODI12c 12.1.3.0.1 to Hive, HDFS, Pig and Spark on Hadoop cluster
•Connect OBIEE11g to Hive (and Impala)
•Set up a developer workstation with client libraries, ODI Studio, OBIEE BI Administrator etc
Configuring Hadoop (BDA) for LDAP Integration
•Both Cloudera Manager (with CDH Enterprise) and Hue can be linked to corporate LDAP
•Hive, Impala etc also need to be configured if you want to use Apache Sentry
Configure HDFS Directory Structure, Permissions
•Best practice is to create application-specific HDFS directories for shared data
•Separate ETL out from archiving, store data in subdirectory partitions
•Use POSIX security model to grant RO access to groups of users
•Consider using new HDFS ACLs where appropriate (beware memory implications though)
/user/mrittman/scratchpad
/user/ryeardley/scratchpad
/user/mpatel/scratchpad
/data/rm_website_analysis/logfiles/incoming
/data/rm_website_analysis/logfiles/archive
/data/rm_website_analysis/tweets/incoming
/data/rm_website_analysis/tweets/archive
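•For example, the directories above might be created and secured like this (a sketch - the etl_svc owner and analysts / marketing groups are invented for the example):
# create the shared application directories
hadoop fs -mkdir -p /data/rm_website_analysis/logfiles/incoming
hadoop fs -mkdir -p /data/rm_website_analysis/logfiles/archive
# POSIX-style ownership, with read-only access for the owning group
hadoop fs -chown -R etl_svc:analysts /data/rm_website_analysis
hadoop fs -chmod -R 750 /data/rm_website_analysis
# optional HDFS ACL to grant a second group read access (watch NameNode memory usage)
hdfs dfs -setfacl -R -m group:marketing:r-x /data/rm_website_analysis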
Consider Access Control to Hive, Impala Tables
•Usual access control strategy is to limit users to accessing data through Hive tables
•Consider using Apache Sentry to provide RBAC over Hive and Impala tables - see the grants sketch below
‣Column-based restrictions possible through SQL views
‣Requires Kerberos authentication and Hive/Impala LDAP integration as prerequisites
•Oracle Big Data SQL potentially a more complete solution, if available
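•A sketch of the corresponding Sentry grants, run through Beeline as a Sentry admin (role, group, table and view names are illustrative):
CREATE ROLE blog_analyst;
GRANT ROLE blog_analyst TO GROUP analysts;
-- column-level restriction : grant access to a view rather than the base table
CREATE VIEW pageviews_public AS SELECT request, status FROM pageviews;
GRANT SELECT ON TABLE default.pageviews_public TO ROLE blog_analyst;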
Configuring ODI12c 12.1.3.0.1 for Hadoop Data Integration
•New Hadoop DS technology used for registering base cluster details
•New WebLogic Hive drivers used for Hive table access
•Pig and Spark datasources configured for Pig Latin / Spark execution
•Either client workstation needs to be configured as a Hadoop client, or ODI agent installed on a Hadoop node
‣To execute Pig, Hive etc mappings
•Option now to use Oozie scheduler rather than ODI agent
‣Avoids need to install ODI agent on cluster
‣Integrates ODI workflows with other Hadoop scheduling
Configuring OBIEE for Cloudera Impala Access
•Not officially supported with OBIEE 11.1.1.7, but does work
•Only possible using Windows version of OBIEE (looser rules around unsupported drivers)
•OBIEE 11.1.1.9 will come with Impala support
•Use Cloudera ODBC drivers
•Configure Database Type as Apache Hadoop
•For earlier versions of Impala, may need to disable ORDER BY in Database Features, and have the BI Server do the sorting
•Issue is that earlier versions of Impala require LIMIT with all ORDER BY clauses
‣OBIEE could use LIMIT, but doesn’t for Impala at the moment (because not supported)
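•To illustrate the restriction (table name invented) - on affected Impala versions the first statement errors, the second runs:
SELECT request_page, COUNT(*) AS hits FROM pageviews GROUP BY request_page ORDER BY hits DESC;
SELECT request_page, COUNT(*) AS hits FROM pageviews GROUP BY request_page ORDER BY hits DESC LIMIT 100;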
Configuring OBIEE to Access a Kerberos-Secured Cluster
•Most production Hadoop clusters are Kerberos-secured
•OBIEE can access secured clusters with appropriate ODBC drivers
•Typically install Kerberos client on the Windows workstation, and on the server side
•If OBIEE runs using a system service account, ensure it can request a ticket too
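•For example (principal and keytab names are illustrative):
kinit -k -t /etc/security/keytabs/obiee.keytab obiee_svc@CORP.EXAMPLE.COM
klist   # confirm the ticket cache the ODBC driver will pick up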
Configuring Oracle Big Data Discovery
•Configuration done during BDD installation, tied to a particular Hadoop cluster
•Specify Cloudera Manager + Hadoop service URLs
•May need to adjust RAM allocated to Spark Workers in Cloudera Manager
‣Currently only Spark Standalone (not YARN) supported
End-to-End Oracle Big Data Example
•Rittman Mead want to understand drivers and audience for their website
‣What is our most popular content? Who are the most in-demand blog authors?
‣Who are the influencers? What do they read?
•Three data sources in scope:
1. RM website logs
2. Twitter stream
3. Website posts, comments etc
Two Analysis Scenarios : Reporting, and Data Discovery
•Initial task will be to ingest data from webserver logs, Twitter firehose, site content + ref data
•Land in Hadoop cluster, basic transform, format, store; then, analyse the data:
‣Reporting : combine with Oracle Big Data SQL for structured OBIEE dashboard analysis - What pages are people visiting? Who is referring to us on Twitter? What content has the most reach?
‣Data discovery : combine with site content, semantics and text enrichment; catalog and explore using Oracle Big Data Discovery - Why is some content more popular? Does sentiment affect viewership? What content is popular, and where?
Data Sources used for ETL Ingestion & Reporting Exercise
[Diagram: ingest / process / publish flow - Flume feeds logs and tweets into a Cloudera CDH5.3 BDA Hadoop cluster (HDFS, Hive, Spark); dimension attributes and filtered & projected rows / columns pass via Big Data SQL (with SQL offloaded for BDA execution) to Exadata, with OBIEE and TimesTen / 12c In-Memory on Exalytics for publishing]
Apache Flume : Distributed Transport for Log Activity
•Apache Flume is the standard way to transport log files from source through to target
•Initial use-case was webserver log files, but can transport any file from A>B
•Does not do data transformation, but can send to multiple targets / target types
•Mechanisms and checks to ensure successful transport of entries
•Has a concept of “agents”, “sinks” and “channels”
•Agents collect and forward log data
•Sinks store it in final destination
•Channels store log data en-route
•Simple configuration through INI files
•Handled outside of ODI12c
Flume Source / Target Configuration
•Conf file for source system agent
•TCP port, channel size+type, source type
•Conf settings for target agent, through CM
•TCP port, channel size+type, sink type
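•As a sketch of what those two conf files look like (agent names, host and port are invented; the HDFS path matches the Hive table location used later):
# source-side agent : tail the Apache access log, forward over Avro
source_agent.sources  = apache_log
source_agent.channels = memchan
source_agent.sinks    = avro_fwd
source_agent.sources.apache_log.type     = exec
source_agent.sources.apache_log.command  = tail -F /var/log/httpd/access_log
source_agent.sources.apache_log.channels = memchan
source_agent.channels.memchan.type     = memory
source_agent.channels.memchan.capacity = 10000
source_agent.sinks.avro_fwd.type     = avro
source_agent.sinks.avro_fwd.hostname = bda-node1.rittmandev.com
source_agent.sinks.avro_fwd.port     = 4545
source_agent.sinks.avro_fwd.channel  = memchan
# target-side (BDA) agent : receive Avro events, land them in HDFS
target_agent.sources  = avro_in
target_agent.channels = memchan
target_agent.sinks    = hdfs_out
target_agent.sources.avro_in.type     = avro
target_agent.sources.avro_in.bind     = 0.0.0.0
target_agent.sources.avro_in.port     = 4545
target_agent.sources.avro_in.channels = memchan
target_agent.channels.memchan.type     = memory
target_agent.channels.memchan.capacity = 10000
target_agent.sinks.hdfs_out.type          = hdfs
target_agent.sinks.hdfs_out.hdfs.path     = /user/flume/rm_website_logs
target_agent.sinks.hdfs_out.hdfs.fileType = DataStream
target_agent.sinks.hdfs_out.channel       = memchan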
Also - Apache Kafka : Reliable, Message-Based
•Developed by LinkedIn, designed to address Flume issues around reliability, throughput
‣(though many of those issues have been addressed since)
•Designed for persistent messages as the common use case
‣Website messages, events etc vs. log file entries
•Consumer (pull) rather than Producer (push) model
•Supports multiple consumers per message queue
•More complex to set up than Flume, and can use Flume as a consumer of messages
‣But gaining popularity, especially alongside Spark Streaming
Starting Flume Agents, Check Files Landing in HDFS Directory
•Start the Flume agents on source and target (BDA) servers
•Check that incoming file data starts appearing in HDFS
‣Note - files will be continuously written-to as entries are added to source log files
‣Channel size for source, target agents determines max no. of events buffered
‣If the buffer is exceeded, new events are dropped until buffer < channel size
Adding Social Media Datasources to the Hadoop Dataset
•The log activity from the Rittman Mead website tells us what happened, but not “why”
•Common customer requirement now is to get a “360 degree view” of their activity
‣Understand what’s being said about them
‣External drivers for interest, activity
‣Understand more about customer intent, opinions
•One example is to add details of social media mentions, likes, tweets and retweets etc to the transactional dataset
‣Correlate Twitter activity with sales increases, drops
‣Measure impact of social media strategy
‣Gather and include textual, sentiment and contextual data from surveys, media etc
Accessing the Twitter “Firehose”
•Twitter provides an API for developers to use to consume the Twitter “firehose”
•Can specify keywords to limit the tweets consumed
•Free service, but some limitations on actions (number of requests etc)
•Install additional Flume source JAR (pre-built available, but best to compile from source)
‣https://github.com/cloudera/cdh-twitter-example
•Specify Twitter developer API key and keyword filters in the Flume conf settings
Making the Webserver Log Data Available to ODI
•Flume log data from webserver arrives as files in HDFS
•Can either be accessed in that form by ODI, or presented as a Hive table to ODI using SerDe
‣Both are fine, but creating the Hive table in advance makes ODI developer job simpler
Creating a Hive Table over the Log Data, using SerDe
•Hive works by defining a table structure over data in HDFS, typically plain text with delimiter
•But can make use of SerDes (serializer-deserializers) to parse other formats
•Takes semi-structured data (Apache Combined Log Format) and turns into structured (Hive)
‣Can also use IKM File to Hive with the same SerDe definition, to do this within ODI
CREATE external TABLE apachelog_parsed(
host STRING,
identity STRING,
user STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^]*) ([^]*) ([^]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE
LOCATION '/user/flume/rm_website_logs';
Copying SerDe JAR Files to Hadoop Lib Directory
•Make sure any SerDe files for parsing Hive table data are copied to Hadoop lib directory
•Do this for all Hadoop nodes in the cluster
sudo cp /usr/lib/hive/lib/hive-contrib-0.13.1-cdh5.3.0.jar /usr/lib/hadoop/lib
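•Alternatively, for ad-hoc sessions the JAR can be registered per Hive session instead of being copied into the lib directory (a sketch of the equivalent Hive command):
ADD JAR /usr/lib/hive/lib/hive-contrib-0.13.1-cdh5.3.0.jar;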
Making Twitter Data Available to ODI
•Simplest approach again is to define a Hive table over the Twitter data
•Arrives in files via Flume agent, but in JSON format
•Potentially contains many more fields than we are interested in
•Can address in ODI data load, but simpler to parse and select elements of interest beforehand
Two-Stage Hive Table Creation using JSON SerDe
•Initial table uses JSON SerDe to parse all Twitter JSON documents in HDFS directory
•Clone + build from https://github.com/cloudera/cdh-twitter-example/tree/master/hive-serdes
CREATE EXTERNAL TABLE `tweets`(
`id` bigint COMMENT 'from deserializer',
`created_at` string COMMENT 'from deserializer',
`source` string COMMENT 'from deserializer',
`favorited` boolean COMMENT 'from deserializer',
`retweeted_status` struct<text:string,user:struct<screen_name:string,name:string>,retweet_count:int> COMMENT 'from deserializer',
`entities` struct<urls:array<struct<expanded_url:string>>,user_mentions:array<struct<screen_name:string,name:string>>,hashtags:array<struct<text:string>>> COMMENT 'from deserializer',
`text` string COMMENT 'from deserializer',
`user` struct<screen_name:string,name:string,friends_count:int,followers_count:int,statuses_count:int,verified:boolean,utc_offset:int,time_zone:string> COMMENT 'from deserializer',
`in_reply_to_screen_name` string COMMENT 'from deserializer')
ROW FORMAT SERDE
'com.cloudera.hive.serde.JSONSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://bigdatalite.rittmandev.com:8020/user/oracle/data/tweets';
Two-Stage Hive Table Creation using JSON SerDe
•Second table extracts the individual fields from STRUCT datatypes in first table
‣Could be done through a view, but Big Data Discovery doesn’t support them yet
CREATE TABLE `tweets_expanded` AS select
`tweets`.`id`,
`tweets`.`created_at`,
`tweets`.`user`.screen_name as `user_screen_name`,
`tweets`.`user`.friends_count as `user_friends_count`,
`tweets`.`user`.followers_count as `user_followers_count`,
`tweets`.`user`.statuses_count as `user_tweets_count`,
`tweets`.`text`,
`tweets`.`in_reply_to_screen_name`,
`tweets`.`favorited`,
`tweets`.`retweeted_status`.user.screen_name as `retweet_user_screen_name`,
`tweets`.`retweeted_status`.retweet_count as `retweet_count`,
`tweets`.`entities`.urls[0].expanded_url as `url1`,
`tweets`.`entities`.urls[1].expanded_url as `url2`,
`tweets`.`entities`.hashtags[0].text as `hashtag1`,
`tweets`.`entities`.hashtags[1].text as `hashtag2`,
`tweets`.`entities`.hashtags[2].text as `hashtag3`,
`tweets`.`entities`.hashtags[3].text as `hashtag4`,
`tweets`.`entities`.user_mentions[0].screen_name as `user_mentions_screen_name1`,
`tweets`.`entities`.user_mentions[1].screen_name as `user_mentions_screen_name2`,
`tweets`.`entities`.user_mentions[2].screen_name as `user_mentions_screen_name3`,
`tweets`.`entities`.user_mentions[3].screen_name as `user_mentions_screen_name4`,
`tweets`.`entities`.user_mentions[4].screen_name as `user_mentions_screen_name5`
from `tweets`;
Configuring the ODI12c 12.1.3.0.1 Hadoop Datasource
•New feature in ODI12.1.3.0.1 with Big Data Extensions
•Defines the physical server and Java library locations for other tools (Pig etc) to use
‣Namenode location
‣Working area in HDFS for ODI
‣Location on HDFS to store basic details of ODI installation / repo
Configuring the ODI12c 12.1.3.0.1 Hive Datasource
•Used for reverse-engineering Hive table structures from Hadoop
•Uses JDBC connection, new WLS-derived driver
•Need to also either install Hadoop/Hive client on ODI Studio workstation, or install ODI Agent on target Hadoop cluster to actually execute mappings
‣New option to use Oozie removes need for ODI Agent though
Import Hive Table Metadata into ODI Repository
•Connections to Hive, Hadoop (and Pig) set up earlier
•Define physical and logical schemas, reverse-engineer the table definitions into repository
‣Can be temperamental with tables using non-standard SerDes; make sure JARs registered
Data Flow through the Hadoop + Exadata Data Reservoir
[Diagram: the same ingest / process / publish flow as earlier, with GoldenGate (GG) added alongside Flume as an ingestion route into the CDH5.3 BDA Hadoop cluster; Big Data SQL carries filtered & projected rows / columns and dimension attributes to Exadata, with OBIEE and TimesTen / 12c In-Memory on Exalytics for publishing]
Major ETL Steps
1. Join initial log data extract to additional reference data (already in Hive)
2. Supplement with additional Oracle RDBMS data (brought in via Sqoop)
3. Filter log data to leave just requests for blog pages
4. Take the Twitter data, and filter to just tweets referencing RM web pages
5. Join Twitter activity to page hits, to create aggregate for the two
6. Geocode page hits to determine country + city of visitor
7. Sessionize the log data for use with an R classification routine
ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•IKM Hive Append can be used to perform Hive table joins, filtering, agg. etc.
•INSERT only, no DELETE, UPDATE etc
•Join to other Hive tables, or combine with Sqoop KMs etc to bring in Oracle data
•Supports most ODI operators
‣Filter
‣Aggregate
‣Join (ANSI-style)
‣etc
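•For a flavour of the HiveQL such a mapping produces (a sketch only - table and column names are invented, not the actual KM output):
INSERT INTO TABLE access_per_post
SELECT p.post_id, p.author, COUNT(*) AS hits
FROM apachelog_parsed l
JOIN posts_ref p ON l.request = p.url_generated
WHERE l.status = '200'
GROUP BY p.post_id, p.author;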
ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•ODI 12.1.3.0.1 replaces the previous template-style KMs (IKM Hive-to-Hive Control Append)
with new component-style KMs
‣Makes it possible to mix-and-match sources
‣Enables logical mapping to generate Hive, Pig and Spark code
ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•Executing mapping generates HiveQL code, executed through an ODI Agent (or Oozie)
•Code runs on Hadoop cluster, compiling down to Java MapReduce code
ETL Step 2 : Supplement with Oracle Reference Data
•In this step, the log data will be supplemented with additional reference data in Oracle
•Uses Sqoop (LKM SQL to Hive Sqoop) to extract Oracle data into Hive staging table
•Join temporary Hive table to the main log Hive table
‣Logical mapping just references the Oracle source table, no need for the mapping designer to consider Sqoop
ETL Step 2 : Supplement with Oracle Reference Data
•Mapping physical details specify Sqoop KM for extract (LKM SQL to Hive Sqoop)
•IKM Hive Append used for join and load into Hive target
ETL Step 2 : Supplement with Oracle Reference Data
•Mapping execution then runs in three stages:
‣Create temporary Hive table for staging data
‣Generate and run Sqoop job to export reference data out of Oracle RDBMS
‣Join incoming reference Hive table to log data Hive table
Alternative to Batch Replication using Sqoop : GoldenGate
•Oracle GoldenGate 12c for Big Data can replicate database transactions into Hadoop
•Load directly into Hive / HDFS, or feed transactions into Apache Flume as flume events
•Provides a way to replicate Oracle + other RDBMS data into the data reservoir
‣Works with Flume to provide a single streaming route into the data reservoir
Enabling Oracle Database 12c for GoldenGate Replication
•Oracle GoldenGate 11gR2 for Oracle Database introduced Integrated Capture Mode
‣Integrated with database, just enable with alter system set enable_goldengate_replication=true
‣Required for Oracle Database 12c container databases (as found on Big Data Lite 4.1 VM)
Oracle RDBMS to Hive via Flume Configuration Steps
1. Configure the source database for ARCHIVELOG mode, integrated capture and
supplementary logging
2. Create data source definition file to specify the database schema / tables to replicate
3. Set up the database capture (extract) process to write transactions to the trail file
4. Configure the GoldenGate Flume adapter to send transactions written to the trail file to a
Flume Adapter, via Avro RPC messages
5. Set up and configure a Flume Adapter to receive those messages, and write them in Hive
data storage format to HDFS for the target Hive table
Program     Status     Group     Lag at Chkpt    Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING    FLUME     00:00:00        00:00:02
EXTRACT     RUNNING    ORAEXT    00:00:10        00:00:02

sqlplus gg_test/welcome1@orcl
begin
  P_GENERATE_LOGS(100);
end;
/

select CONCAT('Rows loaded from gg_test.logs into HDFS via Flume: ', count(*)) from gg_test.logs;
…
Rows loaded from gg_test.logs into HDFS via Flume: 100
ETL Step 3 : Filter Log Data to Retain Just Blog Page Views
•Same approach as with first mapping, Hive source to Hive target
•Uses Filter operator to add WHERE clause to HiveQL SELECT statement
ETL Step 4 : Filter Tweets to Just Leave RM Blog References
•Same process as previous step; extract from Hive source, filter, load into Hive target
•Filter on two URL columns as tweet can contain multiple URL references
‣Two picked as arbitrary limit to URL extraction
Mapping Variant : Generate as Pig Latin vs. HiveQL
•ODI 12.1.3.0.1 comes with the ability to generate Pig Latin as well as HiveQL
•Alternative to Hive, defines data manipulation as dataflow steps (like an execution plan)
•Start with one or more data sources, add steps to apply filters, group, project columns
•Generates MapReduce to execute data flow, similar to Hive; extensible through UDFs
a = load '/user/oracle/pig_demo/marriott_wifi.txt';
b = foreach a generate flatten(TOKENIZE((chararray)$0)) as word;
c = group b by word;
d = foreach c generate COUNT(b), group;
store d into '/user/oracle/pig_demo/pig_wordcount';
[oracle@bigdatalite ~]$ hadoop fs -ls /user/oracle/pig_demo/pig_wordcount
Found 2 items
-rw-r--r-- 1 oracle oracle 0 2014-10-11 11:48 /user/oracle/pig_demo/pig_wordcount/_SUCCESS
-rw-r--r-- 1 oracle oracle 1965 2014-10-11 11:48 /user/oracle/pig_demo/pig_wordcount/part-r-00000
[oracle@bigdatalite ~]$ hadoop fs -cat /user/oracle/pig_demo/pig_wordcount/part-r-00000
2 .
1 I
6 a
...
Configuring the ODI12c 12.1.3.0.1 Pig Datasource
•A way of linking a Pig execution environment to a previously-defined Hadoop DS
•Also gives ability to define additional JARs to use with Pig - DataFu, Piggybank etc
•Can be defined as either Local (running Pig code on workstation) or MapReduce
Configuring a Mapping for Pig Latin Code Generation
•On the logical mapping, set the Staging Location Hint to the Pig logical schema
•For the mapping operators, set the Execute on Hint to Staging
‣The Staging Location Hint is set as a property for the whole mapping
Creating a Physical Mapping Configured for Pig Latin
•Create additional deployment specification for Pig physical mapping
•Mapping operators will use Pig component KMs
•Set KM for target table or file to <Default> (from original IKM Hive Append)
Executing a Pig Latin Mapping
•Can either run in Local, or MapReduce mode
‣Local usually faster for unit testing, MapReduce runs on full Hadoop cluster
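•From the command line the same distinction looks like this (script name invented):
pig -x local sessionize.pig        # unit-test against local files
pig -x mapreduce sessionize.pig    # run as MapReduce on the full cluster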
ETL Step 5 : Join Tweets to Log Entries, Aggregate
•Simple join between two Hive tables, after aggregating their contents
‣Previous transformations in earlier mappings standardised the URL format
•Add page view and tweet totals to list of blog pages accessed
ETL Step 6 : Geocode Log Entries using IP Address
•Another requirement we have is to “geocode” the webserver log entries
•Based on the fact that IP ranges can usually be attributed to specific countries
•Not functionality normally found in Hive etc, but can be done with add-on APIs
•Approach used by Google Analytics etc to show where visitors are located
How GeoIP Geocoding Works
•Uses free Geocoding API and database from Maxmind
•Convert IP address to an integer
•Find which integer range our IP address sits within
•But Hive can’t use BETWEEN in a join…
•Solution : Expose PAGEVIEWS Hive table using Big Data SQL, then join to lookup table in Oracle database
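•A sketch of the resulting Oracle-side join (table and column names are illustrative; a dotted-quad IP converts to an integer as o1*16777216 + o2*65536 + o3*256 + o4):
SELECT pv.request, g.country_name
FROM   pageviews_ext pv          -- Hive data exposed via ORACLE_HIVE external table
JOIN   geoip_country g
ON     TO_NUMBER(REGEXP_SUBSTR(pv.host,'[^.]+',1,1)) * 16777216
     + TO_NUMBER(REGEXP_SUBSTR(pv.host,'[^.]+',1,2)) * 65536
     + TO_NUMBER(REGEXP_SUBSTR(pv.host,'[^.]+',1,3)) * 256
     + TO_NUMBER(REGEXP_SUBSTR(pv.host,'[^.]+',1,4))
       BETWEEN g.start_ip_num AND g.end_ip_num;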
Oracle Big Data SQL and Data Integration
•Gives us the ability to easily bring in Hadoop (Hive) data into Oracle-based mappings
•Allows us to create Hive-based mappings that use Oracle SQL for transforms, joins
•Faster access to Hive data for real-time ETL scenarios
•Through Hive, bring NoSQL and semi-structured data access to Oracle ETL projects
•For our scenario - join weblog + customer data in Oracle RDBMS, no need to stage in Hive
Using Big Data SQL in an ODI12c Mapping
•By default, Hive table has to be exposed as an ORACLE_HIVE external table in Oracle first
•Then register that Oracle external table in ODI repository + model
1. External table creation in Oracle
2. Register in ODI Model
3. Logical mapping using just Oracle tables
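•A minimal sketch of step 1 (column list and names are illustrative; check the Big Data SQL docs for the exact access parameters):
CREATE TABLE pageviews_ext (
  host    VARCHAR2(100),
  request VARCHAR2(4000),
  status  VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (com.oracle.bigdata.tablename: default.pageviews)
)
REJECT LIMIT UNLIMITED;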
New KM : LKM Hive to Oracle (Big Data SQL)
•New KM works in a similar way to the Sqoop KM : creates a temporary ORACLE_HIVE table to expose Hive data in the Oracle environment
‣Allows Hive+Oracle joins by auto-creating the ORACLE_HIVE external table definition to enable Big Data SQL Hive table access
ODI12c Mapping Creates Temp Exttab, Joins to Oracle
•Hive table AP uses LKM Hive to Oracle (Big Data SQL); the Big Data SQL Hive external table is created as a temporary object
•Target table loaded using IKM Oracle Insert
•Main integration SQL routine uses a regular Oracle SQL join (including use of advanced SQL functions, e.g. REGEXP_SUBSTR)
ETL Step 7 : Sessionize Log Data, for R Classification Model
•Discovery Lab part of the masterclass created a
classification model using R
•Used as input a sessionized version of the log
activity, grouping page views within 60s
•Sessionization routine was written as Pig script,
using DataFu and Piggybank UDFs
‣DataFu is a library of Pig functions initially
developed by LinkedIn, now an Apache project
‣Piggybank is a community-created library of Pig
UDFs and store/load routines
•So why was Pig used for this sessionization task?
Apache Pig Characteristics vs. Hive
•Ability to load data into a defined schema, or use schema-less (access fields by position)
•Fields can contain nested fields (tuples)
•Grouping records on a key doesn’t aggregate them, it creates a nested set of rows in a column
•Uses “lazy execution” - only evaluates the data flow once final output has been requested
•Makes Pig an excellent language for interactive data exploration
Pig Data Processing Example : Count Page Request Totals
raw_logs =LOAD '/user/oracle/rm_logs/' USING TextLoader AS (line:chararray);
logs_base = FOREACH raw_logs
GENERATE FLATTEN
(
REGEX_EXTRACT_ALL
(
line,
'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"'
)
)
AS
(
remoteAddr: chararray, remoteLogname: chararray, user: chararray,
time: chararray, request: chararray, status: chararray, bytes_string: chararray,
referrer: chararray, browser: chararray
);
page_requests = FOREACH logs_base
GENERATE SUBSTRING(time,3,6) as month,
FLATTEN(STRSPLIT(request,' ',5)) AS (method:chararray, request_page:chararray, protocol:chararray);
page_requests_short = FOREACH page_requests
GENERATE $0,$2;
page_requests_short_filtered = FILTER page_requests_short BY (request_page is not null AND SUBSTRING(request_page,0,3) == '/20');
page_request_group = GROUP page_requests_short_filtered BY request_page;
page_request_group_count = FOREACH page_request_group GENERATE $0, COUNT(page_requests_short_filtered) as total_hits;
page_request_group_count_sorted = ORDER page_request_group_count BY $1 DESC;
page_request_group_count_limited = LIMIT page_request_group_count_sorted 10;
Pig Data Processing Example : Join to Post Titles, Authors
•Pig allows aliases (datasets) to be joined to each other
•Example below adds details of post names, authors; outputs top pages dataset to file
raw_posts = LOAD '/user/oracle/pig_demo/posts_for_pig.csv' USING TextLoader AS (line:chararray);
posts_line = FOREACH raw_posts
GENERATE FLATTEN
(
STRSPLIT(line,';',10)
)
AS
(
post_id: chararray, title: chararray, post_date: chararray,
type: chararray, author: chararray, post_name: chararray,
url_generated: chararray
);
posts_and_authors = FOREACH posts_line
GENERATE title,author,post_name,CONCAT(REPLACE(url_generated,'"',''),'/') AS (url_generated:chararray);
pages_and_authors_join = JOIN posts_and_authors BY url_generated, page_request_group_count_limited BY group;
pages_and_authors = FOREACH pages_and_authors_join GENERATE url_generated, post_name, author, total_hits;
top_pages_and_authors = ORDER pages_and_authors BY total_hits DESC;
STORE top_pages_and_authors into '/user/oracle/pig_demo/top-pages-and-authors.csv' USING PigStorage(',');
Pig Extensibility through UDFs and Streaming
•Similar to Apache Hive, Pig can be programatically extended through UDFs
•Example below uses Function defined in Python script to geocode IP addresses
#!/usr/bin/python
import sys
sys.path.append('/usr/lib/python2.6/site-packages/')
import pygeoip
@outputSchema("country:chararray")
def getCountry(ip):
gi = pygeoip.GeoIP('/home/nelio/GeoIP.dat')
country = gi.country_name_by_addr(ip)
return country
register 'python_geoip.py' using jython as pythonGeoIP;
raw_logs = LOAD '/user/root/logs/' USING TextLoader AS (line:chararray);
logs_base = FOREACH raw_logs
GENERATE FLATTEN
(
REGEX_EXTRACT_ALL
(
line,
'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"'
)
)
AS (
remoteAddr: chararray, remoteLogname: chararray, user: chararray,
time: chararray, request: chararray, status: int, bytes_string: chararray,
referrer: chararray, browser: chararray
);
ipaddress = FOREACH logs_base GENERATE remoteAddr;
clean_ip = FILTER ipaddress BY (remoteAddr matches '^.*?((?:\\d{1,3}\\.){3}\\d{1,3}).*?$');
country_by_ip = FOREACH clean_ip GENERATE pythonGeoIP.getCountry(remoteAddr);
Pig Sessionization Script used in Discovery Lab
register /opt/cloudera/parcels/CDH/lib/pig/datafu.jar;
register /opt/cloudera/parcels/CDH/lib/pig/piggybank.jar;
DEFINE Sessionize datafu.pig.sessions.Sessionize('60m');
DEFINE Median datafu.pig.stats.StreamingMedian();
DEFINE Quantile datafu.pig.stats.StreamingQuantile('0.9','0.95');
DEFINE VAR datafu.pig.VAR();
DEFINE CustomFormatToISO org.apache.pig.piggybank.evaluation.datetime.convert.CustomFormatToISO();
DEFINE ISOToUnix org.apache.pig.piggybank.evaluation.datetime.convert.ISOToUnix();
--------------------------------------------------------------------------------
-- Import and clean logs
raw_logs = LOAD '/user/flume/rm_logs/apache_access_combined' USING TextLoader AS (line:chararray);
-- Extract individual fields
logs_base = FOREACH raw_logs
GENERATE FLATTEN
(REGEX_EXTRACT_ALL(line,'^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')) AS
(remoteAddr: chararray, remoteLogName: chararray, user: chararray, time: chararray, request: chararray, status: chararray, bytes_string: chararray, referrer: chararray, browser: chararray);
-- Remove bots and convert timestamp
logs_base_nobots = FILTER logs_base BY NOT (browser matches '.*(spider|robot|bot|slurp|Bot|monitis|Baiduspider|AhrefsBot|EasouSpider|HTTrack|Uptime|FeedFetcher|dummy).*');
-- Remove useless columns and convert timestamp
clean_logs = FOREACH logs_base_nobots GENERATE CustomFormatToISO(time,'dd/MMM/yyyy:HH:mm:ss Z') as time, remoteAddr, request, status, bytes_string, referrer, browser;
--------------------------------------------------------------------------------
-- Sessionize the data
clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
    ordered = ORDER clean_logs BY time;
    GENERATE FLATTEN(Sessionize(ordered))
    AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};
-- The following step generates a tsv file in your home directory to download and work with in R
store clean_logs_sessionized into '/user/jmeyer/clean_logs' using PigStorage('\\t','-schema');
Converting the Pig Script to an ODI Mapping
•Not an obvious translation - Pig data flows don’t map 1:1 with Hive set-based transformations
‣Pig aliases use lazy execution: intermediate results aren’t materialised as Hive tables
‣Some concepts - GENERATE FLATTEN etc - don’t translate to SQL expressions
‣DataFu and Piggybank UDFs don’t have equivalent Hive versions
clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
ordered = ORDER clean_logs BY time;
GENERATE FLATTEN(Sessionize(ordered))
AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};
vs.
select sum(f.flights)
from flight_performance f
join origin o on (f.origin = o.origin)
where o.origin = 'SFO';
ODI 12.1.3.0.1 Logical Mapping for Log Sessionization
•Expression operator used instead of Hive table target; generated as an ALIAS when deployed as a Pig Latin mapping
•Table Function operator used to generate another ALIAS by running input attributes through an arbitrary Pig Latin script
•Only data materialised is in the Hive table, at the end of the dataflow
Expression Mapping Operator Used to Create Next Alias
•Using Expression rather than datastore operator creates transformation “in-line”
•With Pig execution, generates expression as ALIAS
•Allows use of expressions (e.g. CustomFormatToISO Piggybank UDF)
•Filters etc included in ALIAS definition
Table Function Operator used for Executing Pig Commands
•Table function operator processes input attributes through arbitrary script
•In pig mappings, allows use of more complex Pig transformations
‣GENERATE FLATTEN, use of DataFu Sessionize UDF
•Final ALIAS defined within Pig Latin script has to match name of Table Function operator
Pig Latin Generated Script for Sessionization Task
•Creates single dataflow using series of ALIASes
•Includes Pig Latin commands added through Table Function
•Matches logic and approach of original hand-coded Pig script, but now managed within ODI
Create ODI Package for Processing Steps, and Execute
•Create ODI Package or Load Plan to run steps in sequence
‣With load plan, can also add exceptions and recoverability
•Execute package to load data into final Hive tables
Summary : Data Processing Phase
•We’ve now processed the incoming data, filtering it and transforming it to the required state
•Joined (“mashed-up”) datasets from website activity, and social media mentions
•Ingestion and the load/processing stages are now complete
•Now we want to make the Hadoop output available to a wider, non-technical audience…
Part 3
Reporting and Dashboards across the Data Reservoir using Oracle Big Data SQL + OBIEE
Options for Sharing Data Reservoir Data with Users
•Several options for reporting on the content in the data reservoir and DW
‣Using a reporting & dashboarding tool compatible with Hive + DW, e.g. OBIEE11g
‣Using a search/data discovery tool, for example Big Data Discovery
‣Export Hadoop/Hive data into Oracle and report from there
(Diagram: the Oracle IM + Big Data reference architecture - Event Engine, Data Reservoir, Data Factory, Enterprise Information Store, Reporting and Discovery Lab)
Alternative to Reporting Against Hadoop : Export to Data Mart
•In most cases, for general reporting access, exporting into RDBMS makes sense
•Export Hive data from Hadoop into Oracle Data Mart or Data Warehouse
•Use Oracle RDBMS for high-value data analysis, full access to RDBMS optimisations
•Potentially use Exalytics for in-memory RDBMS access
(Diagram: Loading Stage → Processing Stage → Store/Export Stage; real-time logs/events, RDBMS imports and file/unstructured imports flow in, with RDBMS and file exports flowing out)
Using the Right Server for the Right Job
•Hadoop for large scale, high-speed data ingestion and processing
•Oracle RDBMS and Exadata for long-term storage of high-value data
•Oracle Exalytics for speed-of-thought analytics in TimesTen and Oracle Essbase
Oracle Business Intelligence and Big Data Sources
•OBIEE 11g from 11.1.1.7 can connect to Hadoop sources
‣OBIEE 11.1.1.7+ supports Hive/Hadoop as a data source, via specific Hive ODBC drivers and the Apache Hive Physical Layer database type
‣But practically, it comes with limitations
‣Current 11.1.1.7 version of OBIEE only ships with HiveServer1 ODBC drivers
‣HiveQL is a limited subset of ISO/Oracle SQL
‣… and Hive access is really slow
Configuring OBIEE for Hive Access
•As of OBIEE 11.1.1.7, access is through Oracle-supplied DataDirect drivers
‣Not compatible with HiveServer2 protocol used by CDH4+
‣As workaround, use Windows version of OBIEE and Cloudera ODBC drivers
‣OBIEE 11.1.1.9 will come with HiveServer2 drivers (hopefully)
•Need to configure on both server, and BI Administration workstation
Setting up the ODBC Connection to Hadoop Environment
•Example uses OBIEE 11.1.1.7 on Windows, to
allow use of Cloudera Hive ODBC drivers
(HiveServer2)
‣Linux OBIEE 11g version only allows use of
Oracle-supplied HiveServer1 drivers
•Install ODBC drivers, create system DSN
•Use username/password authentication, or
Kerberos if required
Importing Hive Metadata
1. Use BI Administration tool, File > Import Metadata
2. Select DSN previously created for Hive datasource
3. Import table metadata from correct Hive database
4. Set Database Type to Apache Hadoop
Testing Hive Connection & Data Retrieval
•Confirm that Hive table data can be returned by the BI Administration tool
‣Basic check before carrying on; should also check with the RPD online too (for BI Server)
Building an Initial Business Model from Hive Tables
•Main fact table is based on page requests
(ACCESS_PER_POST)
•Pages dimension table (POSTS)
•Simple counts of pages viewed per author,
post category etc
Federated Hive and Oracle Data via BI Server
•Oracle Database has a table containing HTTP status codes
•Import into RPD to include in business model
Join Hive Fact (Log) Data to Oracle Reference Data
•BI Server issues two separate queries; one to Hive, one to Oracle
•Returned datasets then joined (stitch-join) by BI Server and returned as single resultset
How Can This Be Improved On?
•Gives the ability to supplement Hadoop data with reference data
from Oracle, Excel etc
•But response time is still quite slow
•What about faster versions of Hive - Cloudera Impala for example?
•Cloudera’s answer to Hive query response time issues
•MPP SQL query engine running on Hadoop, bypasses
MapReduce for direct data access
•Mostly in-memory, but spills to disk if required
•Uses Hive metastore to access Hive table metadata
•Similar SQL dialect to Hive - not as rich though and no support
for Hive SerDes, storage handlers etc
How Impala Works
•A replacement for Hive, but uses Hive concepts and data dictionary (metastore)
•MPP (Massively Parallel Processing) query engine that runs within Hadoop
‣Uses same file formats, security, resource management as Hadoop
•Processes queries in-memory
•Accesses standard HDFS file data
•Option to use Apache AVRO, RCFile, LZO or Parquet (column-store)
•Designed for interactive, real-time SQL-like access to Hadoop
(Diagram: BI Server and Presentation Server connect through the Cloudera Impala ODBC driver to Impala daemons running alongside HDFS on each node of a multi-node Hadoop cluster)
Enabling Hive Tables for Impala
•Log into Impala Shell, run INVALIDATE METADATA command to refresh Impala table list
•Run SHOW TABLES Impala SQL command to view tables available
•Run COUNT(*) on main ACCESS_PER_POST table to see typical response time
[oracle@bigdatalite ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
[bigdatalite.localdomain:21000] > invalidate metadata;
Query: invalidate metadata
Fetched 0 row(s) in 2.18s
[bigdatalite.localdomain:21000] > show tables;
Query: show tables
+-----------------------------------+
| name |
+-----------------------------------+
| access_per_post |
| access_per_post_cat_author |
| … |
| posts |
+-----------------------------------+
Fetched 45 row(s) in 0.15s
[bigdatalite.localdomain:21000] > select count(*) from access_per_post;
Query: select count(*) from access_per_post
+----------+
| count(*) |
+----------+
| 343 |
+----------+
Fetched 1 row(s) in 2.76s
Setting up an ODBC Connection to Impala
•Download ODBC drivers for Impala from Cloudera
Website
‣Windows, Linux, Mac, AIX
•Create system DSN as normal, use port 21050
•Configure authentication
‣For unsecured cluster, use “No Authentication”
‣For secured, use Kerberos, etc
•Test datasource to check successful connectivity
•Complete on both Windows workstation, and
server hosting BI Server component
Recreate Business Model, Re-run Basic Report
•Significant improvement over Hive response time
•Now makes Hadoop suitable for ad-hoc querying
Simple Two-Table Join against Hive Data Only:
Logical Query Summary Stats: Elapsed time 50, Response time 49, Compilation time 0 (seconds)
vs
Simple Two-Table Join against Impala Data Only:
Logical Query Summary Stats: Elapsed time 2, Response time 1, Compilation time 0 (seconds)
Re-Create Oracle Query Federation, and Retest
•Add Oracle HTTP Status table to business model sourced from Impala data
•Join HTTP Status table to Impala fact table in Physical layer
•Recreate query to compare response time to Hive + Oracle version
Federated Query joining Hive + Oracle Data:
Logical Query Summary Stats: Elapsed time 102, Response time 102, Compilation time 0 (seconds)
vs
Federated Query joining Impala + Oracle Data:
Logical Query Summary Stats: Elapsed time 1, Response time 1, Compilation time 0 (seconds)
Any Way We Can Improve This Further?
•If available, use Oracle Big Data SQL to query Hive data only, or federated Hive + Oracle
•Access Hive data through Big Data SQL SmartScan feature, for Exadata-type response time
•Use standard Oracle SQL across both Hive and Oracle data
•Also extends to data in Oracle NoSQL database
Oracle Big Data SQL
•Part of Oracle Big Data Appliance 4.0 (BDA-only)
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends Oracle Data Dictionary to cover Hive
•Extends Oracle SQL and SmartScan to Hadoop
•Extends Oracle Security Model over Hadoop
‣Fine-grained access control
‣Data redaction, data masking
‣Uses fast c-based readers where possible (vs. Hive MapReduce generation)
‣Map Hadoop parallelism to Oracle PQ
‣Big Data SQL engine works on top of YARN
‣Like Spark, Tez, MR2
(Diagram: SQL queries from the Exadata Database Server are offloaded via SmartScan to both the Exadata Storage Servers and the Hadoop cluster running Oracle Big Data SQL)
View Hive Table Metadata in the Oracle Data Dictionary
•Oracle Database 12c 12.1.0.2.0 with Big Data SQL option can view Hive table metadata
‣Linked by Exadata configuration steps to one or more BDA clusters
•DBA_HIVE_TABLES and USER_HIVE_TABLES expose Hive metadata
•Oracle SQL*Developer 4.0.3, with Cloudera Hive drivers, can connect to Hive metastore
SQL> col database_name for a30
SQL> col table_name for a30
SQL> select database_name, table_name
2 from dba_hive_tables;
DATABASE_NAME TABLE_NAME
------------------------------ ------------------------------
default access_per_post
default access_per_post_categories
default access_per_post_full
default apachelog
default categories
default countries
default cust
default hive_raw_apache_access_log
Hive Access through Oracle External Tables + Hive Driver
•Big Data SQL accesses Hive tables through external table mechanism
‣ORACLE_HIVE external table type imports Hive metastore metadata
‣ORACLE_HDFS requires metadata to be specified
•Access parameters cluster and tablename specify Hive table source and BDA cluster
CREATE TABLE access_per_post_categories(
hostname varchar2(100),
request_date varchar2(100),
post_id varchar2(10),
title varchar2(200),
author varchar2(100),
category varchar2(100),
ip_integer number)
organization external
(type oracle_hive
default directory default_dir
access parameters(com.oracle.bigdata.tablename=default.access_per_post_categories));
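Once defined, the external table can be queried like any other Oracle table - an illustrative sketch (column names come from the DDL above; the aggregate and ordering are assumptions, not the masterclass output):
-- Sketch: Big Data SQL reads the Hive-resident data behind this external table
SELECT category, COUNT(*) AS requests
FROM   access_per_post_categories
GROUP  BY category
ORDER  BY requests DESC;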
Big Data SQL Server Dataflow
•Read data from HDFS Data Node
‣Direct-path reads
‣C-based readers when possible
‣Use native Hadoop classes otherwise
•Translate bytes to Oracle
•Apply SmartScan to Oracle bytes
‣Apply filters
‣Project columns
‣Parse JSON/XML
‣Score models
(Diagram: 1. RecordReader/SerDe read data from the Data Node disks; 2. External Table Services translate the bytes to Oracle format; 3. Smart Scan applies filtering and projection within the Big Data SQL Server)
Use Rich Oracle SQL Dialect over Hadoop (Hive) Data
•Ranking Functions
‣rank, dense_rank, cume_dist,
percent_rank, ntile
•Window Aggregate Functions
‣Avg, sum, min, max, count, variance,
first_value, last_value
•LAG/LEAD Functions
•Reporting Aggregate Functions
‣Sum, Avg, ratio_to_report
•Statistical Aggregates
‣Correlation, linear regression family,
covariance
•Linear Regression
‣Fitting of ordinary-least-squares
regression line to set of number pairs
•Descriptive Statistics
•Correlations
‣Pearson’s correlation coefficients
•Crosstabs
‣Chi squared, phi coefficient
•Hypothesis Testing
‣Student t-test, Binomial test
•Distribution
‣Anderson-Darling test - etc.
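As a hedged illustration of what this enables, an Oracle analytic function can run unchanged over Hive-resident data through the external table defined earlier - a sketch under those assumptions, not output from the masterclass environment:
-- Sketch: rank post authors by request volume with a window function,
-- something the HiveQL of this era could not express directly
SELECT author,
       COUNT(*) AS requests,
       RANK() OVER (ORDER BY COUNT(*) DESC) AS author_rank
FROM   access_per_post_categories
GROUP  BY author;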
Leverages Hive Metastore for Hadoop Java Access Classes
•As with other next-gen SQL access layers, uses common Hive metastore table metadata
•Provides route to underlying Hadoop data for Oracle Big Data SQL c-based SmartScan
Extending SmartScan, and Oracle SQL, Across All Data
•Brings query-offloading features of Exadata to Oracle Big Data Appliance
•Query across both Oracle and Hadoop sources
•Intelligent query optimisation applies SmartScan close to ALL data
•Use same SQL dialect across both sources
•Apply same security rules, policies, user access rights across both sources
Example Usage : Use Big Data SQL for Geocoding Exercise
•Earlier on we used ODI and Big Data SQL to join incoming log data to a geocoding table
•Big Data SQL used as it enabled Hive data to use a BETWEEN join
•We will now reproduce this using the OBIEE environment
•Benefit is doing this on the fly, outside of ETL
(Screenshot: Hive weblog activity table joined to Oracle geocoding lookup tables, with the combined output in report form)
Create ORACLE_HIVE External Table over Hive Table
•Use the ORACLE_HIVE access driver type to create Oracle external table over Hive table
•ACCESS_PER_POST_EXTTAB and POSTS_EXTTAB now appear in Oracle data dictionary
Import Oracle Tables, Create RPD joining Tables Together
•No need to use Hive ODBC drivers - Oracle OCI connection instead
•No issue around HiveServer1 vs HiveServer2
•Big Data SQL handles authentication with Hadoop cluster in background, Kerberos etc
•Transparent to OBIEE - all appear as Oracle tables
•Join across schemas if required
Create Physical Data Model from Imported Table Metadata
•Join ORACLE_HIVE external tables to reference table from Oracle DB
Recreate Business Model, All Sourced From Oracle
•Map incoming physical tables into a star schema
•Add aggregation method for fact measures
•Add logical keys for logical dimension tables
•Remove columns from fact table that aren’t measures
Create Report against Oracle + Big Data SQL Tables
•BI Server thinks that all data is sourced from Oracle
•Uses full Oracle SQL features; guarantees all Oracle-sourced reports will still work if DW data is offloaded to Hadoop (Hive)
•Fast access through SmartScan feature
WITH
SAWITH0 AS (select count(T45134.TIME) as c1,
T45146.POST_AUTHOR as c2,
T44832.DSC as c3
from
BDA_OUTPUT.POSTS_EXTTAB T45146,
BLOG_REFDATA.HTTP_STATUS_CODES T44832,
BDA_OUTPUT.ACCESS_PER_POST_EXTTAB T45134
where ( T44832.STATUS = T45134.STATUS and T45134.POST_ID = T45146.POST_ID )
group by T44832.DSC, T45146.POST_AUTHOR)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select distinct 0 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c1 as c4
from
SAWITH0 D1
order by c3, c2 ) D1 where rownum <= 65001
Uses Concept of Query Franchising vs Query Federation
•Oracle Database handles all queries for client tool, then offloads to Hive if needed
•Contrast with query federation - BI Server has to issue separate SQL queries for each source, then stitch-join the results
‣And be aware of different SQL dialects, DB features etc
•Only the columns (projection) and rows (filtering) required to answer the query are sent back to Exadata
•Storage Indexes used on both Exadata Storage Servers and BDA nodes to skip block reads for irrelevant data
•HDFS caching used to speed-up access to commonly-used HDFS data
Create Initial Analyses Against Combined Dataset
•Create analyses using full SQL features
•Access to Oracle RDBMS Advanced Analytics functions through EVALUATE, EVALUATE_AGGR etc
•Big Data SQL SmartScan feature provides fast, ad-hoc access to Hive data, avoiding MapReduce
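For example, a database analytic function can be pushed down to Oracle from an analysis column formula via EVALUATE_AGGR - a sketch, with presentation column names that are assumptions rather than the actual RPD:
-- Sketch: OBIEE logical column formula (column names are hypothetical)
EVALUATE_AGGR('CORR(%1, %2)', "Blog Activity"."Page Views", "Blog Activity"."Bytes Sent")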
Prepare Physical Model for Big Data SQL Join to GEOIP Data
•Create SELECT table view in RPD over ACCESS_PER_POST_EXTTAB table to derive IP address integer from hostname IP address
‣Also add in a conversion of access date field - for later…
•Import GEOIP_COUNTRY reference table into RPD
•Join on the derived IP integer falling within each country’s IP address range
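A minimal sketch of the select-view SQL, assuming the dotted IP address sits in a HOSTNAME column (the arithmetic packs the four octets into a single integer):
-- Sketch only: column names are assumptions, not the actual view definition
SELECT a.*,
       TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 1)) * 16777216 +
       TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 2)) * 65536 +
       TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 3)) * 256 +
       TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 4)) AS ip_integer
FROM   bda_output.access_per_post_exttab a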
Access to Full Set of Oracle Join Types
•No longer restricted to HiveQL equi-joins - Big Data SQL supports all Oracle join operators
•Use to join Hive data (using view over external table) to the IP range country lookup table using the BETWEEN join operator
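A sketch of the resulting physical join - the view and GEOIP_COUNTRY column names here are assumptions, not the actual lookup table definition:
-- Sketch: range join between Hive-sourced log data and the Oracle GeoIP table
SELECT g.country_name, COUNT(*) AS page_views
FROM   access_per_post_ip_v a  -- view with the derived ip_integer column
JOIN   blog_refdata.geoip_country g
ON     a.ip_integer BETWEEN g.begin_ip_int AND g.end_ip_int
GROUP  BY g.country_name;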
Reports Now Include Country Data via IP Geocoding
•Makes use of Oracle SQL’s BETWEEN join operator
•Underlying log + posts data still sourced from Hive, via Big Data SQL Query Franchising
Add In Time Dimension Table
•Enables time-series reporting; pre-req for forecasting (linear regression-type queries)
•Map to Date field in view over ORACLE_HIVE table
‣Convert incoming Hive STRING field to Oracle DATE for better time-series manipulation
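A sketch of the conversion expression in the view, assuming the Hive field holds Apache-style timestamps such as 20/Feb/2015:06:00:00 (column name and format mask are assumptions):
-- Sketch: cast the Hive STRING to an Oracle DATE for time-series queries
SELECT TO_DATE(SUBSTR(request_date, 1, 20), 'DD/MON/YYYY:HH24:MI:SS') AS access_date
FROM   bda_output.access_per_post_exttab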
Now Enables Time-Series Reporting and Country Lookups
Use Exalytics In-Memory Aggregate Cache if Required
•If further query acceleration is required, Exalytics In-Memory Cache can be used
•Enabled through Summary Advisor, caches commonly-used aggregates in in-memory cache
•Options for TimesTen or Oracle Database 12c In-Memory Option
•Returns aggregated data “at the speed of thought”
Part 4
Discovering and Analyzing the Data Reservoir using Oracle Big Data Discovery
Enable Incoming Site Activity Data for Data Discovery
•Another use-case for Hadoop data is “data discovery”
‣Load data into the data reservoir
‣Catalog and understand separate datasets
‣Enrich data using graphical tools
‣Join separate datasets together
‣Present textual data alongside measures and key attributes
‣Explore and analyse using faceted search
(Diagram: combine site activity with site content, semantics and text enrichment, then catalog and explore using Oracle Big Data Discovery - Why is some content more popular? Does sentiment affect viewership? What content is popular, where?)
Oracle Big Data Discovery
•“The Visual Face of Hadoop” - cataloging, analysis and discovery for the data reservoir
•Runs on Cloudera CDH5.3+ (Hortonworks support coming soon)
•Combines Endeca Server + Studio technology with Hadoop-native (Spark) transformations
Data Sources used for Data Discovery Exercise
(Diagram: Cloudera CDH5.3 BDA Hadoop cluster nodes running Spark, Hive and HDFS; BDD Data Processing ingests semi-processed logs (1m rows) and processed Twitter activity; a BDD node runs the DGraph gateway and Studio web UI; transformations are written back to the full datasets, uploaded site page and comment content is persisted to Hive/HDFS, and data discovery is done through the Studio web-based app)
Oracle Big Data Discovery Architecture
•Adds additional nodes into the CDH5.3 cluster, for running DGraph and Studio
•DGraph engine based on Endeca Server technology, can also be clustered
•Hive (HCatalog) used for reading table metadata, mapping back to underlying HDFS files
•Apache Spark then used to upload (ingest) data into DGraph, typically a 1m row sample
•Data then held for online analysis in DGraph
•Option to write-back transformations to underlying Hive/HDFS files using Apache Spark
Ingesting & Sampling Datasets for the DGraph Engine
•Datasets in Hive have to be ingested into DGraph engine before analysis, transformation
•Can either define an automatic Hive table detector process, or manually upload
•Typically ingests 1m row random sample
‣1m row sample provides > 99% confidence that answer is within 2% of value shown, no matter how big the full dataset (1m, 1b, 1q+)
‣Makes interactivity cheap - representative dataset
(Chart: accuracy vs. cost as the amount of data queried grows - the “100% premium” of querying everything)
Ingesting Site Activity and Tweet Data into DGraph
•Two output datasets from ODI process have to be ingested into DGraph engine
•Upload triggered by manual call to BDD Data Processing CLI
‣Runs Oozie job in the background to profile, enrich and then ingest data into DGraph
[oracle@bddnode1 ~]$ cd /home/oracle/Middleware/BDD1.0/dataprocessing/edp_cli
[oracle@bddnode1 edp_cli]$ ./data_processing_CLI -t access_per_post_cat_author
[oracle@bddnode1 edp_cli]$ ./data_processing_CLI -t rm_linked_tweets
(Diagram: pageviews flow from Hive through Apache Spark profiling and enrichment steps into BDD, sampled to 1m rows)
{
  "@class" : "com.oracle.endeca.pdi.client.config.workflow.ProvisionDataSetFromHiveConfig",
  "hiveTableName" : "rm_linked_tweets",
  "hiveDatabaseName" : "default",
  "newCollectionName" : "edp_cli_edp_a5dbdb38-b065…",
  "runEnrichment" : true,
  "maxRecordsForNewDataSet" : 1000000,
  "languageOverride" : "unknown"
}
Ingesting and Sampling Hive Data into Big Data Discovery
[oracle@bigdatalite ~]$ cd /home/oracle/movie/Middleware/BDD1.0/dataprocessing/edp_cli
[oracle@bigdatalite edp_cli]$ ./data_processing_CLI -t access_per_post_cat_author
[oracle@bigdatalite edp_cli]$ ./data_processing_CLI -t rm_linked_tweets
View Ingested Datasets, Create New Project
•Ingested datasets are now visible in Big Data Discovery Studio
•Create new project from first dataset, then add second
Automatic Enrichment of Ingested Datasets
•Ingestion process has automatically geo-coded host IP addresses
•Other automatic enrichments run after initial discovery step, based on datatypes, content
Initial Data Exploration On Uploaded Dataset Attributes
•For the ACCESS_PER_POST_CAT_AUTHORS dataset, 18 attributes now available
•Combination of original attributes, and derived attributes added by enrichment process
Explore Attribute Values, Distribution using Scratchpad
•Click on individual attributes to view more details about them
•Add to scratchpad, automatically selects most relevant data visualisation
Filter (Refine) Visualizations in Scratchpad
•Click on the Filter button to display a refinement list
Display Refined Data Visualization
•Select refinement (filter) values from refinement pane
•Visualization in scratchpad now filtered by that attribute
‣Repeat to filter by multiple attribute values
Save Scratchpad Visualization to Discovery Page
•For visualisations you want to keep, you can add them to Discovery page
•Dashboard / faceted search part of BDD Studio - we’ll see more later
Select Multiple Attributes for Same Visualization
•Select AUTHOR attribute, see initial ordered values, distribution
•Add attribute POST_DATE
‣choose between multiple instances of first attribute split by second
‣or one visualisation with multiple series
Data Transformation & Enrichment
•Data ingest process automatically applies some enrichments - geocoding etc
•Can apply others from Transformation page - simple transformations & Groovy expressions
Standard Transformations - Simple & Using Editor
•Group and bin attribute values; filter on attribute values, etc
•Use Transformation Editor for custom transformations (Groovy, incl. enrichment functions)
Datatype Conversion Example : String to Date / Time
•Datatypes can be converted into other datatypes, with data transformed if required
•Example : convert Apache Combined Format Log date/time to Java date/time
Transformations using Text Enrichment / Parsing
•Uses Salience text engine under the covers
•Extract terms, sentiment, noun groups, positive / negative words etc
Create New Attribute using Derived (Transformed) Values
•Choose option to Create New Attribute, to add derived attribute to dataset
•Preview changes, then save to transformation script
Commit Transforms to DGraph, or Create New Hive Table
•Transformation changes have to be committed to DGraph sample of dataset
‣Project transformations kept separate from other project copies of dataset
•Transformations can also be applied to full dataset, using Apache Spark
‣Creates new Hive table of complete dataset
Upload Additional Datasets
•Users can upload their own datasets into BDD, from MS Excel or CSV file
•Uploaded data is first loaded into Hive table, then sampled/ingested as normal
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...Mark Rittman
 
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyOracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyMark Rittman
 
Deploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudDeploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudMark Rittman
 
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Mark Rittman
 

Más de Mark Rittman (19)

The Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data PlatformsThe Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data Platforms
 
Using Oracle Big Data Discovey as a Data Scientist's Toolkit
Using Oracle Big Data Discovey as a Data Scientist's ToolkitUsing Oracle Big Data Discovey as a Data Scientist's Toolkit
Using Oracle Big Data Discovey as a Data Scientist's Toolkit
 
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
 
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
 
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
 
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
 
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
 
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
 
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle CloudOTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
 
OTN EMEA TOUR 2016 - OBIEE12c New Features for End-Users, Developers and Sys...
OTN EMEA TOUR 2016  - OBIEE12c New Features for End-Users, Developers and Sys...OTN EMEA TOUR 2016  - OBIEE12c New Features for End-Users, Developers and Sys...
OTN EMEA TOUR 2016 - OBIEE12c New Features for End-Users, Developers and Sys...
 
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop : Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
 
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
 
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business AnalyticsOracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
 
Riga dev day 2016 adding a data reservoir and oracle bdd to extend your ora...
Riga dev day 2016   adding a data reservoir and oracle bdd to extend your ora...Riga dev day 2016   adding a data reservoir and oracle bdd to extend your ora...
Riga dev day 2016 adding a data reservoir and oracle bdd to extend your ora...
 
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive AnalyticsBig Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
 
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
 
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyOracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
 
Deploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudDeploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle Cloud
 
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
 

Último

Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...Delhi Call girls
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...shambhavirathore45
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...amitlee9823
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxfirstjob4
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Callshivangimorya083
 
Capstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramCapstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramMoniSankarHazra
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...amitlee9823
 
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...Pooja Nehwal
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Delhi Call girls
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfMarinCaroMartnezBerg
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Delhi Call girls
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Callshivangimorya083
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFxolyaivanovalion
 
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girl
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girlCall Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girl
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girlkumarajju5765
 
Generative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusGenerative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusTimothy Spann
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz1
 
100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptxAnupama Kate
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxolyaivanovalion
 
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...shivangimorya083
 

Último (20)

Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptx
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
Capstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics ProgramCapstone Project on IBM Data Analytics Program
Capstone Project on IBM Data Analytics Program
 
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
Chintamani Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore ...
 
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFx
 
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girl
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girlCall Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girl
Call Girls 🫤 Dwarka ➡️ 9711199171 ➡️ Delhi 🫦 Two shot with one girl
 
Generative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusGenerative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and Milvus
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signals
 
100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx100-Concepts-of-AI by Anupama Kate .pptx
100-Concepts-of-AI by Anupama Kate .pptx
 
ALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptxALSO dropshipping via API with DroFx.pptx
ALSO dropshipping via API with DroFx.pptx
 
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...Vip Model  Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
Vip Model Call Girls (Delhi) Karol Bagh 9711199171✔️Body to body massage wit...
 

Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Architecture

• 1. Rittman Mead BI Forum 2015 Masterclass: Delivering the Data Factory, Data Reservoir and a Scalable Oracle Big Data Architecture
• 2. Part 1 : Designing the Data Reservoir & Data Factory
• 3. The Oracle IM + Big Data Reference Architecture
(diagram; labels: Input Events, Event Engine, Actionable Events, Data Reservoir, Data Factory, Enterprise Information Store, Reporting, Discovery Lab, Actionable Information, Actionable Insights, Execution, Innovation, Discovery, Output Events & Data, Structured Enterprise Data, Other Data)
• 4. (diagram-only slide)
• 5. The Next-Gen BI Environment from this Architecture
•Traditional RDBMS DW now complemented by a Hadoop/NoSQL-based data reservoir
•“Data Factory” is the term used for the ETL and loading processes that provide the conduit between them
•Some data may be loaded into the data reservoir and only exist there
•Some will be further processed and loaded into the DW (“Enterprise Information Store”)
•Some may be loaded directly into the RDBMS
•Use the best option to support business needs
• 6. Introducing … The “Data Reservoir”?
•A reservoir is a lake that can also process and refine (your data)
•Wide-ranging source of low-density, lower-value data to complement the DW
• 7. Today’s Layered Data Warehouse Architecture
(diagram; layers and annotations:)
Data Sources: structured sources (operational data, COTS data, master & ref. data, streaming & BAM); data engines & poly-structured sources; content, docs, web & social media, SMS
Data Ingestion feeds the layers from the sources
Raw Data Reservoir: immutable raw data reservoir; raw data at rest is not interpreted
Foundation Data Layer: immutable modelled data in business-process-neutral form, abstracted from business process changes
Access & Performance Layer: past, current and future interpretation of enterprise data, structured to support agile access & navigation
Information Interpretation: data science, information services, pre-built & ad-hoc BI assets, enterprise performance management, virtualization & query federation
Discovery Lab Sandboxes: project-based data stores to support specific discovery objectives
Rapid Development Sandboxes: project-based data stores to facilitate rapid content / presentation delivery
• 8. Combining Oracle RDBMS with Hadoop + NoSQL
•High-value, high-density data goes into the Oracle RDBMS
•Better support for fast queries, summaries, referential integrity etc.
•Lower-value, lower-density data goes into Hadoop + NoSQL
‣Also provides flexible schema, more agile development
•Successful next-generation BI+DW projects combine both - neither on its own is sufficient
• 9. Options for Implementing a Data Reservoir
•Can add a Hadoop cluster, on commodity/existing server hardware, and link it to the Oracle DB
‣Use ODI etc. for data transfer between Hadoop + Oracle
•Can implement using VMs etc. for a prototyping exercise
‣But beware of shared/virtualized storage for real production usage
•Approach taken by most of our “starter” customers, and by us in development
• 10. Oracle’s Engineered System Data Reservoir Platform (diagram-only slide)
• 11. Hadoop Distribution Options
•Cloudera CDH ‣Used in the Oracle Big Data Appliance; typically first to be supported with ODI etc.
•Hortonworks HDP ‣Usually second to be supported; supports Tez, but late with Spark etc.
•MapR ‣Some prefer this, but rarely certified with Oracle products
•Pivotal / ODP ‣Sometimes found in use with banks etc., but also rarely certified
•..etc
• 12. Oracle’s Big Data Products
•Oracle Big Data Appliance ‣Optimized hardware for Hadoop processing ‣Cloudera Distribution incl. Hadoop ‣Oracle Big Data Connectors, ODI etc.
•Oracle Big Data Connectors
•Oracle Big Data SQL
•Oracle NoSQL Database
•Oracle Data Integrator
•Oracle R Distribution
•OBIEE, BI Publisher and Endeca Info Discovery
• 13. Oracle Big Data Appliance
•Engineered system for big data processing and analysis
•Optimized for enterprise Hadoop workloads
•288 Intel® Xeon® E5 processors
•1152 GB total memory
•648 TB total raw storage capacity
‣Cloudera Distribution of Hadoop
‣Cloudera Manager
‣Open-source R
‣Oracle NoSQL Database Community Edition
‣Oracle Enterprise Linux + Oracle JVM
‣New - Oracle Big Data SQL
• 14. Working with Oracle Big Data Appliance
•Don’t underestimate the value of “pre-integrated” - a massive time-saver for the client
‣No need to integrate Big Data Connectors, ODI Agent etc. with HDFS, Hive and so on
•Single support route - raise an SR with Oracle, and they will route it to Cloudera if needed
•Single patch process for the whole cluster - OS, CDH etc.
•Full access to Cloudera Enterprise features
•Otherwise … just another CDH cluster in terms of SSH access etc.
•We like it ;-)
• 15. Working with Cloudera Hadoop (CDH) - Observations
•Very good product stack, enterprise-friendly, big community; can do lots with the free edition
•Cloudera has its favoured Hadoop technologies - Spark, Kafka
•Also makes use of Cloudera-specific tools - Impala, Cloudera Manager etc.
•But ignores some tools that have value - Apache Tez, for example
•Easy for an Oracle developer to get productive with the CDH stack
•But beware of some immature technologies / products
‣Hive != Oracle SQL
‣Spark is very much an “alpha” product
‣Limitations in things like LDAP integration, end-to-end security
‣Lots of products in the stack = lots of places to go to diagnose issues
• 16. CDH : Things That Work Well
•HDFS as a low-cost, flexible data store / reservoir; Hive for SQL access to structured + semi-structured HDFS data
•Pig, Spark, Python, R for data analysis and munging
•Cloudera Manager and Hue for web-based admin + dev access
(diagram: real-time logs / events, RDBMS imports and file / unstructured imports landing in the HDFS cluster filesystem, catalogued by the Hive Metastore / HCatalog)
• 17. Oracle Big Data Connectors
•Oracle-licensed utilities to connect Hadoop to the Oracle RDBMS
‣Bulk-extract data from Hadoop to Oracle, or expose HDFS / Hive data as external tables
‣Run R analysis and processing on Hadoop
‣Leverage Hadoop compute resources to offload ETL and other work from the Oracle RDBMS
‣Enable Oracle SQL to access and load Hadoop data
• 18. Working with the Oracle Big Data Connectors
•Oracle Loader for Hadoop, Oracle SQL Connector for HDFS - rarely used
‣Sqoop works both ways (Oracle>Hadoop, Hadoop>Oracle) and is “good enough” - see the sketch below
‣OSCH replaced by Oracle Big Data SQL for direct Oracle>Hive access
•Oracle R Advanced Analytics for Hadoop has been very useful though
‣Run MapReduce jobs from R
‣Run R functions across Hive tables
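As an illustration of the Hadoop>Oracle direction, here is a minimal Sqoop export sketch pushing a Hive-produced HDFS directory into an Oracle table; the connection URL, credentials file, table and directory names are all hypothetical:

# Export aggregated rows from HDFS into an Oracle table (names are examples)
sqoop export \
  --connect jdbc:oracle:thin:@dbhost:1521/orcl \
  --username rm_staging \
  --password-file /user/odi/.oracle_pwd \
  --table BLOG_PAGE_HITS \
  --export-dir /data/rm_website_analysis/logfiles/aggregated \
  --input-fields-terminated-by '\t'

The matching sqoop import command covers the Oracle>Hadoop direction, as shown later in the ETL steps.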
• 19. Oracle R Advanced Analytics for Hadoop Key Features
•Run R functions on Hive dataframes
•Write MapReduce functions in R
• 20. Oracle Big Data SQL
•Part of Oracle Big Data 4.0 (BDA-only)
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends the Oracle Data Dictionary to cover Hive
•Extends Oracle SQL and SmartScan to Hadoop
•Extends the Oracle security model over Hadoop
‣Fine-grained access control
‣Data redaction, data masking
‣Uses fast C-based readers where possible (vs. Hive MapReduce generation)
‣Maps Hadoop parallelism to Oracle PQ
‣Big Data SQL engine works on top of YARN
‣Like Spark, Tez, MR2
(diagram: SQL queries from the Exadata database server fan out via Oracle Big Data SQL to SmartScan on both the Exadata storage servers and the Hadoop cluster)
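On the database side, Big Data SQL surfaces a Hive table as an Oracle external table through the ORACLE_HIVE access driver. A minimal sketch, assuming the apachelog_parsed Hive table created later in this deck, an existing DEFAULT_DIR directory object, and illustrative connection details:

# Run on the Exadata side; connection, table and directory names are examples
sqlplus -s system/welcome1 <<'SQL'
CREATE TABLE apachelog_ext (
  host    VARCHAR2(100),
  request VARCHAR2(4000),
  status  VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY DEFAULT_DIR
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=default.apachelog_parsed)
)
REJECT LIMIT UNLIMITED;
SQL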
• 21. Still a Key Role for Data Integration, and BI Tools
•Fast, scalable, low-cost / flexible-schema data capture using Hadoop + NoSQL (BDA)
•Long-term storage of the most important downstream data - Oracle RDBMS (Exadata)
•Fast analysis + business-friendly interface : OBIEE, Endeca (Exalytics), RTD etc.
• 22. Productising the Next-Generation IM Architecture (diagram-only slide)
• 23. OBIEE for Enterprise Analysis Across all Data Sources
•Dashboards, analyses, OLAP analytics, scorecards, published reporting, mobile
•Presented as an integrated business semantic model
•Optional mid-tier query acceleration using the Oracle Exalytics In-Memory Machine
•Access data from RDBMS, applications, Hadoop, OLAP, ADF BCs etc.
(diagram: application, Hadoop / NoSQL and DW / OLAP sources feeding an enterprise semantic business model, an in-memory caching layer, and a business presentation layer of reports and dashboards)
• 24. Adding Search / Discovery Tools
•For searching and cataloging data in the data reservoir
•Typically use concepts of faceted search, and read from the Hive metastore
•Options include Elasticsearch, Cloudera Search / Hue, Oracle Big Data Discovery
• 25. Bringing it All Together : Oracle Data Integrator 12c
•ODI provides an excellent framework for running Hadoop ETL jobs
‣The ELT approach pushes transformations down to Hadoop - leveraging the power of the cluster
•Hive, HBase, Sqoop and OLH/ODCH KMs provide native Hadoop loading / transformation
‣Whilst still preserving RDBMS push-down
‣Extensible to cover Pig, Spark etc.
•Process orchestration
•Data quality / error handling
•Metadata and model-driven
•New in 12.1.3.0.1 - the ability to generate Pig and Spark jobs too
• 26. How This Differs from the Discovery Lab
•We’re still loading and storing into Hadoop and NoSQL, but…
‣There’s governance and change control
‣Data is secured
‣Data loading and pipelines are resilient and “industrialized”
‣We use ETL tools, BI tools and search tools to enable access by end-users
‣We think about design standards, file and directory layouts, metadata etc.
•Build on insights and models created in the Discovery Lab
•Put them into production so the business can rely on them
• 27. Part 2 : Building the Data Reservoir & Data Factory
• 28. Typical RM Project BDA Topology
•Starter BDA rack, or full rack
•Kerberos-secured using the included KDC server
•Integration with corporate LDAP for Cloudera Manager, Hue etc.
•Developer access through Hue, Beeline, RStudio
•End-user access through OBIEE, Endeca and other tools
‣With final datasets usually exported to Exadata or Exalytics
• 29. Typical RM Hadoop + BDD Development Environment
•Development takes place on workstations, not directly on Hadoop / BDA nodes
•The ODI agent needs to be installed on a Hadoop node, or just use the Oozie scheduler
•BDD typically runs on dedicated servers, and can also be clustered
•CDH 5.3 is a good place to start in terms of compatibility, being supported etc.
•Can usually use CDH Express, but the full version can be trialled for 60 days
‣Useful for Cloudera Navigator, and for testing LDAP integration with CM
• 30. Components Required for Typical Production Environment
•Hadoop cluster - typically 6-20 nodes, CDH or Hortonworks HDP with YARN / Hadoop 2.0
‣Can deploy on-premise, or in the cloud (AWS etc.) using Cloudera Director
•Oracle Database, ideally Exadata for Big Data SQL capabilities
•ODI12c 12.1.3.0.1 with Big Data Options (additional license required over ODI EE)
•Oracle Big Data Discovery
‣Currently only certified on CDH 5.3, no Kerberos support yet
•Oracle Business Intelligence 11g
‣Limited Hive compatibility with 11.1.1.7; 11.1.1.9 promises HiveServer2 + Impala support
• 31. Complete Oracle Big Data Product Stack (diagram-only slide)
• 32. Typical Configuration Tasks Post-Install
•Configure the BDA directory structure, user access, LDAP integration etc.
•Connect ODI12c 12.1.3.0.1 to Hive, HDFS, Pig and Spark on the Hadoop cluster
•Connect OBIEE11g to Hive (and Impala)
•Set up a developer workstation with client libraries, ODI Studio, OBIEE BI Administrator etc.
• 33. Configuring Hadoop (BDA) for LDAP Integration
•Both Cloudera Manager (with CDH Enterprise) and Hue can be linked to corporate LDAP
•Hive, Impala etc. also need to be configured if you want to use Apache Sentry
• 34. Configure HDFS Directory Structure, Permissions
•Best practice is to create application-specific HDFS directories for shared data
•Separate ETL out from archiving; store data in subdirectory partitions
•Use the POSIX security model to grant read-only access to groups of users
•Consider using the new HDFS ACLs where appropriate (but beware the memory implications)
•Example layout (see the example commands below):
/user/mrittman/scratchpad
/user/ryeardley/scratchpad
/user/mpatel/scratchpad
/data/rm_website_analysis/logfiles/incoming
/data/rm_website_analysis/logfiles/archive
/data/rm_website_analysis/tweets/incoming
/data/rm_website_analysis/tweets/archive
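A minimal sketch of setting up this layout from the command line; the owning user and the group names (etl, analysts) are hypothetical:

# Create the shared application directories (paths from the slide above)
hdfs dfs -mkdir -p /data/rm_website_analysis/logfiles/incoming \
                   /data/rm_website_analysis/logfiles/archive \
                   /data/rm_website_analysis/tweets/incoming \
                   /data/rm_website_analysis/tweets/archive

# Owner gets read/write, the etl group read/execute, others nothing
hdfs dfs -chown -R mrittman:etl /data/rm_website_analysis
hdfs dfs -chmod -R 750 /data/rm_website_analysis

# Optionally grant a second group read-only access with an HDFS ACL
hdfs dfs -setfacl -R -m group:analysts:r-x /data/rm_website_analysis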
• 35. Consider Access Control to Hive, Impala Tables
•The usual access control strategy is to limit users to accessing data through Hive tables
•Consider using Apache Sentry to provide RBAC over Hive and Impala tables
‣Column-based restrictions are possible through SQL views
‣Requires Kerberos authentication and Hive/Impala LDAP integration as prerequisites
•Oracle Big Data SQL is potentially a more complete solution, if available
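For illustration, a minimal Sentry role setup issued through Beeline; the JDBC URL, role, group and database names are hypothetical, and Sentry is assumed to be already enabled for HiveServer2:

# Create a read-only role and grant it to an LDAP group (names are examples)
beeline -u "jdbc:hive2://bda-node1:10000/default;principal=hive/bda-node1@EXAMPLE.COM" <<'SQL'
CREATE ROLE analyst_role;
GRANT SELECT ON DATABASE rm_website_analysis TO ROLE analyst_role;
GRANT ROLE analyst_role TO GROUP analysts;
SQL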
• 36. Configuring ODI12c 12.1.3.0.1 for Hadoop Data Integration
•New Hadoop DS technology used for registering base cluster details
•New WebLogic Hive drivers used for Hive table access
•Pig and Spark datasources configured for Pig Latin / Spark execution
•Either the client workstation needs to be configured as a Hadoop client, or the ODI agent installed on a Hadoop node
‣To execute Pig, Hive etc. mappings
•Option now to use the Oozie scheduler rather than the ODI agent
‣Avoids the need to install the ODI agent on the cluster
‣Integrates ODI workflows with other Hadoop scheduling
• 37. Configuring OBIEE for Cloudera Impala Access
•Not officially supported with OBIEE 11.1.1.7, but it does work
•Only possible using the Windows version of OBIEE (looser rules around unsupported drivers)
•OBIEE 11.1.1.9 will come with Impala support
•Use the Cloudera ODBC drivers
•Configure the Database Type as Apache Hadoop
•For earlier versions of Impala, you may need to disable ORDER BY in Database Features and have the BI Server do the sorting
•The issue is that earlier versions of Impala required a LIMIT clause with all ORDER BY clauses
‣OBIEE could use LIMIT, but doesn’t for Impala at the moment (because it is not supported)
• 38. Configuring OBIEE to Access a Kerberos-Secured Cluster
•Most production Hadoop clusters are Kerberos-secured
•OBIEE can access secured clusters with the appropriate ODBC drivers
•Typically install a Kerberos client on the Windows workstation, and on the server side
•If OBIEE runs using a system service account, ensure it can request a ticket too
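A minimal sketch of obtaining a ticket for such a service account before the BI Server connects; the keytab path and principal are hypothetical:

# Request a Kerberos ticket from a keytab, then verify it (names are examples)
kinit -kt /etc/security/keytabs/obiee.keytab obiee@EXAMPLE.COM
klist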
• 39. Configuring Oracle Big Data Discovery
•Configuration is done during BDD installation, tied to a particular Hadoop cluster
•Specify the Cloudera Manager + Hadoop service URLs
•May need to adjust the RAM allocated to Spark workers in Cloudera Manager
‣Currently only Spark Standalone (not YARN) is supported
• 40. End-to-End Oracle Big Data Example
•Rittman Mead want to understand the drivers and audience for their website
‣What is our most popular content? Who are the most in-demand blog authors?
‣Who are the influencers? What do they read?
•Three data sources in scope: RM website logs, Twitter stream, website posts, comments etc.
• 41. Two Analysis Scenarios : Reporting, and Data Discovery
•The initial task will be to ingest data from the webserver logs, the Twitter firehose, and site content + reference data
•Land in the Hadoop cluster; basic transform, format, store; then analyse the data:
‣Combine with Oracle Big Data SQL for structured OBIEE dashboard analysis - what pages are people visiting? who is referring to us on Twitter? what content has the most reach?
‣Combine with site content, semantics, text enrichment; catalog and explore using Oracle Big Data Discovery - why is some content more popular? does sentiment affect viewership? what content is popular, where?
• 42. Data Sources used for ETL Ingestion & Reporting Exercise
(diagram: Flume ingests logs and tweets into a Cloudera CDH 5.3 BDA Hadoop cluster of Spark / Hive / HDFS nodes; Big Data SQL executes on the BDA, serving filtered and projected rows / columns and dimension attributes to Exadata; OBIEE on Exalytics, with TimesTen and 12c In-Memory, publishes the results - ingest, process, publish)
• 43. Apache Flume : Distributed Transport for Log Activity
•Apache Flume is the standard way to transport log files from source through to target
•The initial use-case was webserver log files, but it can transport any file from A>B
•Does not do data transformation, but can send to multiple targets / target types
•Mechanisms and checks to ensure successful transport of entries
•Has a concept of “agents”, “sinks” and “channels”
•Agents collect and forward log data
•Sinks store it in the final destination
•Channels store log data en-route
•Simple configuration through INI-style files
•Handled outside of ODI12c
• 44. Flume Source / Target Configuration
•Conf file for the source system agent: TCP port, channel size + type, source type
•Conf settings for the target agent, through CM: TCP port, channel size + type, sink type
•(see the configuration sketch below)
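A minimal sketch of the two conf files, assuming a source webserver shipping Apache access logs to a collector agent on the BDA via Avro; hostnames, ports, agent names and paths are all hypothetical:

# source-agent.conf - runs on the webserver, tails the access log, forwards via Avro
agent1.sources = apache_log
agent1.channels = ch1
agent1.sinks = avro_fwd
agent1.sources.apache_log.type = exec
agent1.sources.apache_log.command = tail -F /var/log/httpd/access_log
agent1.sources.apache_log.channels = ch1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.sinks.avro_fwd.type = avro
agent1.sinks.avro_fwd.hostname = bda-node1.rittmandev.com
agent1.sinks.avro_fwd.port = 4545
agent1.sinks.avro_fwd.channel = ch1

# target-agent.conf - runs on the BDA, receives Avro events, writes them to HDFS
agent2.sources = avro_in
agent2.channels = ch2
agent2.sinks = hdfs_out
agent2.sources.avro_in.type = avro
agent2.sources.avro_in.bind = 0.0.0.0
agent2.sources.avro_in.port = 4545
agent2.sources.avro_in.channels = ch2
agent2.channels.ch2.type = memory
agent2.channels.ch2.capacity = 10000
agent2.sinks.hdfs_out.type = hdfs
agent2.sinks.hdfs_out.hdfs.path = /user/flume/rm_website_logs
agent2.sinks.hdfs_out.hdfs.fileType = DataStream
agent2.sinks.hdfs_out.channel = ch2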
• 45. Also - Apache Kafka : Reliable, Message-Based
•Developed by LinkedIn, designed to address Flume issues around reliability and throughput
‣(though many of those issues have been addressed since)
•Designed for persistent messages as the common use case
‣Website messages, events etc. vs. log file entries
•Consumer (pull) rather than producer (push) model
•Supports multiple consumers per message queue
•More complex to set up than Flume, and can use Flume as a consumer of messages
‣But gaining popularity, especially alongside Spark Streaming
• 46. Starting Flume Agents, Check Files Landing in HDFS Directory
•Start the Flume agents on the source and target (BDA) servers
•Check that incoming file data starts appearing in HDFS
‣Note - files will be continuously written-to as entries are added to the source log files
‣The channel size for the source and target agents determines the max number of events buffered
‣If the buffer is exceeded, new events are dropped until the buffer < channel size
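Continuing the sketch above, the agents are started with flume-ng and the landing directory checked with the HDFS CLI; the agent names match the hypothetical conf files:

# On the webserver and on the BDA node respectively
flume-ng agent --conf /etc/flume-ng/conf --conf-file source-agent.conf --name agent1
flume-ng agent --conf /etc/flume-ng/conf --conf-file target-agent.conf --name agent2

# Then confirm events are landing in the HDFS target directory
hdfs dfs -ls /user/flume/rm_website_logs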
• 47. Adding Social Media Datasources to the Hadoop Dataset
•The log activity from the Rittman Mead website tells us what happened, but not “why”
•A common customer requirement now is to get a “360-degree view” of their activity
‣Understand what’s being said about them
‣External drivers for interest, activity
‣Understand more about customer intent, opinions
•One example is to add details of social media mentions, likes, tweets and retweets etc. to the transactional dataset
‣Correlate Twitter activity with sales increases, drops
‣Measure the impact of a social media strategy
‣Gather and include textual, sentiment and contextual data from surveys, media etc.
• 48. Accessing the Twitter “Firehose”
•Twitter provides an API for developers to use to consume the Twitter “firehose”
•Can specify keywords to limit the tweets consumed
•Free service, but some limitations on actions (number of requests etc.)
•Install an additional Flume source JAR (pre-built available, but best to compile from source)
‣https://github.com/cloudera/cdh-twitter-example
•Specify the Twitter developer API key and keyword filters in the Flume conf settings
• 49. Making the Webserver Log Data Available to ODI
•Flume log data from the webserver arrives as files in HDFS
•Can either be accessed in that form by ODI, or presented as a Hive table to ODI using a SerDe
‣Both are fine, but creating the Hive table in advance makes the ODI developer’s job simpler
• 50. Creating a Hive Table over the Log Data, using SerDe
•Hive works by defining a table structure over data in HDFS, typically plain text with a delimiter
•But it can make use of SerDes (serializer-deserializers) to parse other formats
•Takes semi-structured data (Apache Combined Log Format) and turns it into structured (Hive)
‣Can also use IKM File to Hive with the same SerDe definition, to do this within ODI

CREATE EXTERNAL TABLE apachelog_parsed(
  host STRING, identity STRING, user STRING, time STRING,
  request STRING, status STRING, size STRING,
  referer STRING, agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE
LOCATION '/user/flume/rm_website_logs';
• 51. Copying SerDe JAR Files to Hadoop Lib Directory
•Make sure any SerDe JAR files for parsing Hive table data are copied to the Hadoop lib directory
•Do this for all Hadoop nodes in the cluster
sudo cp /usr/lib/hive/lib/hive-contrib-0.13.1-cdh5.3.0.jar /usr/lib/hadoop/lib
• 52. Making Twitter Data Available to ODI
•The simplest approach again is to define a Hive table over the Twitter data
•Arrives in files via a Flume agent, but in JSON format
•Potentially contains more fields than we are interested in
•Can address this in the ODI data load, but simpler to parse and select the elements of interest beforehand
• 53. Two-Stage Hive Table Creation using JSON SerDe
•The initial table uses a JSON SerDe to parse all Twitter JSON documents in the HDFS directory
•Clone + build from https://github.com/cloudera/cdh-twitter-example/tree/master/hive-serdes

CREATE EXTERNAL TABLE `tweets`(
  `id` bigint COMMENT 'from deserializer',
  `created_at` string COMMENT 'from deserializer',
  `source` string COMMENT 'from deserializer',
  `favorited` boolean COMMENT 'from deserializer',
  `retweeted_status` struct<text:string,user:struct<screen_name:string,name:string>,retweet_count:int> COMMENT 'from deserializer',
  `entities` struct<urls:array<struct<expanded_url:string>>,user_mentions:array<struct<screen_name:string,name:string>>,hashtags:array<struct<text:string>>> COMMENT 'from deserializer',
  `text` string COMMENT 'from deserializer',
  `user` struct<screen_name:string,name:string,friends_count:int,followers_count:int,statuses_count:int,verified:boolean,utc_offset:int,time_zone:string> COMMENT 'from deserializer',
  `in_reply_to_screen_name` string COMMENT 'from deserializer')
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://bigdatalite.rittmandev.com:8020/user/oracle/data/tweets';
• 54. Two-Stage Hive Table Creation using JSON SerDe
•The second table extracts the individual fields from the STRUCT datatypes in the first table
‣Could be done through a view, but Big Data Discovery doesn’t support them yet

CREATE TABLE `tweets_expanded` AS
select
  `tweets`.`id`,
  `tweets`.`created_at`,
  `tweets`.`user`.screen_name as `user_screen_name`,
  `tweets`.`user`.friends_count as `user_friends_count`,
  `tweets`.`user`.followers_count as `user_followers_count`,
  `tweets`.`user`.statuses_count as `user_tweets_count`,
  `tweets`.`text`,
  `tweets`.`in_reply_to_screen_name`,
  `tweets`.`favorited`,
  `tweets`.`retweeted_status`.user.screen_name as `retweet_user_screen_name`,
  `tweets`.`retweeted_status`.retweet_count as `retweet_count`,
  `tweets`.`entities`.urls[0].expanded_url as `url1`,
  `tweets`.`entities`.urls[1].expanded_url as `url2`,
  `tweets`.`entities`.hashtags[0].text as `hashtag1`,
  `tweets`.`entities`.hashtags[1].text as `hashtag2`,
  `tweets`.`entities`.hashtags[2].text as `hashtag3`,
  `tweets`.`entities`.hashtags[3].text as `hashtag4`,
  `tweets`.`entities`.user_mentions[0].screen_name as `user_mentions_screen_name1`,
  `tweets`.`entities`.user_mentions[1].screen_name as `user_mentions_screen_name2`,
  `tweets`.`entities`.user_mentions[2].screen_name as `user_mentions_screen_name3`,
  `tweets`.`entities`.user_mentions[3].screen_name as `user_mentions_screen_name4`,
  `tweets`.`entities`.user_mentions[4].screen_name as `user_mentions_screen_name5`
from `tweets`;
• 55. Configuring the ODI12c 12.1.3.0.1 Hadoop Datasource
•New feature in ODI 12.1.3.0.1 with Big Data Extensions
•Defines the physical server and Java library locations for other tools (Pig etc.) to use
‣Namenode location
‣Working area in HDFS for ODI
‣Location on HDFS to store basic details of the ODI installation / repo
• 56. Configuring the ODI12c 12.1.3.0.1 Hive Datasource
•Used for reverse-engineering Hive table structures from Hadoop
•Uses a JDBC connection, with the new WLS-derived driver
•Need to also either install the Hadoop/Hive client on the ODI Studio workstation, or install the ODI Agent on the target Hadoop cluster to actually execute mappings
‣The new option to use Oozie removes the need for the ODI Agent though
• 57. Import Hive Table Metadata into ODI Repository
•Connections to Hive, Hadoop (and Pig) were set up earlier
•Define physical and logical schemas, then reverse-engineer the table definitions into the repository
‣Can be temperamental with tables using non-standard SerDes; make sure the JARs are registered
• 58. Data Flow through the Hadoop + Exadata Data Reservoir
(diagram: as slide 42, but with GoldenGate alongside Flume for ingest into the CDH 5.3 BDA Hadoop cluster; Big Data SQL serves filtered and projected rows / columns and dimension attributes to Exadata; OBIEE on Exalytics with TimesTen and 12c In-Memory publishes - ingest, process, publish)
• 59. Major ETL Steps
1. Join the initial log data extract to additional reference data (already in Hive)
2. Supplement with additional Oracle RDBMS data (brought in via Sqoop)
3. Filter the log data to leave just requests for blog pages
4. Take the Twitter data, and filter to just tweets referencing RM web pages
5. Join Twitter activity to page hits, to create an aggregate for the two
6. Geocode page hits to determine the country + city of the visitor
7. Sessionize the log data for use with an R classification routine
• 60. ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•IKM Hive Append can be used to perform Hive table joins, filtering, aggregation etc.
•INSERT only; no DELETE, UPDATE etc.
•Join to other Hive tables, or combine with the Sqoop KMs etc. to bring in Oracle data
•Supports most ODI operators
‣Filter
‣Aggregate
‣Join (ANSI-style)
‣etc.
• 61. ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•ODI 12.1.3.0.1 replaces the previous template-style KMs (IKM Hive-to-Hive Control Append) with new component-style KMs
‣Makes it possible to mix-and-match sources
‣Enables a logical mapping to generate Hive, Pig and Spark code
• 62. ETL Step 1 : Join Incoming Log Hive Table to Hive Ref Data
•Executing the mapping generates HiveQL code, executed through an ODI Agent (or Oozie)
•The code runs on the Hadoop cluster, compiling down to Java MapReduce code
• 63. ETL Step 2 : Supplement with Oracle Reference Data
•In this step, the log data will be supplemented with additional reference data in Oracle
•Uses Sqoop (LKM SQL to Hive Sqoop) to extract Oracle data into a Hive staging table
•Join the temporary Hive table to the main log Hive table
‣The logical mapping just references the Oracle source table; no need for the mapping designer to consider Sqoop
• 64. ETL Step 2 : Supplement with Oracle Reference Data
•Mapping physical details specify the Sqoop KM for the extract (LKM SQL to Hive Sqoop)
•IKM Hive Append used for the join and load into the Hive target
• 65. ETL Step 2 : Supplement with Oracle Reference Data
•Mapping execution then runs in three stages:
‣Create a temporary Hive table for staging the data
‣Generate and run a Sqoop job to export the reference data out of the Oracle RDBMS
‣Join the incoming reference Hive table to the log data Hive table
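A sketch of that final join stage, assuming the Sqoop LKM has landed the reference data in a C$_-prefixed Hive staging table (the names are illustrative of ODI's loading-table convention, not taken from the session):

-- stage 3: join the Sqoop-staged Oracle reference data to the log table
INSERT INTO TABLE log_data_with_refdata
SELECT l.*, r.category_name
FROM log_data_enriched l
JOIN c$_0categories r               -- temporary table created by the Sqoop LKM
  ON (l.category_id = r.category_id);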
• 66. Alternative to Batch Replication using Sqoop : GoldenGate
•Oracle GoldenGate 12c for Big Data can replicate database transactions into Hadoop
•Load directly into Hive / HDFS, or feed transactions into Apache Flume as Flume events
•Provides a way to replicate Oracle + other RDBMS data into the data reservoir
‣Works with Flume to provide a single streaming route into the data reservoir
• 67. Enabling Oracle Database 12c for GoldenGate Replication
•Oracle GoldenGate 11gR2 for Oracle Database introduced Integrated Capture Mode
‣Integrated with the database; just enable with alter system set enable_goldengate_replication=true
‣Required for Oracle Database 12c container databases (as found on the Big Data Lite 4.1 VM)
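In SQL*Plus this comes down to the parameter named above, plus the standard ARCHIVELOG / supplemental-logging prerequisites listed in the configuration steps on the next slide (a sketch of the usual sequence, not a transcript from the session):

ALTER SYSTEM SET enable_goldengate_replication = TRUE;

-- standard capture prerequisites
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE OPEN;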
• 68. Oracle RDBMS to Hive via Flume Configuration Steps
1. Configure the source database for ARCHIVELOG mode, integrated capture and supplemental logging
2. Create a data source definition file to specify the database schema / tables to replicate
3. Set up the database capture (extract) process to write transactions to the trail file
4. Configure the GoldenGate Flume adapter to send transactions written to the trail file to a Flume agent, via Avro RPC messages
5. Set up and configure a Flume agent to receive those messages, and write them in Hive data storage format to HDFS for the target Hive table

GGSCI status output:
Program   Status    Group    Lag at Chkpt   Time Since Chkpt
MANAGER   RUNNING
EXTRACT   RUNNING   FLUME    00:00:00       00:00:02
EXTRACT   RUNNING   ORAEXT   00:00:10       00:00:02

Generate test transactions, then confirm they arrive in Hive:
sqlplus gg_test@orcl/welcome1
begin
  P_GENERATE_LOGS(100);
end;

select CONCAT('Rows loaded from gg_test.logs into HDFS via Flume: ', count(*))
from gg_test.logs;
…
Rows loaded from gg_test.logs into HDFS via Flume: 100
• 69. ETL Step 3 : Filter Log Data to Retain Just Blog Page Views
•Same approach as with the first mapping; Hive source to Hive target
•Uses a Filter operator to add a WHERE clause to the HiveQL SELECT statement
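A sketch of the generated filter, assuming blog post URLs start with a date prefix such as /2015/ (the same /20 prefix test used in the Pig example later in this deck; table and column names are illustrative):

INSERT INTO TABLE blog_pageviews
SELECT host, request_date, request_page
FROM log_data_with_refdata
WHERE request_page IS NOT NULL
  AND SUBSTRING(request_page, 0, 3) = '/20';   -- keep only blog post URLs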
• 70. ETL Step 4 : Filter Tweets to Just Leave RM Blog References
•Same process as the previous step; extract from Hive source, filter, load into Hive target
•Filter on two URL columns, as a tweet can contain multiple URL references
‣Two picked as an arbitrary limit to URL extraction
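In HiveQL terms the two-column filter looks like this - a sketch only, with the source table and url1/url2 column names assumed (rm_linked_tweets is the output table name used later in the BDD section):

INSERT INTO TABLE rm_linked_tweets
SELECT *
FROM all_tweets
WHERE url1 LIKE '%rittmanmead.com%'
   OR url2 LIKE '%rittmanmead.com%';   -- a tweet may carry more than one URL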
• 71. Mapping Variant : Generate as Pig Latin vs. HiveQL
•ODI 12.1.3.0.1 comes with the ability to generate Pig Latin as well as HiveQL
•An alternative to Hive; defines data manipulation as dataflow steps (like an execution plan)
•Start with one or more data sources, add steps to apply filters, group, project columns
•Generates MapReduce to execute the data flow, similar to Hive; extensible through UDFs

a = load '/user/oracle/pig_demo/marriott_wifi.txt';
b = foreach a generate flatten(TOKENIZE((chararray)$0)) as word;
c = group b by word;
d = foreach c generate COUNT(b), group;
store d into '/user/oracle/pig_demo/pig_wordcount';

[oracle@bigdatalite ~]$ hadoop fs -ls /user/oracle/pig_demo/pig_wordcount
Found 2 items
-rw-r--r-- 1 oracle oracle    0 2014-10-11 11:48 /user/oracle/pig_demo/pig_wordcount/_SUCCESS
-rw-r--r-- 1 oracle oracle 1965 2014-10-11 11:48 /user/oracle/pig_demo/pig_wordcount/part-r-00000
[oracle@bigdatalite ~]$ hadoop fs -cat /user/oracle/pig_demo/pig_wordcount/part-r-00000
2 .
1 I
6 a
...
• 72. Configuring the ODI12c 12.1.3.0.1 Pig Datasource
•A way of linking a Pig execution environment to a previously-defined Hadoop data server
•Also gives the ability to define additional JARs to use with Pig - DataFu, Piggybank etc.
•Can be defined as either Local (running Pig code on the workstation) or MapReduce
• 73. Configuring a Mapping for Pig Latin Code Generation
•On the logical mapping, set the Staging Location Hint to the Pig logical schema
•For the mapping operators, set the Execute on Hint to Staging
‣Can be set as a property for the whole mapping
• 74. Creating a Physical Mapping Configured for Pig Latin
•Create an additional deployment specification for the Pig physical mapping
•Mapping operators will use Pig component KMs
•Set the KM for the target table or file to <Default> (from the original IKM Hive Append)
• 75. Executing a Pig Latin Mapping
•Can run in either Local or MapReduce mode
‣Local is usually faster for unit testing; MapReduce runs on the full Hadoop cluster
• 76. ETL Step 5 : Join Tweets to Log Entries, Aggregate
•Simple join between two Hive tables, after aggregating their contents
‣Previous transformations in earlier mappings standardised the URL format
•Add page view and tweet totals to the list of blog pages accessed
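An aggregate-then-join sketch of this step in HiveQL (table and column names are illustrative, carried over from the earlier sketches):

INSERT INTO TABLE pageviews_and_tweets
SELECT p.request_page, p.total_hits, t.total_tweets
FROM (SELECT request_page, COUNT(*) AS total_hits
      FROM blog_pageviews GROUP BY request_page) p
JOIN (SELECT url1 AS request_page, COUNT(*) AS total_tweets
      FROM rm_linked_tweets GROUP BY url1) t
  ON (p.request_page = t.request_page);   -- join on the standardised URL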
• 77. ETL Step 6 : Geocode Log Entries using IP Address
•Another requirement we have is to "geocode" the webserver log entries
•Based on the fact that IP ranges can usually be attributed to specific countries
•Not functionality normally found in Hive etc., but can be done with add-on APIs
•Approach used by Google Analytics etc. to show where visitors are located
• 78. How GeoIP Geocoding Works
•Uses the free GeoIP API and database from MaxMind
•Convert the IP address to an integer
•Find which integer range our IP address sits within
•But Hive can't use BETWEEN in a join…
•Solution : expose the PAGEVIEWS Hive table using Big Data SQL, then join to the lookup table in the Oracle database
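The conversion is base-256 positional arithmetic: 81.155.176.12 becomes 81*16777216 + 155*65536 + 176*256 + 12 = 1369157644. In Oracle SQL that can be expressed as below (a sketch; the hostname column follows the external table defined later in this deck):

SELECT TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 1)) * 16777216
     + TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 2)) * 65536
     + TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 3)) * 256
     + TO_NUMBER(REGEXP_SUBSTR(hostname, '[0-9]+', 1, 4)) AS ip_integer
FROM access_per_post_exttab;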
• 79. Oracle Big Data SQL and Data Integration
•Gives us the ability to easily bring Hadoop (Hive) data into Oracle-based mappings
•Allows us to create Hive-based mappings that use Oracle SQL for transforms, joins
•Faster access to Hive data for real-time ETL scenarios
•Through Hive, brings NoSQL and semi-structured data access to Oracle ETL projects
•For our scenario - join weblog + customer data in the Oracle RDBMS, no need to stage in Hive
• 80. Using Big Data SQL in an ODI12c Mapping
•By default, the Hive table has to be exposed as an ORACLE_HIVE external table in Oracle first
•Then register that Oracle external table in the ODI repository + model
1. External table creation in Oracle
2. Register in ODI Model
3. Logical mapping using just Oracle tables
• 81. New KM : LKM Hive to Oracle (Big Data SQL)
•The new KM works in a similar way to the Sqoop KM : creates a temporary ORACLE_HIVE table to expose Hive data in the Oracle environment
‣Allows Hive+Oracle joins by auto-creating the ORACLE_HIVE exttab definition to enable Big Data SQL Hive table access
• 82. ODI12c Mapping Creates Temp Exttab, Joins to Oracle
•Hive table AP uses LKM Hive to Oracle (Big Data SQL)
•Big Data SQL Hive external table created as a temporary object
•Main integration SQL routine uses a regular Oracle SQL join (including use of advanced SQL functions, e.g. REGEXP_SUBSTR)
•IKM Oracle Insert loads the target
• 83. ETL Step 7 : Sessionize Log Data, for R Classification Model
•The Discovery Lab part of the masterclass created a classification model using R
•Used as input a sessionized version of the log activity, grouping page views within 60s
•The sessionization routine was written as a Pig script, using DataFu and Piggybank UDFs
‣DataFu is a library of Pig functions initially developed by LinkedIn, now an Apache project
‣Piggybank is a community-created library of Pig UDFs and store/load routines
•So why was Pig used for this sessionization task?
• 84. Apache Pig Characteristics vs. Hive
•Ability to load data into a defined schema, or use schema-less (access fields by position)
•Fields can contain nested fields (tuples)
•Grouping records on a key doesn't aggregate them; it creates a nested set of rows in a column
•Uses "lazy execution" - only evaluates the data flow once the final output has been requested
•Makes Pig an excellent language for interactive data exploration
• 85. Pig Data Processing Example : Count Page Request Totals

raw_logs = LOAD '/user/oracle/rm_logs/' USING TextLoader AS (line:chararray);
logs_base = FOREACH raw_logs GENERATE FLATTEN (
  REGEX_EXTRACT_ALL (line,
    '^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')
  ) AS (
    remoteAddr: chararray, remoteLogname: chararray, user: chararray, time: chararray,
    request: chararray, status: chararray, bytes_string: chararray, referrer: chararray,
    browser: chararray);
page_requests = FOREACH logs_base GENERATE SUBSTRING(time,3,6) as month,
  FLATTEN(STRSPLIT(request,' ',5)) AS (method:chararray, request_page:chararray, protocol:chararray);
page_requests_short = FOREACH page_requests GENERATE $0,$2;
page_requests_short_filtered = FILTER page_requests_short BY
  (request_page is not null AND SUBSTRING(request_page,0,3) == '/20');
page_request_group = GROUP page_requests_short_filtered BY request_page;
page_request_group_count = FOREACH page_request_group GENERATE $0,
  COUNT(page_requests_short_filtered) as total_hits;
page_request_group_count_sorted = ORDER page_request_group_count BY $1 DESC;
page_request_group_count_limited = LIMIT page_request_group_count_sorted 10;
• 86. Pig Data Processing Example : Join to Post Titles, Authors
•Pig allows aliases (datasets) to be joined to each other
•Example below adds details of post names, authors; outputs the top pages dataset to file

raw_posts = LOAD '/user/oracle/pig_demo/posts_for_pig.csv' USING TextLoader AS (line:chararray);
posts_line = FOREACH raw_posts GENERATE FLATTEN (STRSPLIT(line,';',10)) AS (
  post_id: chararray, title: chararray, post_date: chararray, type: chararray,
  author: chararray, post_name: chararray, url_generated: chararray);
posts_and_authors = FOREACH posts_line GENERATE title, author, post_name,
  CONCAT(REPLACE(url_generated,'"',''),'/') AS (url_generated:chararray);
pages_and_authors_join = JOIN posts_and_authors BY url_generated,
  page_request_group_count_limited BY group;
pages_and_authors = FOREACH pages_and_authors_join GENERATE url_generated, post_name, author, total_hits;
top_pages_and_authors = ORDER pages_and_authors BY total_hits DESC;
STORE top_pages_and_authors INTO '/user/oracle/pig_demo/top-pages-and-authors.csv' USING PigStorage(',');
• 87. Pig Extensibility through UDFs and Streaming
•Similar to Apache Hive, Pig can be programmatically extended through UDFs
•Example below uses a function defined in a Python script to geocode IP addresses

#!/usr/bin/python
import sys
sys.path.append('/usr/lib/python2.6/site-packages/')
import pygeoip

@outputSchema("country:chararray")
def getCountry(ip):
    gi = pygeoip.GeoIP('/home/nelio/GeoIP.dat')
    country = gi.country_name_by_addr(ip)
    return country

register 'python_geoip.py' using jython as pythonGeoIP;
raw_logs = LOAD '/user/root/logs/' USING TextLoader AS (line:chararray);
logs_base = FOREACH raw_logs GENERATE FLATTEN (
  REGEX_EXTRACT_ALL (line,
    '^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')
  ) AS (
    remoteAddr: chararray, remoteLogname: chararray, user: chararray, time: chararray,
    request: chararray, status: int, bytes_string: chararray, referrer: chararray,
    browser: chararray);
ipaddress = FOREACH logs_base GENERATE remoteAddr;
clean_ip = FILTER ipaddress BY (remoteAddr matches '^.*?((?:\\d{1,3}\\.){3}\\d{1,3}).*?$');
country_by_ip = FOREACH clean_ip GENERATE pythonGeoIP.getCountry(remoteAddr);
• 88. Pig Sessionization Script used in Discovery Lab

register /opt/cloudera/parcels/CDH/lib/pig/datafu.jar;
register /opt/cloudera/parcels/CDH/lib/pig/piggybank.jar;
DEFINE Sessionize datafu.pig.sessions.Sessionize('60m');
DEFINE Median datafu.pig.stats.StreamingMedian();
DEFINE Quantile datafu.pig.stats.StreamingQuantile('0.9','0.95');
DEFINE VAR datafu.pig.VAR();
DEFINE CustomFormatToISO org.apache.pig.piggybank.evaluation.datetime.convert.CustomFormatToISO();
DEFINE ISOToUnix org.apache.pig.piggybank.evaluation.datetime.convert.ISOToUnix();

-- Import and clean logs
raw_logs = LOAD '/user/flume/rm_logs/apache_access_combined' USING TextLoader AS (line:chararray);

-- Extract individual fields
logs_base = FOREACH raw_logs GENERATE FLATTEN (
  REGEX_EXTRACT_ALL(line,
    '^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] "(.+?)" (\\S+) (\\S+) "([^"]*)" "([^"]*)"')
  ) AS (remoteAddr: chararray, remoteLogName: chararray, user: chararray, time: chararray,
        request: chararray, status: chararray, bytes_string: chararray, referrer: chararray,
        browser: chararray);

-- Remove bots
logs_base_nobots = FILTER logs_base BY NOT (browser matches
  '.*(spider|robot|bot|slurp|Bot|monitis|Baiduspider|AhrefsBot|EasouSpider|HTTrack|Uptime|FeedFetcher|dummy).*');

-- Remove useless columns and convert timestamp
clean_logs = FOREACH logs_base_nobots GENERATE CustomFormatToISO(time,'dd/MMM/yyyy:HH:mm:ss Z') as time,
  remoteAddr, request, status, bytes_string, referrer, browser;

-- Sessionize the data
clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
  ordered = ORDER clean_logs BY time;
  GENERATE FLATTEN(Sessionize(ordered)) AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};

-- The following step generates a tsv file in your home directory to download and work with in R
store clean_logs_sessionized into '/user/jmeyer/clean_logs' using PigStorage('\\t','-schema');
• 89. Converting the Pig Script to an ODI Mapping
•Not an obvious translation - Pig data flows don't map 1:1 with Hive set-based transformations
‣Pig aliases use lazy execution: intermediate results aren't materialised as Hive tables
‣Some concepts - GENERATE FLATTEN etc. - don't translate to SQL expressions
‣DataFu and Piggybank UDFs don't have equivalent Hive versions

Pig dataflow:
clean_logs_sessionized = FOREACH (GROUP clean_logs BY remoteAddr) {
  ordered = ORDER clean_logs BY time;
  GENERATE FLATTEN(Sessionize(ordered)) AS (time, remoteAddr, request, status, bytes_string, referrer, browser, sessionId);
};

vs. set-based SQL:
select sum(f.flights)
from flight_performance f
join origin o on (f.origin = o.origin)
where o.origin = 'SFO';
• 90. ODI 12.1.3.0.1 Logical Mapping for Log Sessionization
1. Expression operator used instead of a Hive table target; generated as an ALIAS when deployed as a Pig Latin mapping
2. Table Function operator used to generate another ALIAS by running input attributes through an arbitrary Pig Latin script
3. The only data materialised is in the Hive table at the end of the dataflow
• 91. Expression Mapping Operator Used to Create Next Alias
•Using an Expression rather than a datastore operator creates the transformation "in-line"
•With Pig execution, generates the expression as an ALIAS
•Allows use of expressions (e.g. the CustomFormatToISO Piggybank UDF)
•Filters etc. included in the ALIAS definition
• 92. Table Function Operator used for Executing Pig Commands
•The Table Function operator processes input attributes through an arbitrary script
•In Pig mappings, allows use of more complex Pig transformations
‣GENERATE FLATTEN, use of the DataFu Sessionize UDF
•The final ALIAS defined within the Pig Latin script has to match the name of the Table Function operator
• 93. Pig Latin Generated Script for Sessionization Task
•Creates a single dataflow using a series of ALIASes
•Includes the Pig Latin commands added through the Table Function
•Matches the logic and approach of the original hand-coded Pig script, but now managed within ODI
• 94. Create ODI Package for Processing Steps, and Execute
•Create an ODI Package or Load Plan to run the steps in sequence
‣With a load plan, can also add exceptions and recoverability
•Execute the package to load data into the final Hive tables
• 95. Summary : Data Processing Phase
•We've now processed the incoming data, filtering it and transforming it to the required state
•Joined ("mashed-up") datasets from website activity and social media mentions
•Ingestion and the load/processing stages are now complete
•Now we want to make the Hadoop output available to a wider, non-technical audience…
• 96. Part 3
Reporting and Dashboards across the Data Reservoir using Oracle Big Data SQL + OBIEE
• 97. Options for Sharing Data Reservoir Data with Users
•Several options for reporting on the content in the data reservoir and DW
‣Using a reporting & dashboarding tool compatible with Hive + DW, e.g. OBIEE11g
‣Using a search/data discovery tool, for example Big Data Discovery
‣Export Hadoop/Hive data into Oracle and report from there
[Diagram: the Oracle IM + Big Data reference architecture shown earlier - Event Engine, Data Reservoir, Data Factory, Enterprise Information Store, Reporting, Discovery Lab]
• 98. Alternative to Reporting Against Hadoop : Export to Data Mart
•In most cases, for general reporting access, exporting into the RDBMS makes sense
•Export Hive data from Hadoop into an Oracle Data Mart or Data Warehouse
•Use the Oracle RDBMS for high-value data analysis, with full access to RDBMS optimisations
•Potentially use Exalytics for in-memory RDBMS access
[Diagram: Loading Stage (real-time logs/events, RDBMS imports, file/unstructured imports) → Processing Stage → Store / Export Stage (RDBMS exports, file exports)]
• 99. Using the Right Server for the Right Job
•Hadoop for large-scale, high-speed data ingestion and processing
•Oracle RDBMS and Exadata for long-term storage of high-value data
•Oracle Exalytics for speed-of-thought analytics in TimesTen and Oracle Essbase
• 100. Oracle Business Intelligence and Big Data Sources
•OBIEE 11g from 11.1.1.7 can connect to Hadoop sources
‣OBIEE 11.1.1.7+ supports Hive/Hadoop as a data source, via specific Hive ODBC drivers and the Apache Hive Physical Layer database type
‣But practically, it comes with limitations
‣The current 11.1.1.7 version of OBIEE only ships with HiveServer1 ODBC drivers
‣HiveQL is a limited subset of ISO/Oracle SQL
‣… and Hive access is really slow
• 101. Configuring OBIEE for Hive Access
•As of OBIEE 11.1.1.7, access is through Oracle-supplied DataDirect drivers
‣Not compatible with the HiveServer2 protocol used by CDH4+
‣As a workaround, use the Windows version of OBIEE and Cloudera ODBC drivers
‣OBIEE 11.1.1.9 will come with HiveServer2 drivers (hopefully)
•Need to configure on both the server, and the BI Administration workstation
• 102. Setting up the ODBC Connection to Hadoop Environment
•Example uses OBIEE 11.1.1.7 on Windows, to allow use of the Cloudera Hive ODBC drivers (HiveServer2)
‣The Linux OBIEE 11g version only allows use of the Oracle-supplied HiveServer1 drivers
•Install the ODBC drivers, create a system DSN
•Use username/password authentication, or Kerberos if required
• 103. Importing Hive Metadata
1. Use the BI Administration tool, File > Import Metadata
2. Select the DSN previously created for the Hive datasource
3. Import table metadata from the correct Hive database
4. Set the Database Type to Apache Hadoop
• 104. Testing Hive Connection & Data Retrieval
•Confirm that Hive table data can be returned by the BI Administration tool
‣A basic check before carrying on; should also check with the RPD online too (for the BI Server)
• 105. Building an Initial Business Model from Hive Tables
•Main fact table is based on page requests (ACCESS_PER_POST)
•Pages dimension table (POSTS)
•Simple counts of pages viewed per author, post category etc.
• 106. Federated Hive and Oracle Data via BI Server
•The Oracle Database has a table containing HTTP status codes
•Import into the RPD to include in the business model
• 107. Join Hive Fact (Log) Data to Oracle Reference Data
•The BI Server issues two separate queries; one to Hive, one to Oracle
•Returned datasets are then joined (stitch-join) by the BI Server and returned as a single resultset
• 108. How Can This Be Improved On?
•Gives the ability to supplement Hadoop data with reference data from Oracle, Excel etc.
•But response time is still quite slow
•What about faster versions of Hive - Cloudera Impala for example?
‣Cloudera's answer to Hive query response time issues
‣MPP SQL query engine running on Hadoop; bypasses MapReduce for direct data access
‣Mostly in-memory, but spills to disk if required
‣Uses the Hive metastore to access Hive table metadata
‣Similar SQL dialect to Hive - not as rich though, and no support for Hive SerDes, storage handlers etc.
• 109. How Impala Works
•A replacement for Hive, but uses Hive concepts and data dictionary (metastore)
•MPP (Massively Parallel Processing) query engine that runs within Hadoop
‣Uses the same file formats, security, resource management as Hadoop
•Processes queries in-memory
•Accesses standard HDFS file data
•Option to use Apache Avro, RCFile, LZO or Parquet (column-store)
•Designed for interactive, real-time SQL-like access to Hadoop
[Diagram: BI Server / Presentation Server connecting via the Cloudera Impala ODBC driver to Impala daemons running alongside HDFS on each node of a multi-node Hadoop cluster]
• 110. Enabling Hive Tables for Impala
•Log into Impala Shell, run the INVALIDATE METADATA command to refresh the Impala table list
•Run the SHOW TABLES Impala SQL command to view the tables available
•Run a COUNT(*) on the main ACCESS_PER_POST table to see typical response time

[oracle@bigdatalite ~]$ impala-shell
Starting Impala Shell without Kerberos authentication
[bigdatalite.localdomain:21000] > invalidate metadata;
Query: invalidate metadata
Fetched 0 row(s) in 2.18s
[bigdatalite.localdomain:21000] > show tables;
Query: show tables
+-----------------------------------+
| name                              |
+-----------------------------------+
| access_per_post                   |
| access_per_post_cat_author        |
| …                                 |
| posts                             |
+-----------------------------------+
Fetched 45 row(s) in 0.15s
[bigdatalite.localdomain:21000] > select count(*) from access_per_post;
Query: select count(*) from access_per_post
+----------+
| count(*) |
+----------+
| 343      |
+----------+
Fetched 1 row(s) in 2.76s
• 111. Setting up an ODBC Connection to Impala
•Download ODBC drivers for Impala from the Cloudera website
‣Windows, Linux, Mac, AIX
•Create a system DSN as normal; use port 21050
•Configure authentication
‣For an unsecured cluster, use "No Authentication"
‣For secured, use Kerberos, etc.
•Test the datasource to check successful connectivity
•Complete on both the Windows workstation, and the server hosting the BI Server component
• 112. Recreate Business Model, Re-run Basic Report
•Significant improvement over Hive response time
•Now makes Hadoop suitable for ad-hoc querying

Simple two-table join against Hive data only:
Logical Query Summary Stats: Elapsed time 50, Response time 49, Compilation time 0 (seconds)
vs. simple two-table join against Impala data only:
Logical Query Summary Stats: Elapsed time 2, Response time 1, Compilation time 0 (seconds)
• 113. Re-Create Oracle Query Federation, and Retest
•Add the Oracle HTTP Status table to the business model sourced from Impala data
•Join the HTTP Status table to the Impala fact table in the Physical layer
•Recreate the query to compare response time to the Hive + Oracle version

Federated query joining Hive + Oracle data:
Logical Query Summary Stats: Elapsed time 102, Response time 102, Compilation time 0 (seconds)
vs. federated query joining Impala + Oracle data:
Logical Query Summary Stats: Elapsed time 1, Response time 1, Compilation time 0 (seconds)
• 114. Any Way We Can Improve This Further?
•If available, use Oracle Big Data SQL to query Hive data only, or federated Hive + Oracle
•Access Hive data through the Big Data SQL SmartScan feature, for Exadata-type response times
•Use standard Oracle SQL across both Hive and Oracle data
•Also extends to data in Oracle NoSQL Database
• 115. Oracle Big Data SQL
•Part of Oracle Big Data 4.0 (BDA-only)
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine
•Extends the Oracle Data Dictionary to cover Hive
•Extends Oracle SQL and SmartScan to Hadoop
•Extends the Oracle security model over Hadoop
‣Fine-grained access control
‣Data redaction, data masking
‣Uses fast C-based readers where possible (vs. Hive MapReduce generation)
‣Maps Hadoop parallelism to Oracle PQ
‣Big Data SQL engine works on top of YARN
‣Like Spark, Tez, MR2
[Diagram: SQL queries from the Exadata Database Server are SmartScanned on both the Exadata Storage Servers and, via Oracle Big Data SQL, the Hadoop cluster]
• 116. View Hive Table Metadata in the Oracle Data Dictionary
•Oracle Database 12c 12.1.0.2.0 with the Big Data SQL option can view Hive table metadata
‣Linked by Exadata configuration steps to one or more BDA clusters
•DBA_HIVE_TABLES and USER_HIVE_TABLES expose the Hive metadata
•Oracle SQL*Developer 4.0.3, with Cloudera Hive drivers, can connect to the Hive metastore

SQL> col database_name for a30
SQL> col table_name for a30
SQL> select database_name, table_name
  2  from dba_hive_tables;

DATABASE_NAME                  TABLE_NAME
------------------------------ ------------------------------
default                        access_per_post
default                        access_per_post_categories
default                        access_per_post_full
default                        apachelog
default                        categories
default                        countries
default                        cust
default                        hive_raw_apache_access_log
• 117. Hive Access through Oracle External Tables + Hive Driver
•Big Data SQL accesses Hive tables through the external table mechanism
‣The ORACLE_HIVE external table type imports Hive metastore metadata
‣ORACLE_HDFS requires the metadata to be specified
•Access parameters cluster and tablename specify the Hive table source and BDA cluster

CREATE TABLE access_per_post_categories (
  hostname     varchar2(100),
  request_date varchar2(100),
  post_id      varchar2(10),
  title        varchar2(200),
  author       varchar2(100),
  category     varchar2(100),
  ip_integer   number)
ORGANIZATION EXTERNAL (
  TYPE oracle_hive
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (
    com.oracle.bigdata.tablename=default.access_per_post_categories));
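By contrast, an ORACLE_HDFS table needs its row format spelled out and an explicit HDFS location. The following is a minimal sketch only - table name, columns and path are hypothetical, and the exact access parameter syntax should be checked against the Big Data SQL documentation:

CREATE TABLE weblogs_hdfs (
  host    varchar2(100),
  request varchar2(2000))
ORGANIZATION EXTERNAL (
  TYPE oracle_hdfs
  DEFAULT DIRECTORY default_dir
  ACCESS PARAMETERS (
    com.oracle.bigdata.rowformat: DELIMITED FIELDS TERMINATED BY '\t')
  LOCATION ('/user/oracle/rm_logs'));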
• 118. Big Data SQL Server Dataflow
1. Read data from the HDFS Data Node
‣Direct-path reads
‣C-based readers when possible
‣Use native Hadoop classes otherwise
2. Translate the bytes to Oracle format
3. Apply SmartScan to the Oracle bytes
‣Apply filters
‣Project columns
‣Parse JSON/XML
‣Score models
[Diagram: Data Node disks → Big Data SQL Server External Table Services (RecordReader, SerDe) → Smart Scan]
• 119. Use Rich Oracle SQL Dialect over Hadoop (Hive) Data
•Ranking functions
‣rank, dense_rank, cume_dist, percent_rank, ntile
•Window aggregate functions
‣avg, sum, min, max, count, variance, first_value, last_value
•LAG/LEAD functions
•Reporting aggregate functions
‣sum, avg, ratio_to_report
•Statistical aggregates
‣Correlation, linear regression family, covariance
•Linear regression
‣Fitting of an ordinary-least-squares regression line to a set of number pairs
•Descriptive statistics
•Correlations
‣Pearson's correlation coefficients
•Crosstabs
‣Chi-squared, phi coefficient
•Hypothesis testing
‣Student's t-test, binomial test
•Distribution
‣Anderson-Darling test, etc.
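For instance, a ranking query of this shape runs unchanged over Hive-resident data (a sketch; the table follows the external table defined two slides back, the analysis itself is illustrative):

SELECT author, category,
       COUNT(*) AS page_views,
       RANK() OVER (PARTITION BY category ORDER BY COUNT(*) DESC) AS rank_in_category
FROM access_per_post_categories
GROUP BY author, category;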
• 120. Leverages Hive Metastore for Hadoop Java Access Classes
•As with other next-gen SQL access layers, uses the common Hive metastore table metadata
•Provides the route to the underlying Hadoop data for Oracle Big Data SQL's C-based SmartScan
• 121. Extending SmartScan, and Oracle SQL, Across All Data
•Brings the query-offloading features of Exadata to the Oracle Big Data Appliance
•Query across both Oracle and Hadoop sources
•Intelligent query optimisation applies SmartScan close to ALL data
•Use the same SQL dialect across both sources
•Apply the same security rules, policies, user access rights across both sources
• 122. Example Usage : Use Big Data SQL for Geocoding Exercise
•Earlier on we used ODI and Big Data SQL to join incoming log data to the geocoding table
•Big Data SQL was used as it enabled the Hive data to use a BETWEEN join
•We will now reproduce this using the OBIEE environment
•The benefit is doing it on the fly, outside of ETL
[Diagram: Hive weblog activity table + Oracle geocoding lookup tables → combined output in report form]
• 123. Create ORACLE_HIVE External Table over Hive Table
•Use the ORACLE_HIVE access driver type to create an Oracle external table over the Hive table
•ACCESS_PER_POST_EXTTAB and POSTS_EXTTAB now appear in the Oracle data dictionary
• 124. Import Oracle Tables, Create RPD joining Tables Together
•No need to use Hive ODBC drivers - an Oracle OCI connection instead
•No issue around HiveServer1 vs HiveServer2
•Big Data SQL handles authentication with the Hadoop cluster in the background, Kerberos etc.
•Transparent to OBIEE - all appear as Oracle tables
•Join across schemas if required
• 125. Create Physical Data Model from Imported Table Metadata
•Join the ORACLE_HIVE external tables to the reference table from the Oracle DB
• 126. Recreate Business Model, All Sourced From Oracle
•Map the incoming physical tables into a star schema
•Add an aggregation method for the fact measures
•Add logical keys for the logical dimension tables
•Remove columns from the fact table that aren't measures
• 127. Create Report against Oracle + Big Data SQL Tables
•The BI Server thinks that all data is sourced from Oracle
•Uses full Oracle SQL features; guarantees all Oracle-sourced reports will work if DW data is offloaded to Hadoop (Hive)
•Fast access through the SmartScan feature

WITH SAWITH0 AS (
  select count(T45134.TIME) as c1, T45146.POST_AUTHOR as c2, T44832.DSC as c3
  from BDA_OUTPUT.POSTS_EXTTAB T45146,
       BLOG_REFDATA.HTTP_STATUS_CODES T44832,
       BDA_OUTPUT.ACCESS_PER_POST_EXTTAB T45134
  where (T44832.STATUS = T45134.STATUS and T45134.POST_ID = T45146.POST_ID)
  group by T44832.DSC, T45146.POST_AUTHOR)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4
from (select distinct 0 as c1, D1.c2 as c2, D1.c3 as c3, D1.c1 as c4
      from SAWITH0 D1
      order by c3, c2) D1
where rownum <= 65001
• 128. Uses Concept of Query Franchising vs Query Federation
•The Oracle Database handles all queries for the client tool, then offloads to Hive if needed
•Contrast with query federation - the BI Server has to issue separate SQL queries for each source, then stitch-join the results
‣And be aware of different SQL dialects, DB features etc.
‣The physical SQL shown on the previous slide is handled entirely by Oracle
• 129. Uses Concept of Query Franchising vs Query Federation
•Only the columns (projection) and rows (filtering) required to answer the query are sent back to Exadata
•Storage Indexes are used on both the Exadata Storage Servers and the BDA nodes to skip block reads for irrelevant data
•HDFS caching is used to speed up access to commonly-used HDFS data
• 130. Create Initial Analyses Against Combined Dataset
•Create analyses using full SQL features
•Access to Oracle RDBMS Advanced Analytics functions through EVALUATE, EVALUATE_AGGR etc.
•The Big Data SQL SmartScan feature provides fast, ad-hoc access to Hive data, avoiding MapReduce
• 131. Prepare Physical Model for Big Data SQL Join to GEOIP Data
•Create a SELECT table view in the RPD over the ACCESS_PER_POST_EXTTAB table to derive the IP address integer from the hostname IP address
‣Also add in a conversion of the access date field - for later…
•Import the GEOIP_COUNTRY reference table into the RPD
•Join on the derived IP integer (see next slide)
• 132. Access to Full Set of Oracle Join Types
•No longer restricted to HiveQL equi-joins - Big Data SQL supports all Oracle join operators
•Use this to join the Hive data (using the view over the external table) to the IP range country lookup table using a BETWEEN join operator
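A sketch of the physical SQL this produces; the GEOIP_COUNTRY column names and the view name are assumptions, not taken from the session:

SELECT v.hostname, g.country_name, COUNT(*) AS page_views
FROM access_per_post_view v                                 -- RPD view deriving ip_integer
JOIN geoip_country g
  ON v.ip_integer BETWEEN g.start_ip_int AND g.end_ip_int   -- range (BETWEEN) join
GROUP BY v.hostname, g.country_name;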
• 133. Reports Now Include Country Data via IP Geocoding
•Makes use of Oracle SQL's BETWEEN join operator
•The underlying log + posts data is still sourced from Hive, via Big Data SQL query franchising
• 134. Add In Time Dimension Table
•Enables time-series reporting; a pre-requisite for forecasting (linear regression-type queries)
•Map to the Date field in the view over the ORACLE_HIVE table
‣Convert the incoming Hive STRING field to an Oracle DATE for better time-series manipulation
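A sketch of that string-to-DATE conversion, assuming the field keeps the combined log format timestamp seen in the Pig scripts (e.g. '21/Apr/2015:10:15:32 +0000'; the format mask is an assumption):

SELECT TO_DATE(SUBSTR(request_date, 1, 20), 'DD/Mon/YYYY:HH24:MI:SS') AS access_date
FROM access_per_post_exttab;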
• 135. Now Enables Time-Series Reporting and Country Lookups
• 136. Use Exalytics In-Memory Aggregate Cache if Required
•If further query acceleration is required, the Exalytics In-Memory Cache can be used
•Enabled through the Summary Advisor; caches commonly-used aggregates in the in-memory cache
•Options for TimesTen or the Oracle Database 12c In-Memory Option
•Returns aggregated data "at the speed of thought"
• 137. Part 4
Discovering and Analyzing the Data Reservoir using Oracle Big Data Discovery
• 138. Enable Incoming Site Activity Data for Data Discovery
•Another use-case for Hadoop data is "data discovery"
‣Load data into the data reservoir
‣Catalog and understand separate datasets
‣Enrich data using graphical tools
‣Join separate datasets together
‣Present textual data alongside measures and key attributes
‣Explore and analyse using faceted search
•Combine with site content, semantics, text enrichment
•Catalog and explore using Oracle Big Data Discovery
‣Why is some content more popular? Does sentiment affect viewership? What content is popular, where?
• 139. Oracle Big Data Discovery
•"The Visual Face of Hadoop" - cataloging, analysis and discovery for the data reservoir
•Runs on Cloudera CDH5.3+ (Hortonworks support coming soon)
•Combines Endeca Server + Studio technology with Hadoop-native (Spark) transformations
• 140. Data Sources used for Data Discovery Exercise
[Diagram: a Cloudera CDH5.3 BDA Hadoop cluster (Spark, Hive, HDFS) with BDD Data Processing on its nodes, feeding a BDD node running the DGraph Gateway and the BDD Studio web UI; semi-processed logs (1m rows) and processed Twitter activity are ingested; transformations are written back to the full datasets; uploaded site page and comment contents are persisted in Hive / HDFS; data discovery is performed using the Studio web-based app]
• 141. Oracle Big Data Discovery Architecture
•Adds additional nodes into the CDH5.3 cluster, for running DGraph and Studio
•The DGraph engine is based on Endeca Server technology; can also be clustered
•Hive (HCatalog) used for reading table metadata, mapping back to the underlying HDFS files
•Apache Spark then used to upload (ingest) data into DGraph, typically a 1m row sample
•Data then held for online analysis in DGraph
•Option to write-back transformations to the underlying Hive/HDFS files using Apache Spark
• 142. Ingesting & Sampling Datasets for the DGraph Engine
•Datasets in Hive have to be ingested into the DGraph engine before analysis, transformation
•Can either define an automatic Hive table detector process, or manually upload
•Typically ingests a 1m row random sample
‣A 1m row sample provides > 99% confidence that the answer is within 2% of the value shown, no matter how big the full dataset (1m, 1b, 1q+)
‣Makes interactivity cheap - a representative dataset
[Chart: cost vs. accuracy - the "100% premium" of querying all the data]
• 143. Ingesting Site Activity and Tweet Data into DGraph
•The two output datasets from the ODI process have to be ingested into the DGraph engine
•Upload triggered by a manual call to the BDD Data Processing CLI
‣Runs an Oozie job in the background to profile, enrich and then ingest the data into DGraph

[oracle@bddnode1 ~]$ cd /home/oracle/Middleware/BDD1.0/dataprocessing/edp_cli
[oracle@bddnode1 edp_cli]$ ./data_processing_CLI -t access_per_post_cat_author
[oracle@bddnode1 edp_cli]$ ./data_processing_CLI -t rm_linked_tweets

{
  "@class" : "com.oracle.endeca.pdi.client.config.workflow.ProvisionDataSetFromHiveConfig",
  "hiveTableName" : "rm_linked_tweets",
  "hiveDatabaseName" : "default",
  "newCollectionName" : "edp_cli_edp_a5dbdb38-b065…",
  "runEnrichment" : true,
  "maxRecordsForNewDataSet" : 1000000,
  "languageOverride" : "unknown"
}
• 144. Ingesting Site Activity and Tweet Data into DGraph
[Diagram: the ingestion pipeline in detail - Hive full table → Apache Spark sampling (sampled table) → profiling (profiled sampled table) → enrichment (enriched sampled table) → BDD dataset]
• 145. Ingesting and Sampling Hive Data into Big Data Discovery

[oracle@bigdatalite ~]$ cd /home/oracle/movie/Middleware/BDD1.0/dataprocessing/edp_cli
[oracle@bigdatalite edp_cli]$ ./data_processing_CLI -t access_per_post_cat_author
[oracle@bigdatalite edp_cli]$ ./data_processing_CLI -t rm_linked_tweets
• 146. View Ingested Datasets, Create New Project
•Ingested datasets are now visible in Big Data Discovery Studio
•Create a new project from the first dataset, then add the second
• 147. Automatic Enrichment of Ingested Datasets
•The ingestion process has automatically geo-coded the host IP addresses
•Other automatic enrichments run after the initial discovery step, based on datatypes, content
• 148. Initial Data Exploration On Uploaded Dataset Attributes
•For the ACCESS_PER_POST_CAT_AUTHORS dataset, 18 attributes are now available
•A combination of the original attributes, and derived attributes added by the enrichment process
• 149. Explore Attribute Values, Distribution using Scratchpad
•Click on individual attributes to view more details about them
•Add to the scratchpad; automatically selects the most relevant data visualisation
• 150. Filter (Refine) Visualizations in Scratchpad
•Click on the Filter button to display a refinement list
• 151. Display Refined Data Visualization
•Select refinement (filter) values from the refinement pane
•The visualization in the scratchpad is now filtered by that attribute
‣Repeat to filter by multiple attribute values
• 152. Save Scratchpad Visualization to Discovery Page
•For visualisations you want to keep, you can add them to a Discovery page
•The dashboard / faceted search part of BDD Studio - we'll see more later
• 153. Select Multiple Attributes for Same Visualization
•Select the AUTHOR attribute; see the initial ordered values, distribution
•Add the attribute POST_DATE
‣Choose between multiple instances of the first attribute split by the second
‣Or one visualisation with multiple series
• 154. Data Transformation & Enrichment
•The data ingest process automatically applies some enrichments - geocoding etc.
•Can apply others from the Transformation page - simple transformations & Groovy expressions
• 155. Standard Transformations - Simple & Using Editor
•Group and bin attribute values; filter on attribute values, etc.
•Use the Transformation Editor for custom transformations (Groovy, incl. enrichment functions)
• 156. Datatype Conversion Example : String to Date / Time
•Datatypes can be converted into other datatypes, with the data transformed if required
•Example : convert an Apache Combined Log Format date/time to a Java date/time
• 157. Transformations using Text Enrichment / Parsing
•Uses the Salience text engine under the covers
•Extract terms, sentiment, noun groups, positive / negative words etc.
• 158. Create New Attribute using Derived (Transformed) Values
•Choose the option to Create New Attribute, to add a derived attribute to the dataset
•Preview the changes, then save to the transformation script
• 159. Commit Transforms to DGraph, or Create New Hive Table
•Transformation changes have to be committed to the DGraph sample of the dataset
‣Project transformations are kept separate from other project copies of the dataset
•Transformations can also be applied to the full dataset, using Apache Spark
‣Creates a new Hive table of the complete dataset
• 160. Upload Additional Datasets
•Users can upload their own datasets into BDD, from an MS Excel or CSV file
•Uploaded data is first loaded into a Hive table, then sampled/ingested as normal