Ming Yuan / Alyssa Romeo
Capital One
May 24th, 2016
Simplifying Apache Cascading
2
Apache Cascading
• Open source framework implementing the “chain of responsibility” design pattern
• Abstraction over the MapReduce, Tez, or Flink processing engines for transforming big
data sets on Hadoop
• APIs for constructing and executing data-processing flows
3
PDS Framework on Cascading
A light-weight layer on top of Apache Cascading to
– Manage metadata for inputs and outputs in properties files
– Define data processing rules in properties files
– Support development in a parallel manner
– Make testing easier and more flexible
4
Case Studies
Cascading application 1 – 60% code reduction

  Source code                  Directly using Cascading   After rewriting on the framework
  TranOptimizerTrxnDtl.java    473                        134
  TrxnDtlTransformation.java   278                        81
  PlanTypeCdeCalculation.java  152                        144
  MyMain.java                  n/a                        12
  Total                        903                        371

Cascading application 2 – 70% code reduction

  Source code                  Directly using Cascading   After rewriting on the framework
  PmsmJoin.java                210                        87
  JoinFunc.java                257                        38
  MyMain.java                  n/a                        12
  Total                        467                        137

(Figures are lines of source code before and after adopting the framework.)
5
(Diagram: a single data-processing step reads from source taps and writes to a sink tap; a root configuration file points to the schema files for the sources and the sink, and to a processing-rules file.)
6
Managing Multiple Steps on the Framework
(Diagram: an Application Initiator hands the root configuration to an Application Controller, which runs each transformation step in order; every step is driven by the shared root configuration plus its own schema files and processing-rules file.)
7
Root Configuration
Root-file entries configure application-level components, including
– Hadoop configurations
– Global configuration entries for the application
– Definitions for File Taps (location and schema)
– Definitions for Hive Taps
ATPT_SCHEME_PATH=/devl/rwa/prtnrshp/prtntshp_whirl/whirl_atpt_mntry_dq_schema.txt
ATPT_RETAIN_FIELDS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/ATPT_retain_schema.txt
ATPT_DATA_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/whirl_atpt_mntry_vldtd_hive_extract_us
ATGT_SCHEME_PATH=/devl/rwa/prtnrshp/prtntshp_whirl/whirl_atgt_mntry_dq_schema.txt
ATGT_RETAIN_FIELDS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/ATGT_retain_schema.txt
ATGT_DATA_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/whirl_atgt_mntry_vldtd_hive_extract_us
HADOOP_PROPS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hadoop.properties
FIRST_HIVE_TAP=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hiveone.properties
SECOND_HIVE_TAP=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hivetwo.properties
Root configuration
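Since the root configuration is a plain key=value properties file, it can be loaded with the standard `java.util.Properties` API. A minimal sketch of that idea (the `RootConfig` class and its method names are illustrative, not the framework's actual API; the framework itself does this inside its driver class):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

// Illustrative loader for a root configuration file.
public class RootConfig {
    private final Properties props = new Properties();

    public void load(Reader reader) throws IOException {
        props.load(reader);  // key=value lines, as in the example above
    }

    // Mirrors the documented lookup behavior: returns the value if the
    // key exists in the root configuration file, null otherwise.
    public String get(String key) {
        return props.getProperty(key);
    }

    public static void main(String[] args) throws IOException {
        String sample = "HADOOP_PROPS_PATH=/devl/rwa/hadoop.properties\n";
        RootConfig cfg = new RootConfig();
        cfg.load(new StringReader(sample));
        System.out.println(cfg.get("HADOOP_PROPS_PATH"));
    }
}
```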
8
Schema Configuration – FileTap
atgt_org|decimal|FALSE|1|NA
atgt_acct|string|FALSE|1|NA
atgt_rec_nbr|decimal|FALSE|1|NA
atgt_logo|decimal|FALSE|1|NA
atgt_type|string|FALSE|1|NA
atgt_mt_eff_date|decimal|FALSE|1|NA
atgt_org|
atgt_acct|
atgt_rec_nbr|
atgt_logo|
atgt_type|
atgt_mt_eff_date|
Schema file
Tap pmsmTap = new Hfs(
getTextDelimitedFromConfig("ATPT_SCHEME_PATH", null, false, " "),
getFromConfigure("ATPT_DATA_PATH")
);
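Each schema line carries a column name followed by pipe-delimited metadata (the second field is the column type; the retain-fields variant lists names only). A hypothetical parser for such lines, to make the format concrete (this is not the framework's code, and only the first two fields are interpreted):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for pipe-delimited schema lines such as
//   atgt_org|decimal|FALSE|1|NA
// or the retain-fields form
//   atgt_org|
public class SchemaLine {
    public final String name;
    public final String type;

    public SchemaLine(String name, String type) {
        this.name = name;
        this.type = type;
    }

    public static SchemaLine parse(String line) {
        String[] parts = line.split("\\|", -1);  // keep trailing empties
        String type = parts.length > 1 ? parts[1] : "";
        return new SchemaLine(parts[0], type);
    }

    // Collect just the column names -- this is what a Cascading Fields
    // object would be built from.
    public static List<String> names(String[] lines) {
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            out.add(parse(line).name);
        }
        return out;
    }
}
```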
9
Schema Configuration – HiveTap
Schema file
DATA_BASE=dhdp_coaf
APP_COLUMN_NAMES=app_id, created_dt,…
APP_COLUMN_TYPES=Bigint, String, …
TABLE=MyTable
PARTITION_KEYS=odate
SER_LIB=org.apache.hadoop… (optional, by default it is ParquetHiveSerDe)
APP_PATH=hdfs://….
HiveTap hiveTap = getHiveTapFromConfig("SECOND_HIVE_TAP", sinkMode, booleanValue);
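The HiveTap schema file is also a properties file; the column names and types are comma-separated lists. A sketch of reading it with the standard Properties API (the `HiveTapConfig` helper is hypothetical, shown only to illustrate the file format):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Illustrative reader for a HiveTap properties file. Splits the
// comma-separated APP_COLUMN_NAMES / APP_COLUMN_TYPES entries into arrays.
public class HiveTapConfig {
    private final Properties props = new Properties();

    public void load(String text) throws IOException {
        props.load(new StringReader(text));
    }

    public String[] columnNames() {
        return splitList(props.getProperty("APP_COLUMN_NAMES", ""));
    }

    public String[] columnTypes() {
        return splitList(props.getProperty("APP_COLUMN_TYPES", ""));
    }

    private static String[] splitList(String value) {
        String[] parts = value.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();  // tolerate "app_id, created_dt"
        }
        return parts;
    }
}
```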
10
Data Processing Rules
• Processing rules are documented as properties
• Out-of-the-box macros define the transformation logic
• The framework translates the processing rules into Cascading API calls on the fly
ARRMT_ID_CHAIN obj(atpt_chain)
TRXN_SEQ_NUM atpt_mt_hi_tran_trk_id
POST_DT str(atpt_mt_posting_date)
TRXN_CD int(atpt_mt_txn_code)
AGT_ID substr(atpt_mt_hi_rep_id, 2, 4)
result.set(outputFields.getPos("ARRMT_ID_CHAIN"), argument.getObject(new Fields("atpt_chain")));
result.set(outputFields.getPos("TRXN_SEQ_NUM"), argument.getObject(new Fields("atpt_mt_hi_tran_trk_id")));
result.set(outputFields.getPos("POST_DT"), argument.getString(new Fields("atpt_mt_posting_date")));
result.set(outputFields.getPos("TRXN_CD"), argument.getInteger(new Fields("atpt_mt_txn_code")));
result.set(outputFields.getPos("AGT_ID"), argument.getString(new Fields("atpt_mt_hi_rep_id")).substring(2, 4));
Processing rules
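Each rule line pairs a target field with either a bare source field (the default macro) or a macro call. A sketch of how such a line might be tokenized before the framework maps it to an API call (the parsing details here are assumptions for illustration, not the framework's implementation):

```java
// Illustrative tokenizer for processing-rule lines such as
//   AGT_ID substr(atpt_mt_hi_rep_id, 2, 4)
//   TRXN_SEQ_NUM atpt_mt_hi_tran_trk_id
public class RuleLine {
    public final String target;
    public final String macro;   // "default" when no macro call is present
    public final String[] args;

    private RuleLine(String target, String macro, String[] args) {
        this.target = target;
        this.macro = macro;
        this.args = args;
    }

    public static RuleLine parse(String line) {
        // Split "TARGET rest" on the first run of whitespace.
        String[] halves = line.trim().split("\\s+", 2);
        String target = halves[0];
        String rhs = halves[1];
        int open = rhs.indexOf('(');
        if (open < 0) {
            // Bare field name: the "default" macro copies the source as-is.
            return new RuleLine(target, "default", new String[]{rhs});
        }
        String macro = rhs.substring(0, open);
        String inner = rhs.substring(open + 1, rhs.lastIndexOf(')'));
        String[] args = inner.split("\\s*,\\s*");
        return new RuleLine(target, macro, args);
    }
}
```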
11
Data Processing Rules -- Macros
Macro: obj
Syntax: TARGET obj(SOURCE)
Generates:
  result.set(outputFields.getPos("TARGET"), argument.getObject(new Fields("SOURCE")));

Macro: default (bare source field, no macro call)
Syntax: TARGET SOURCE
Generates:
  result.set(outputFields.getPos("TARGET"), argument.getObject(new Fields("SOURCE")));

Macro: as-is
Syntax: TARGET asis(default)
Generates:
  result.set(outputFields.getPos("TARGET"), default);

Macro: string
Syntax: TARGET str(SOURCE)
Generates:
  result.set(outputFields.getPos("TARGET"), argument.getString(new Fields("SOURCE")));

Macro: int
Syntax: TARGET int(SOURCE)
Generates:
  result.set(outputFields.getPos("TARGET"), argument.getInteger(new Fields("SOURCE")));

Macro: sub-string
Syntax: TARGET substr(SOURCE, 2, 4)
Generates:
  result.set(outputFields.getPos("TARGET"), argument.getString(new Fields("SOURCE")).substring(2, 4));

Macro: replace
Syntax: TARGET replace(SOURCE, A, B, C, D, default)
Generates:
  String rawValue = argument.getString(new Fields("SOURCE"));
  if (A.equals(rawValue))
      result.set(outputFields.getPos("TARGET"), B);
  else if (C.equals(rawValue))
      result.set(outputFields.getPos("TARGET"), D);
  else
      result.set(outputFields.getPos("TARGET"), "default");

Macro: replace null
Syntax: TARGET repnull(SOURCE, default)
Generates:
  String rawValue = argument.getString(new Fields("SOURCE"));
  if (rawValue == null)
      result.set(outputFields.getPos("TARGET"), "default");
  else
      result.set(outputFields.getPos("TARGET"), rawValue);

Macro: replace null with whitespace
Syntax: TARGET repnullws(SOURCE)
Generates:
  String rawValue = argument.getString(new Fields("SOURCE"));
  if (rawValue == null)
      result.set(outputFields.getPos("TARGET"), " ");
  else
      result.set(outputFields.getPos("TARGET"), rawValue);

Macro: not null
Syntax: TARGET notnull(SOURCE)
Generates:
  String rawValue = argument.getString(new Fields("SOURCE"));
  if (rawValue == null)
      throw new RuntimeException();
  else
      result.set(outputFields.getPos("TARGET"), rawValue);

Macro: convert date
Syntax: TARGET dateconv(SOURCE, yyyymmdd, dd-mm-yyyy)
Generates:
  String rawValue = argument.getString(new Fields("SOURCE"));
  // targetValue = rawValue reformatted from yyyymmdd to dd-mm-yyyy
  result.set(outputFields.getPos("TARGET"), targetValue);

Macro: move decimal
Syntax: TARGET movedeci(SOURCE, -2)
Generates:
  double rawValue = argument.getDouble(new Fields("SOURCE"));
  result.set(outputFields.getPos("TARGET"), rawValue * Math.pow(10, -2));
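The last two macros amount to a few lines of plain Java. A sketch under two assumptions that go beyond the slide: that dateconv patterns behave like SimpleDateFormat patterns (case normalized to Java's conventions, where MM is month), and that a negative movedeci offset shifts the decimal point left:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

// Illustrative implementations of the dateconv and movedeci macros.
public class MacroHelpers {

    // dateconv(SOURCE, fromPattern, toPattern): reformat a date string.
    public static String dateconv(String raw, String fromPattern, String toPattern)
            throws ParseException {
        SimpleDateFormat in = new SimpleDateFormat(fromPattern);
        SimpleDateFormat out = new SimpleDateFormat(toPattern);
        return out.format(in.parse(raw));
    }

    // movedeci(SOURCE, shift): move the decimal point |shift| places;
    // a negative shift moves it left (divides by a power of ten),
    // e.g. turning integer cents into dollars.
    public static double movedeci(double raw, int shift) {
        return raw * Math.pow(10, shift);
    }
}
```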
12
Exception Handling
“Whenever an operation fails and throws an exception, if there is an
associated trap, the offending Tuple is saved to the resource specified by the
trap Tap.”
-- Cascading documentation
FlowDef flowDef = FlowDef.flowDef()
    .addSource(ipAmcpPipe, ipAmcpInTap)
    .addSource(ipAtptPipe, ipAtptInTap)
    .addTailSink(transformPipe, outTap)
    .addTrap(ipAtptPipe, badRecordsTap);
13
How to Adopt the Framework
• Create a root configuration file
• Create a schema file for each input and output (or reuse DQ schema files)
• Define processing rules
• Add all of the files to HDFS
• Subclass PDSBaseFunction for each processing step
@Override
protected void operate(FlowProcess flowProcess, FunctionCall<Tuple> functionCall) {
    this.populateTupleSet(functionCall);
    TupleEntry argument = functionCall.getArguments();
    Tuple result = functionCall.getContext();
    Fields outputFields = functionCall.getDeclaredFields();
    result.set(outputFields.getPos("CHK_NUM"), check_number_calculation(argument));
    functionCall.getOutputCollector().add(result);
}

@Override
protected String getConfigPath() {
    return "/path/to/rulesfile";
}
14
How to Adopt the Framework
• Subclass the PDSBaseDriver class and implement the “transform” method
• Create a “main” class
• Run tests
public class TestHarness {
    public static void main(String[] args) {
        new MyDriverImp().process("/path/to/rootconfig");
    }
}

@Override
protected FlowDef transform() {
    Fields pmamfields = getFieldsFromConfigEntry("PMAM_SCHEME_PATH");
    String apparrFilePath = this.getFromConfigure("OUTPUT_DATA_PATH");
    Tap pmsmTap = new Hfs(
        this.getTextDelimitedFromConfig("PMSM_SCHEME_PATH", null, false, fieldDelimiter),
        apparrFilePath);
    FlowDef flowDef = FlowDef.flowDef()
        .addSource(ipAmcpPipe, ipAmcpInTap)
        .addTailSink(transformPipe, outTap)
        .addTrap(ipAtptPipe, badRecordsTap);
    return flowDef;
}
(PMAM_SCHEME_PATH, OUTPUT_DATA_PATH, and PMSM_SCHEME_PATH are keys defined in the root configuration file.)
15
Conclusion
• Benefits
– Reduce the total effort of developing and testing Cascading applications
• Provide a re-usable layer to reduce the amount of “plumbing” code
• Make Cascading modules configurable
– Improve the code quality
• Modularize Cascading applications and support best practices in Java coding
• Support additional features (such as logging and exception handling)
– Build an open architecture for future extension and integration
• Technical specification
– Compatible with JDK 1.5 and above; Jar file was compiled with JDK 1.7
– Tested with Cascading 2.5
16
For questions, please reach out to Ming.Yuan@capitalone.com
17
Appendix: PDSBaseDriver Class
process(String path) (do not override)
  Takes the path to the root configuration file, initializes all required configurations, invokes "transform()" in its subclass, and executes the Cascading flows.

init(String path) (do not override)
  Takes the path to the root configuration file, parses the file, and stores the configuration entries accordingly.

getFromConfig(String key) (do not override)
  Takes a String-typed key and returns the string-typed value if the key has been used in the root configuration file; returns null otherwise.

getFieldsFromConfigEntry(String key) (do not override)
  Takes a String-typed key. If, in the root configuration file, the key has been assigned a path to a schema file, the method returns a Fields object based on all column names in the schema file. This Fields object will be automatically cached.

getFieldsFromConfigEntry(String key, String[] appendences) (do not override)
  Takes a String-typed key in the root configuration file. If the key has been assigned a path to a schema file, it returns a Fields object with all column names in the schema file plus all names in the input string array. This Fields object will NOT be cached.

getTextDelimitedFromConfig(String key, String[] appendences, boolean hasHeader, String delimiter) (do not override)
  Creates and returns a TextDelimited object from a configuration key in the root configuration file. The second parameter can be used to append column names programmatically; the third and fourth parameters describe the input/output files.

transform() (override)
  The subclass should build a FlowDef object describing the application's processing flow in this method.
18
Appendix: PDSBaseFunction Class
prepare(FlowProcess f, OperationCall<Tuple> call) (do not override)
  Overrides the same method in the Cascading BaseOperation class.

cleanup(FlowProcess f, OperationCall<Tuple> call) (do not override)
  Overrides the same method in the Cascading BaseOperation class.

init(String key, String filePath) (do not override)
  Parses a mapping-rules file and initializes the PDSBaseFunction object.

populateTupleSet(FunctionCall<Tuple> call) (do not override)
  Populates values in the output tuple based on the input values and the pre-defined processing rules.

getConfigPath() (override)
  Returns the path to the processing-rules file in HDFS.

operate(FlowProcess flowProcess, FunctionCall<Tuple> functionCall) (override)
  Should invoke the "populateTupleSet()" method to execute the pre-defined transformation rules, and should invoke any additional custom transformation methods for complex logic.
19
Appendix: Class Diagram
* Yellow color indicates components from Cascading package

More Related Content

What's hot

Programming Hive Reading #4
Programming Hive Reading #4Programming Hive Reading #4
Programming Hive Reading #4
moai kids
 
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
DataWorks Summit
 
Easy, scalable, fault tolerant stream processing with structured streaming - ...
Easy, scalable, fault tolerant stream processing with structured streaming - ...Easy, scalable, fault tolerant stream processing with structured streaming - ...
Easy, scalable, fault tolerant stream processing with structured streaming - ...
Databricks
 

What's hot (20)

Bringing the Semantic Web closer to reality: PostgreSQL as RDF Graph Database
Bringing the Semantic Web closer to reality: PostgreSQL as RDF Graph DatabaseBringing the Semantic Web closer to reality: PostgreSQL as RDF Graph Database
Bringing the Semantic Web closer to reality: PostgreSQL as RDF Graph Database
 
10 things i wish i'd known before using spark in production
10 things i wish i'd known before using spark in production10 things i wish i'd known before using spark in production
10 things i wish i'd known before using spark in production
 
Tajo Seoul Meetup-201501
Tajo Seoul Meetup-201501Tajo Seoul Meetup-201501
Tajo Seoul Meetup-201501
 
Scaling Spark Workloads on YARN - Boulder/Denver July 2015
Scaling Spark Workloads on YARN - Boulder/Denver July 2015Scaling Spark Workloads on YARN - Boulder/Denver July 2015
Scaling Spark Workloads on YARN - Boulder/Denver July 2015
 
Programming Hive Reading #4
Programming Hive Reading #4Programming Hive Reading #4
Programming Hive Reading #4
 
Introduction to Apache Hive
Introduction to Apache HiveIntroduction to Apache Hive
Introduction to Apache Hive
 
Spark Cassandra Connector: Past, Present, and Future
Spark Cassandra Connector: Past, Present, and FutureSpark Cassandra Connector: Past, Present, and Future
Spark Cassandra Connector: Past, Present, and Future
 
Debugging & Tuning in Spark
Debugging & Tuning in SparkDebugging & Tuning in Spark
Debugging & Tuning in Spark
 
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
Modus operandi of Spark Streaming - Recipes for Running your Streaming Applic...
 
Why your Spark Job is Failing
Why your Spark Job is FailingWhy your Spark Job is Failing
Why your Spark Job is Failing
 
Using Spark to Load Oracle Data into Cassandra
Using Spark to Load Oracle Data into CassandraUsing Spark to Load Oracle Data into Cassandra
Using Spark to Load Oracle Data into Cassandra
 
Cassandra Community Webinar: Apache Cassandra Internals
Cassandra Community Webinar: Apache Cassandra InternalsCassandra Community Webinar: Apache Cassandra Internals
Cassandra Community Webinar: Apache Cassandra Internals
 
Spark Summit East 2017: Apache spark and object stores
Spark Summit East 2017: Apache spark and object storesSpark Summit East 2017: Apache spark and object stores
Spark Summit East 2017: Apache spark and object stores
 
Advanced Spark Programming - Part 1 | Big Data Hadoop Spark Tutorial | CloudxLab
Advanced Spark Programming - Part 1 | Big Data Hadoop Spark Tutorial | CloudxLabAdvanced Spark Programming - Part 1 | Big Data Hadoop Spark Tutorial | CloudxLab
Advanced Spark Programming - Part 1 | Big Data Hadoop Spark Tutorial | CloudxLab
 
Apache Spark Architecture
Apache Spark ArchitectureApache Spark Architecture
Apache Spark Architecture
 
Apache Spark overview
Apache Spark overviewApache Spark overview
Apache Spark overview
 
Intro to Apache Spark
Intro to Apache SparkIntro to Apache Spark
Intro to Apache Spark
 
Easy, scalable, fault tolerant stream processing with structured streaming - ...
Easy, scalable, fault tolerant stream processing with structured streaming - ...Easy, scalable, fault tolerant stream processing with structured streaming - ...
Easy, scalable, fault tolerant stream processing with structured streaming - ...
 
Apache Spark RDDs
Apache Spark RDDsApache Spark RDDs
Apache Spark RDDs
 
Scalding - the not-so-basics @ ScalaDays 2014
Scalding - the not-so-basics @ ScalaDays 2014Scalding - the not-so-basics @ ScalaDays 2014
Scalding - the not-so-basics @ ScalaDays 2014
 

Similar to Simplifying Apache Cascading

Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Databricks
 
Parallel programming patterns - Олександр Павлишак
Parallel programming patterns - Олександр ПавлишакParallel programming patterns - Олександр Павлишак
Parallel programming patterns - Олександр Павлишак
Igor Bronovskyy
 
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
Databricks
 
Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
Databricks
 

Similar to Simplifying Apache Cascading (20)

From Zero to Stream Processing
From Zero to Stream ProcessingFrom Zero to Stream Processing
From Zero to Stream Processing
 
Spark streaming
Spark streamingSpark streaming
Spark streaming
 
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...Writing Continuous Applications with Structured Streaming Python APIs in Apac...
Writing Continuous Applications with Structured Streaming Python APIs in Apac...
 
Meetup spark structured streaming
Meetup spark structured streamingMeetup spark structured streaming
Meetup spark structured streaming
 
Parallel programming patterns (UA)
Parallel programming patterns (UA)Parallel programming patterns (UA)
Parallel programming patterns (UA)
 
Parallel programming patterns - Олександр Павлишак
Parallel programming patterns - Олександр ПавлишакParallel programming patterns - Олександр Павлишак
Parallel programming patterns - Олександр Павлишак
 
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
Accelerating Real Time Analytics with Spark Streaming and FPGAaaS with Prabha...
 
The Cascading (big) data application framework
The Cascading (big) data application frameworkThe Cascading (big) data application framework
The Cascading (big) data application framework
 
The Cascading (big) data application framework - André Keple, Sr. Engineer, C...
The Cascading (big) data application framework - André Keple, Sr. Engineer, C...The Cascading (big) data application framework - André Keple, Sr. Engineer, C...
The Cascading (big) data application framework - André Keple, Sr. Engineer, C...
 
Introduction to Apache Spark
Introduction to Apache SparkIntroduction to Apache Spark
Introduction to Apache Spark
 
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo...
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo...Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo...
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo...
 
20170126 big data processing
20170126 big data processing20170126 big data processing
20170126 big data processing
 
Data Analytics Service Company and Its Ruby Usage
Data Analytics Service Company and Its Ruby UsageData Analytics Service Company and Its Ruby Usage
Data Analytics Service Company and Its Ruby Usage
 
Osd ctw spark
Osd ctw sparkOsd ctw spark
Osd ctw spark
 
Clug 2011 March web server optimisation
Clug 2011 March  web server optimisationClug 2011 March  web server optimisation
Clug 2011 March web server optimisation
 
Recipes for Running Spark Streaming Applications in Production-(Tathagata Das...
Recipes for Running Spark Streaming Applications in Production-(Tathagata Das...Recipes for Running Spark Streaming Applications in Production-(Tathagata Das...
Recipes for Running Spark Streaming Applications in Production-(Tathagata Das...
 
Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
 
Writing Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySparkWriting Continuous Applications with Structured Streaming in PySpark
Writing Continuous Applications with Structured Streaming in PySpark
 
Pegasus - automate, recover, and debug scientific computations
Pegasus - automate, recover, and debug scientific computationsPegasus - automate, recover, and debug scientific computations
Pegasus - automate, recover, and debug scientific computations
 
Spark (Structured) Streaming vs. Kafka Streams
Spark (Structured) Streaming vs. Kafka StreamsSpark (Structured) Streaming vs. Kafka Streams
Spark (Structured) Streaming vs. Kafka Streams
 

More from Ming Yuan (7)

Cloud and Analytics -- 2020 sparksummit
Cloud and Analytics -- 2020 sparksummitCloud and Analytics -- 2020 sparksummit
Cloud and Analytics -- 2020 sparksummit
 
Forrester2019
Forrester2019Forrester2019
Forrester2019
 
R & Python on Hadoop
R & Python on HadoopR & Python on Hadoop
R & Python on Hadoop
 
SSO with sfdc
SSO with sfdcSSO with sfdc
SSO with sfdc
 
Singleton
SingletonSingleton
Singleton
 
Rest and beyond
Rest and beyondRest and beyond
Rest and beyond
 
Building calloutswithoutwsdl2apex
Building calloutswithoutwsdl2apexBuilding calloutswithoutwsdl2apex
Building calloutswithoutwsdl2apex
 

Recently uploaded

Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
Lars Albertsson
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...
shambhavirathore45
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
MarinCaroMartnezBerg
 
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
amitlee9823
 

Recently uploaded (20)

Mature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptxMature dropshipping via API with DroFx.pptx
Mature dropshipping via API with DroFx.pptx
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptx
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
BPAC WITH UFSBI GENERAL PRESENTATION 18_05_2017-1.pptx
BPAC WITH UFSBI GENERAL PRESENTATION 18_05_2017-1.pptxBPAC WITH UFSBI GENERAL PRESENTATION 18_05_2017-1.pptx
BPAC WITH UFSBI GENERAL PRESENTATION 18_05_2017-1.pptx
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptx
 
VidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptxVidaXL dropshipping via API with DroFx.pptx
VidaXL dropshipping via API with DroFx.pptx
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...
 
Invezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signalsInvezz.com - Grow your wealth with trading signals
Invezz.com - Grow your wealth with trading signals
 
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort ServiceBDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
 
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
 
Accredited-Transport-Cooperatives-Jan-2021-Web.pdf
Accredited-Transport-Cooperatives-Jan-2021-Web.pdfAccredited-Transport-Cooperatives-Jan-2021-Web.pdf
Accredited-Transport-Cooperatives-Jan-2021-Web.pdf
 
Smarteg dropshipping via API with DroFx.pptx
Smarteg dropshipping via API with DroFx.pptxSmarteg dropshipping via API with DroFx.pptx
Smarteg dropshipping via API with DroFx.pptx
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
 
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
 

Simplifying Apache Cascading

  • 1. Ming Yuan / Alyssa Romeo Capital One May 24th, 2016 Simplifying Apache Cascading
  • 2. 2 Apache Cascading • Open source framework implementing the “chain of responsibility” design pattern • Abstraction over MapReduce, Tez, or Flink processing engine when transforming big data sets on Hadoop • APIs for constructing and executing data-processing flows
  • 3. 3 PDS Framework on Cascading A light-weight layer on top of Apache Cascading to – Manage metadata for inputs and outputs in properties files – Define data processing rules in properties files – Support development in a parallel manner – Make testing easier and more flexible PDS Framework
  • 4. 4 Case Studies Source code Directly use Cascading After rewritten on the framework TranOptimizerTrxnDtl.java 473 134 TrxnDtlTransformation.java 278 81 PlanTypeCdeCalculation.java 152 144 MyMain.java 12 Total 903 371 Source code Directly use Cascading After rewritten on the framework PmsmJoin.java 210 87 JoinFunc.java 257 38 MyMain.java 12 Total 467 137 Cascading application 1 – 60% code reduction Cascading application 2 – 70% code reduction
  • 5. 5 Root configuration Data Processing Step Sources SinkData-Processing Rules Schema file Schema fileProcessing rules
  • 6. 6 Managing Multiple Steps on the Framework 1 2 3 Processing rules 4 5 Root configuration Schema files 6 Processing rules Transformation step Transformation step Application ControllerApplication Initiator 5
  • 7. 7 Root Configuration Root file entries configure application level components, including – Hadoop configurations – Global configuration entries for the application – Definitions for File Taps (location and schema) – Definitions for Hive Taps ATPT_SCHEME_PATH=/devl/rwa/prtnrshp/prtntshp_whirl/whirl_atpt_mntry_dq_schema.txt ATPT_RETAIN_FIELDS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/ATPT_retain_schema.txt ATPT_DATA_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/whirl_atpt_mntry_vldtd_hive_extract_us ATGT_SCHEME_PATH=/devl/rwa/prtnrshp/prtntshp_whirl/whirl_atgt_mntry_dq_schema.txt ATGT_RETAIN_FIELDS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/ATGT_retain_schema.txt ATGT_DATA_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/whirl_atgt_mntry_vldtd_hive_extract_us HADOOP_PROPS_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hadoop.properties FIRST_HIVE_TAP=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hiveone.properties SECOND_HIVE_TAP=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hivetwo.properties Root configuration
8
Schema Configuration – FileTap

Schema file:
atgt_org|decimal|FALSE|1|NA
atgt_acct|string|FALSE|1|NA
atgt_rec_nbr|decimal|FALSE|1|NA
atgt_logo|decimal|FALSE|1|NA
atgt_type|string|FALSE|1|NA
atgt_mt_eff_date|decimal|FALSE|1|NA

atgt_org| atgt_acct| atgt_rec_nbr| atgt_logo| atgt_type| atgt_mt_eff_date|

Root configuration entries used:
ATPT_SCHEME_PATH=/devl/rwa/prtnrshp/prtntshp_whirl/whirl_atpt_mntry_dq_schema.txt
ATPT_DATA_PATH=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/whirl_atpt_mntry_vldtd_hive_extract_us

Tap pmsmTap = new Hfs(
    getTextDelimitedFromConfig("ATPT_SCHEME_PATH", null, false, " "),
    getFromConfigure("ATPT_DATA_PATH"));
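The schema file above is pipe-delimited, one column per line, with the column name in the first position. A minimal sketch (hypothetical; the real framework presumably does something similar inside `getTextDelimitedFromConfig`) of extracting the ordered field names that would back a Cascading `Fields` object:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SchemaParseDemo {
    /** Extract the first (name) column from each pipe-delimited schema line. */
    static List<String> fieldNames(List<String> schemaLines) {
        List<String> names = new ArrayList<>();
        for (String line : schemaLines) {
            if (line.trim().isEmpty()) continue;
            // '|' is a regex metacharacter, so it must be escaped for split().
            names.add(line.split("\\|")[0].trim());
        }
        return names;
    }

    public static void main(String[] args) {
        List<String> schema = Arrays.asList(
            "atgt_org|decimal|FALSE|1|NA",
            "atgt_acct|string|FALSE|1|NA",
            "atgt_rec_nbr|decimal|FALSE|1|NA");
        // These names would feed new Fields(names.toArray(new String[0])).
        System.out.println(fieldNames(schema));
    }
}
```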
9
Schema Configuration – HiveTap

Schema file:
DATA_BASE=dhdp_coaf
APP_COLUMN_NAMES=app_id, created_dt, …
APP_COLUMN_TYPES=Bigint, String, …
TABLE=MyTable
PARTITION_KEYS=odate
SER_LIB=org.apache.hadoop… (optional; defaults to ParquetHiveSerDe)
APP_PATH=hdfs://….

Root configuration entry used:
SECOND_HIVE_TAP=/devl/rwa/prtnrshp/prtnrshp_whirl_trxn_optmzn/hivetwo.properties

HiveTap hiveTap = getHiveTapFromConfig("SECOND_HIVE_TAP", sinkMode, booleanValue);
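A HiveTap needs the table's column names and types as parallel arrays, which the schema file stores as comma-separated lists. The following is a hypothetical sketch of how `getHiveTapFromConfig` might parse those entries (the property names come from the slide; the parsing code is an assumption, working on an in-memory snippet):

```java
import java.io.StringReader;
import java.util.Arrays;
import java.util.Properties;

public class HiveTapConfigDemo {
    public static void main(String[] args) throws Exception {
        // In-memory stand-in for the HiveTap schema file referenced by SECOND_HIVE_TAP.
        String hiveProps =
            "DATA_BASE=dhdp_coaf\n" +
            "TABLE=MyTable\n" +
            "APP_COLUMN_NAMES=app_id, created_dt\n" +
            "APP_COLUMN_TYPES=Bigint, String\n" +
            "PARTITION_KEYS=odate\n";

        Properties p = new Properties();
        p.load(new StringReader(hiveProps));

        // Split the comma-separated lists into the parallel name/type arrays
        // a Hive table descriptor would expect.
        String[] names = p.getProperty("APP_COLUMN_NAMES").split("\\s*,\\s*");
        String[] types = p.getProperty("APP_COLUMN_TYPES").split("\\s*,\\s*");

        System.out.println(p.getProperty("DATA_BASE") + "." + p.getProperty("TABLE"));
        System.out.println(Arrays.toString(names));
        System.out.println(Arrays.toString(types));
    }
}
```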
10
Data Processing Rules
• Processing rules are documented as properties
• Out-of-the-box macros define the transformation logic
• The framework translates the processing rules into Cascading API calls on the fly

Processing rules:
ARRMT_ID_CHAIN obj(atpt_chain)
TRXN_SEQ_NUM atpt_mt_hi_tran_trk_id
POST_DT str(atpt_mt_posting_date)
TRXN_CD int(atpt_mt_txn_code)
AGT_ID substr(atpt_mt_hi_rep_id, 2, 4)

Equivalent Cascading calls:
result.set(outputFields.getPos("ARRMT_ID_CHAIN"), argument.getObject(new Fields("atpt_chain")));
result.set(outputFields.getPos("TRXN_SEQ_NUM"), argument.getObject(new Fields("atpt_mt_hi_tran_trk_id")));
result.set(outputFields.getPos("POST_DT"), argument.getString(new Fields("atpt_mt_posting_date")));
result.set(outputFields.getPos("TRXN_CD"), argument.getInteger(new Fields("atpt_mt_txn_code")));
result.set(outputFields.getPos("AGT_ID"), argument.getString(new Fields("atpt_mt_hi_rep_id")).substring(2, 4));
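Each rule line pairs a target field with either a bare source field (the "default" copy) or a macro call with arguments. A hypothetical sketch of the parsing step (the framework's actual parser is not shown in the slides; this is a plain-Java illustration of the rule grammar):

```java
import java.util.Arrays;
import java.util.List;

public class RuleParseDemo {
    /** Parse one rule line into target, macro name, and macro arguments. */
    static String parse(String ruleLine) {
        // Target and expression are separated by whitespace.
        String[] parts = ruleLine.trim().split("\\s+", 2);
        String target = parts[0];
        String expr = parts[1];
        int open = expr.indexOf('(');
        if (open < 0) {
            // No parentheses: the "default" macro, a plain source-field copy.
            return target + " <- default(" + expr + ")";
        }
        String macro = expr.substring(0, open);
        String argStr = expr.substring(open + 1, expr.lastIndexOf(')'));
        List<String> macroArgs = Arrays.asList(argStr.split("\\s*,\\s*"));
        return target + " <- " + macro + macroArgs;
    }

    public static void main(String[] args) {
        System.out.println(parse("TRXN_SEQ_NUM atpt_mt_hi_tran_trk_id"));
        System.out.println(parse("AGT_ID substr(atpt_mt_hi_rep_id, 2, 4)"));
    }
}
```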
11
Data Processing Rules – Macros

obj
  Syntax: TARGET obj(SOURCE)
  Functionality: result.set(outputFields.getPos("TARGET"), argument.getObject(new Fields("SOURCE")));

default
  Syntax: TARGET SOURCE
  Functionality: result.set(outputFields.getPos("TARGET"), argument.getObject(new Fields("SOURCE")));

as-is
  Syntax: TARGET asis(default)
  Functionality: result.set(outputFields.getPos("TARGET"), default);

string
  Syntax: TARGET str(SOURCE)
  Functionality: result.set(outputFields.getPos("TARGET"), argument.getString(new Fields("SOURCE")));

int
  Syntax: TARGET int(SOURCE)
  Functionality: result.set(outputFields.getPos("TARGET"), argument.getInteger(new Fields("SOURCE")));

sub-string
  Syntax: TARGET substr(SOURCE, 2, 4)
  Functionality: result.set(outputFields.getPos("TARGET"), argument.getString(new Fields("SOURCE")).substring(2, 4));

replace
  Syntax: TARGET replace(SOURCE, A, B, C, D, default)
  Functionality:
    String rawValue = argument.getString(new Fields("SOURCE"));
    if (A equals rawValue) then result.set(outputFields.getPos("TARGET"), B);
    else if (C equals rawValue) then result.set(outputFields.getPos("TARGET"), D);
    else result.set(outputFields.getPos("TARGET"), "default");

replace null
  Syntax: TARGET repnull(SOURCE, default)
  Functionality:
    String rawValue = argument.getString(new Fields("SOURCE"));
    if (rawValue is null) result.set(outputFields.getPos("TARGET"), "default");
    else result.set(outputFields.getPos("TARGET"), rawValue);

replace null with whitespace
  Syntax: TARGET repnullws(SOURCE)
  Functionality:
    String rawValue = argument.getString(new Fields("SOURCE"));
    if (rawValue is null) result.set(outputFields.getPos("TARGET"), " ");
    else result.set(outputFields.getPos("TARGET"), rawValue);

not null
  Syntax: TARGET notnull(SOURCE)
  Functionality:
    String rawValue = argument.getString(new Fields("SOURCE"));
    if (rawValue is null) throw RuntimeException;
    else result.set(outputFields.getPos("TARGET"), rawValue);

convert date
  Syntax: TARGET dateconv(SOURCE, yyyymmdd, dd-mm-yyyy)
  Functionality:
    String rawValue = argument.getString(new Fields("SOURCE"));
    targetValue = rawValue reformatted from yyyymmdd to dd-mm-yyyy;
    result.set(outputFields.getPos("TARGET"), targetValue);

move decimal
  Syntax: TARGET movedeci(SOURCE, -2)
  Functionality:
    double rawValue = argument.getDouble(new Fields("SOURCE"));
    result.set(outputFields.getPos("TARGET"), rawValue / (10 ^ -2));
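To make the table concrete, here is a plain-Java sketch of how a few of these macros could behave, applied to an in-memory record instead of a Cascading TupleEntry. The field names and values are invented for illustration, and the direction of the decimal shift in movedeci is an assumption based on the table's pseudocode:

```java
import java.util.HashMap;
import java.util.Map;

public class MacroEvalDemo {
    public static void main(String[] args) {
        // A stand-in for the Cascading TupleEntry argument (hypothetical fields).
        Map<String, String> row = new HashMap<>();
        row.put("atpt_mt_hi_rep_id", "XY1234");
        row.put("atpt_memo", null);
        row.put("atpt_amt", "12345");

        // substr(SOURCE, 2, 4): Java String.substring(2, 4) semantics.
        System.out.println(row.get("atpt_mt_hi_rep_id").substring(2, 4));

        // repnull(SOURCE, default): replace a null value with a default.
        String memo = row.get("atpt_memo");
        System.out.println(memo == null ? "N/A" : memo);

        // movedeci(SOURCE, -2): move the decimal point two places
        // (here, to the left; the exact sign convention is an assumption).
        double amt = Double.parseDouble(row.get("atpt_amt"));
        System.out.println(amt / Math.pow(10, 2));
    }
}
```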
12
Exception Handling
"Whenever an operation fails and throws an exception, if there is an associated trap, the offending Tuple is saved to the resource specified by the trap Tap." -- Cascading documentation

FlowDef flowDef = FlowDef.flowDef()
    .addSource(ipAmcpPipe, ipAmcpInTap)
    .addSource(ipAtptPipe, ipAtptInTap)
    .addTailSink(transformPipe, outTap)
    .addTrap(ipAtptPipe, badRecordsTap);
13
How to Adopt the Framework
• Create a root configuration file
• Create a schema file for each input and output (or reuse DQ schema files)
• Define processing rules
• Add all of the files to HDFS
• Subclass PDSBaseFunction for each processing step

@Override
protected void operate(FlowProcess flowProcess, FunctionCall<Tuple> functionCall) {
    this.populateTupleSet(functionCall);
    TupleEntry argument = functionCall.getArguments();
    Tuple result = functionCall.getContext();
    Fields outputFields = functionCall.getDeclaredFields();
    result.set(outputFields.getPos("CHK_NUM"), check_number_calculation(argument));
    functionCall.getOutputCollector().add(result);
}

@Override
protected String getConfigPath() {
    return "/path/to/rulesfile";
}
14
How to Adopt the Framework
• Subclass the PDSBaseDriver class and implement the "transform" method
• Create a "main" class
• Run tests

@Override
protected FlowDef transform() {
    // "PMAM_SCHEME_PATH", "OUTPUT_DATA_PATH", and "PMSM_SCHEME_PATH"
    // are key words in the root config file.
    Fields pmamfields = getFieldsFromConfigEntry("PMAM_SCHEME_PATH");
    String apparrFilePath = this.getFromConfigure("OUTPUT_DATA_PATH");
    Tap pmsmTap = new Hfs(
        this.getTextDelimitedFromConfig("PMSM_SCHEME_PATH", null, false, fieldDelimiter),
        apparrFilePath);
    FlowDef flowDef = FlowDef.flowDef()
        .addSource(ipAmcpPipe, ipAmcpInTap)
        .addTailSink(transformPipe, outTap)
        .addTrap(ipAtptPipe, badRecordsTap);
    return flowDef;
}

public class TestHarness {
    public static void main(String[] args) {
        new MyDriverImp().process("/path/to/rootconfig");
    }
}
15
Conclusion
• Benefits
  – Reduce the total effort of developing and testing Cascading applications
    • Provide a reusable layer that cuts down on "plumbing" code
    • Make Cascading modules configurable
  – Improve code quality
    • Modularize Cascading applications and support best practices in Java coding
    • Support additional features (such as logging and exception handling)
  – Build an open architecture for future extension and integration
• Technical specification
  – Compatible with JDK 1.5 and above; the jar file was compiled with JDK 1.7
  – Tested with Cascading 2.5
16
For questions, please reach out to Ming.Yuan@capitalone.com
17
Appendix: PDSBaseDriver Class

process(String path) [do not override]
  Takes the path to the root configuration file, initializes all required configurations, invokes "transform()" in its subclass, and executes the Cascading flows.

init(String path) [do not override]
  Takes the path to the root configuration file, parses the file, and stores the configuration entries accordingly.

getFromConfig(String key) [do not override]
  Takes a String-typed key and returns the String-typed value if the key is present in the root configuration file; returns null otherwise.

getFieldsFromConfigEntry(String key) [do not override]
  Takes a String-typed key. If, in the root configuration file, the key points to a schema file, the method returns a Fields object built from all column names in that schema file. This Fields object is automatically cached.

getFieldsFromConfigEntry(String key, String[] appendences) [do not override]
  Takes a String-typed key from the root configuration file. If the key points to a schema file, it returns a Fields object containing all column names in the schema file plus all names in the input string array. This Fields object is NOT cached.

getTextDelimitedFromConfig(String key, String[] appendences, boolean hasHeader, String delimiter) [do not override]
  Creates and returns a TextDelimited object from a configuration key in the root configuration file. The second parameter can be used to append column names programmatically; the third and fourth parameters describe the input/output files.

transform() [override]
  Subclasses should build a FlowDef object containing the application's processing flow in this method.
18
Appendix: PDSBaseFunction Class

prepare(FlowProcess f, OperationCall<Tuple> call) [do not override]
  Overrides the function of the same name in the Cascading BaseOperation class.

cleanup(FlowProcess f, OperationCall<Tuple> call) [do not override]
  Overrides the function of the same name in the Cascading BaseOperation class.

init(String key, String filePath) [do not override]
  Parses a mapping-rules file and initializes the PDSBaseFunction object.

populateTupleSet(FunctionCall<Tuple> call) [do not override]
  Populates the values in the output tuple based on the input values and the pre-defined processing rules.

getConfigPath() [override]
  Returns the path to the processing-rules file in HDFS.

operate(FlowProcess flowProcess, FunctionCall<Tuple> functionCall) [override]
  Should invoke the "populateTupleSet()" method to execute the pre-defined transformation rules, and should invoke any additional custom transformation methods for complex logic.
19
Appendix: Class Diagram
* Yellow indicates components from the Cascading package