Hadoop and Cassandra
July 2013
Giannis Neokleous
www.giann.is
@yiannis_n
Cassandra at Knewton
● Student Events (At different processing stages)
● Parameters for models
● Course graphs
● Deployed in many environments ~ 14 clusters
Getting data in and out of
Cassandra in bulk efficiently
Why?
● Lots of data sitting in shiny new clusters
○ Want to run Analytics
● You suddenly realize your schema is not so great
● The data you're storing could be more efficient
● Think you've discovered an awesome new metric
Stuck!
How do you get data out
efficiently and fast?
No slow-downs?
Solutions
● Cassandra comes packaged with the sstable2json tool.
● Using the Thrift API for bulk mutations and gets.
○ Can distribute reads or writes to multiple machines.
● ColumnFamily[Input|Output]Format - Using Hadoop
○ Needs a live cluster
○ Still uses the thrift API
+
Why is MapReduce a good fit for C*?
● SSTables are sorted
○ MapReduce likes sorted stuff
● SSTables are immutable
○ Easy to identify what has been processed
● Data is essentially key/value pairs
● MapReduce can partition stuff
○ Just like you partition data in your Cassandra cluster
● MapReduce is Cool, so is Cassandra
Does it work?
Yes! But where?
● Been using bulk reading in production for a few months now
- Works great!
● Been using bulk writing into Cassandra for almost two years
- Works great too!
How?!!1
Reading in bulk
A little about SSTables
● Sorted
○ Both row keys and columns
● Key Value pairs
○ Rows:
■ Row value: Key
■ Columns: Value
○ Columns:
■ Column name: Key
■ Column value: Value
● Immutable
● Consist of 4 parts, e.g. ColumnFamily-hd-3549-Data.db:
○ ColumnFamily - the column family name
○ hd - the version
○ 3549 - the table number
○ Data - the type. One table consists of 4 or 5 component files, one of: Data, Index, Filter, Statistics, [CompressionInfo]
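The naming convention above can be unpacked mechanically. A minimal sketch in plain JDK (the class and field names here are our own, not Cassandra's):

```java
// Splits an SSTable component file name like "ColumnFamily-hd-3549-Data.db"
// into its four parts. Hypothetical helper for illustration only.
public class SSTableName {
    public final String columnFamily;
    public final String version;
    public final int number;
    public final String type;

    public SSTableName(String columnFamily, String version, int number, String type) {
        this.columnFamily = columnFamily;
        this.version = version;
        this.number = number;
        this.type = type;
    }

    public static SSTableName parse(String fileName) {
        // Strip the ".db" extension, then split the remaining name on '-'
        String base = fileName.substring(0, fileName.length() - ".db".length());
        String[] parts = base.split("-");
        if (parts.length != 4) {
            throw new IllegalArgumentException("Unexpected SSTable file name: " + fileName);
        }
        return new SSTableName(parts[0], parts[1], Integer.parseInt(parts[2]), parts[3]);
    }
}
```

This assumes the column family name itself contains no dashes; a production parser would have to be more careful.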
A little about MapReduce
● InputFormat
○ Figure out where the data is, what to read and how to read them
○ Divides the data among record readers
● RecordReader
○ Instantiated by InputFormats
○ Do the actual reading
● Mapper
○ Key/Value pairs get passed in by the record readers
● Reducer
○ Key/Value pairs get passed in from the mappers
○ All the same keys end up in the same reducer
A little about MapReduce
● OutputFormat
○ Figure out where and how to write the data
○ Divides the data among record writers
○ What to do after the data has been written
● RecordWriter
○ Instantiated by OutputFormats
○ Do the actual writing
SSTableInputFormat
● An input format specifically for SSTables.
○ Extends from FileInputFormat
● Includes a DataPathFilter for filtering through files for *-Data.db files
● Expands all subdirectories of input - Filters for ColumnFamily
● Configures the Comparator, Subcomparator and Partitioner classes used in the ColumnFamily.
● Two types, both extending SSTableInputFormat (which extends FileInputFormat):
○ SSTableColumnInputFormat
○ SSTableRowInputFormat
SSTableRecordReader
● A record reader specifically for SSTables.
● On init:
○ Copies the table locally. (Decompresses it, if using Priam)
○ Opens the table for reading. (Only needs the Data, Index and CompressionInfo tables)
○ Creates a TableScanner from the reader
● Two types, both extending SSTableRecordReader (which extends RecordReader):
○ SSTableColumnRecordReader
○ SSTableRowRecordReader
Data Flow
[Diagram: the SSTableInputFormat pulls SSTables from the C* cluster into the Hadoop cluster, hands them to SSTableRecordReaders that feed the Map phase; Reduce output is pushed through the OutputFormat's RecordWriters.]
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf);
    ClassLoader loader = SSTableMRExample.class.getClassLoader();
    conf.addResource(loader.getResource("knewton-site.xml"));
    SSTableInputFormat.setPartitionerClass(RandomPartitioner.class.getName(), job);
    SSTableInputFormat.setComparatorClass(LongType.class.getName(), job);
    SSTableInputFormat.setColumnFamilyName("StudentEvents", job);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(StudentEventWritable.class);
    job.setMapperClass(StudentEventMapper.class);
    job.setReducerClass(StudentEventReducer.class);
    job.setInputFormatClass(SSTableColumnInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    SSTableInputFormat.addInputPaths(job, args[0]);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
}
public class StudentEventMapper extends SSTableColumnMapper<Long, StudentEvent, LongWritable, StudentEventWritable> {
    @Override
    public void performMapTask(Long key, StudentEvent value, Context context) {
        // do stuff here
    }
    // Some other omitted trivial methods
}
Notes on the code above:
● Load additional properties from a conf file.
● Define mappers/reducers - the only thing you have to write.
● Read each column as a separate record; the mapper sees row key/column pairs.
● Remember to skip deleted columns (tombstones).
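The tombstone note above amounts to a filter step before mapping. A minimal pure-Java sketch of the idea (the Column type here is a stand-in of our own, not Cassandra's):

```java
import java.util.ArrayList;
import java.util.List;

// Drop deleted columns (tombstones) before handing records to the mapper.
public class TombstoneFilter {
    public static class Column {
        public final String name;
        public final byte[] value; // a null value marks a tombstone in this sketch
        public Column(String name, byte[] value) { this.name = name; this.value = value; }
        public boolean isDeleted() { return value == null; }
    }

    // Keep only live columns
    public static List<Column> live(List<Column> columns) {
        List<Column> out = new ArrayList<>();
        for (Column c : columns) {
            if (!c.isDeleted()) out.add(c);
        }
        return out;
    }
}
```

In a real job the equivalent check would use Cassandra's own deletion metadata rather than a null value.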
Replication factor
● Data replication is a thing
● Have to deal with it:
○ In the reducer
○ Only read (num tokens) / (replication factor) - if you're feeling
brave
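Handling replication in the reducer boils down to deduplicating the RF copies of each key. One simple strategy, sketched here with a stand-in value type of our own, is to keep the copy with the latest write timestamp:

```java
import java.util.List;

// Replicated reads mean the same key can arrive up to RF times in the
// reducer. Keep the copy with the newest timestamp.
public class ReplicaDedup {
    public static class ValueWithTs {
        public final String value;
        public final long timestamp;
        public ValueWithTs(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    // Pick the newest of the replicated copies for a single key
    public static ValueWithTs latest(List<ValueWithTs> copies) {
        ValueWithTs best = null;
        for (ValueWithTs v : copies) {
            if (best == null || v.timestamp > best.timestamp) best = v;
        }
        return best;
    }
}
```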
Priam
● Incremental backups
○ No need to read everything all the time
● Priam usually snappy compresses tables
● Works well if you want to use EMR
○ Backups already on S3
Writing in bulk
Writing in bulk
● Define custom output format
● Define custom record writer
○ Uses the SSTableSimpleWriter
■ Expects keys in sorted order (Tricky with MapReduce - More
about this later)
● Nothing special on the Mapper or Reducer part
What happens in the OutputFormat?
● Not much...
○ Instantiates and does basic configuration for RecordWriter
public abstract RecordWriter<K, V> getRecordWriter(TaskAttemptContext context)
throws IOException, InterruptedException;
What happens in the RecordWriter?
● Writes Columns, ExpiringColumns (ttl), CounterColumns,
SuperColumns
○ With the right abstractions you can reuse almost all of the code in
multiple Jobs
● On close SSTables written by the record writer get sent** to Cassandra
public class RecordWriter<K, V> {
public abstract void write(K key, V value)
throws IOException, InterruptedException;
public abstract void close(TaskAttemptContext context)
throws IOException, InterruptedException;
.....
}
public SSTableSimpleWriter(File directory,
CFMetaData metadata, IPartitioner partitioner)
protected void writeRow(DecoratedKey key,
ColumnFamily columnFamily)
private void addColumn(Column column)
How exactly do you send the
SSTables to Cassandra?
How do SSTables get sent? - Part I
● sstableloader introduced in 0.8 using the BulkLoader class
○ Starts a gossiper, occupying ports
○ Needs coordination - Locking
● Not convenient to incorporate the BulkLoader class in the code
● Gossiper connects the sender to the cluster
○ Not part of the ring
○ Bug in 0.8 persisted the node in the cluster
How do SSTables get sent? - Part II
● Smart partitioning in Hadoop, then scp
○ No Gossiper
○ No coordination
○ Each reducer is responsible for handling keys specific to 1 node in
the ring.
● Needs ring information beforehand
○ Can be configured
■ Offline in conf
■ Right before the job starts
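Given the ring information, routing a key to its owning node is a token-range lookup. A pure-JDK sketch of the RandomPartitioner-style version (class and method names are our own, and real ownership rules also involve replica placement):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Given the ring's node tokens sorted ascending, route each key to the
// node owning its token range: the first node token >= the key's token,
// wrapping around to the first node past the last token.
public class RingRouter {
    // RandomPartitioner-style token: the MD5 hash of the key as a
    // non-negative integer.
    public static BigInteger token(byte[] key) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            return new BigInteger(md5.digest(key)).abs();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static int ownerIndex(BigInteger keyToken, List<BigInteger> sortedNodeTokens) {
        for (int i = 0; i < sortedNodeTokens.size(); i++) {
            if (keyToken.compareTo(sortedNodeTokens.get(i)) <= 0) return i;
        }
        return 0; // wrap around the ring
    }
}
```

Each reducer can then be made responsible for exactly one owner index, which is what lets the job scp tables straight to the right node.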
Decorated Keys
● Keys are decorated before they're stored.
○ Faster to work with - Compares, sorts etc.
○ RandomPartitioner uses MD5 hash.
● Depends on your partitioner.
○ Random Partitioner
○ OrderPreservingPartitioner
○ etc? custom?
● When reading, the SSTableScanner de-decorates keys.
● Tricky part is when writing tables.
Decorated Keys
● Columns and keys are sorted - After they're decorated.
● Don't partition your keys in MapReduce before you decorate them.
○ Unless you're using the unsorted table writer.
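Decoration under RandomPartitioner is just the MD5 hash of the raw key, which is why decorated order has nothing to do with raw order. A pure-JDK sketch (helper name is our own):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hex MD5 of a raw key - the decorated form under RandomPartitioner.
// Sorting by this string can invert the raw key order, so partition
// AFTER decorating, not before.
public class KeyDecorator {
    public static String decorate(String rawKey) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(rawKey.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```

For example, raw keys "b" < "c", but their MD5 hashes (92eb5ffe... and 4a8a08f0...) sort the other way around.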
[Diagram: reducers R1-R3 feed writers W1-W3. Raw keys a-i are MD5-decorated (e.g. a → 0cc175b9c0f1b6a831c399e269772661, b → 92eb5ffee6ae2fec3ad71c777531578f, c → 4a8a08f09d37b73795649038408b5f33) before being partitioned, so partition boundaries follow the decorated order, not the raw key order.]
KassandraMRHelper
● Open sourcing today!
○ github.com/knewton/KassandraMRHelper
● Has all you need to get started on bulk reading SSTables with Hadoop.
● Includes an example job that reads "student events"
● Handles compressed tables
● Use Priam? Even better - it can snappy-decompress Priam backups.
● Don't have a cluster up or a table handy?
○ Use com.knewton.mapreduce.cassandra.WriteSampleSSTable in the
test source directory to generate one.
usage: WriteSampleSSTable [OPTIONS] <output_dir>
 -e,--studentEvents <arg>   The number of student events per student to be generated. Default value is 10.
 -h,--help                  Prints this help message.
 -s,--students <arg>        The number of students (rows) to be generated. Default value is 100.
Thank you
Questions?
Giannis Neokleous
www.giann.is
@yiannis_n
