This document discusses processing tweets about Black Friday using serverless data architecture on Google Cloud Platform. It describes:
1) Using Google Cloud Pub/Sub to ingest tweets in real-time and guarantee delivery at scale.
2) Running a Python application that filters tweets and publishes them to a Pub/Sub topic using containers and Kubernetes for scalability.
3) Building a Cloud Dataflow pipeline that reads from Pub/Sub, formats tweets, analyzes sentiment with Natural Language API, and writes results to BigQuery for querying and visualization.
4. Black Friday (ˈblæk fraɪdɪ)
noun
The day following Thanksgiving Day in the United States. Since 1932, it has been regarded as the beginning of the Christmas shopping season.
5. Black Friday in the US
2012 - 2016
source: Google Trends, November 23rd 2016
6. Black Friday in Italy
2012 - 2016
source: Google Trends, November 23rd 2016
7. What are we doing
[Diagram: tweets about Black Friday → processing + analytics → insights]
11. What is Google Cloud Pub/Sub?
● Google Cloud Pub/Sub is a fully-managed real-time messaging service.
○ Guaranteed delivery
■ “At least once” semantics
○ Reliable at scale
■ Messages are replicated in different zones
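“At least once” delivery means a consumer may occasionally see the same message twice, so consumers should be idempotent. A minimal sketch of one common approach, de-duplicating on message IDs (this is illustrative, not part of the talk's code):

```python
def process_once(messages, handler, seen_ids=None):
    """Invoke handler on each message body, skipping message IDs already seen."""
    if seen_ids is None:
        seen_ids = set()
    results = []
    for msg in messages:
        if msg["messageId"] in seen_ids:
            continue  # duplicate redelivery: already handled, skip it
        seen_ids.add(msg["messageId"])
        results.append(handler(msg["data"]))
    return results
```

In a real subscriber the `seen_ids` set would need to be bounded or persisted; here it just shows why tracking IDs makes redeliveries harmless.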
12. From Twitter to Pub/Sub
$ gcloud beta pubsub topics create blackfridaytweets
Created topic [blackfridaytweets].
SHELL
13. From Twitter to Pub/Sub
[Diagram: a still-unknown producer (?) publishes to a Pub/Sub Topic; Subscriptions A, B and C deliver the messages to Consumers A, B and C.]
14. From Twitter to Pub/Sub
● Simple Python application using the TweePy library
# somewhere in the code, track a given set of keywords
stream = Stream(auth, listener)
stream.filter(track=['blackfriday', [...]])

[...]

# somewhere else, write messages to Pub/Sub
for line in data_lines:
    pub = base64.urlsafe_b64encode(line)
    messages.append({'data': pub})
body = {'messages': messages}
resp = client.projects().topics().publish(
    topic='blackfridaytweets', body=body).execute(num_retries=NUM_RETRIES)
PYTHON
22. What is Kubernetes (K8S)?
● An orchestration tool for managing a cluster of containers across multiple hosts
○ Scaling, rolling upgrades, A/B testing, etc.
● Declarative – not procedural
○ Auto-scales and self-heals to a desired state
● Supports multiple container runtimes, currently Docker and CoreOS rkt
● Open-source: github.com/kubernetes
33. What is Google Cloud Dataflow?
● Cloud Dataflow is a collection of open-source SDKs to implement parallel processing pipelines.
○ same programming model for streaming and batch pipelines
● Cloud Dataflow is a managed service to run parallel processing pipelines on Google Cloud Platform
34. What is Google BigQuery?
● Google BigQuery is a fully-managed Analytic Data Warehouse solution allowing real-time analysis of Petabyte-scale datasets.
● Enterprise-grade features
○ Batch and streaming (100K rows/sec) data ingestion
○ JDBC/ODBC connectors
○ Rich SQL-2011-compliant query language (new!)
○ Supports updates and deletes (new!)
35. From Pub/Sub to BigQuery
[Diagram: Pub/Sub Topic → Subscription → Dataflow Pipeline (Read tweets from Pub/Sub → Format tweets for BigQuery → Write tweets on BigQuery) → BigQuery Table]
36. From Pub/Sub to BigQuery
● A Dataflow pipeline is a Java program.
// TwitterProcessor.java
public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<String> tweets = p.apply(PubsubIO.Read.topic("...blackfridaytweets"));
    PCollection<TableRow> formattedTweets = tweets.apply(ParDo.of(new DoFormat()));
    formattedTweets.apply(BigQueryIO.Write.to(tableReference));
    p.run();
}
JAVA
37. From Pub/Sub to BigQuery
● A Dataflow pipeline is a Java program.
// TwitterProcessor.java
// Do Function (to be used within a ParDo)
private static final class DoFormat extends DoFn<String, TableRow> {
    private static final long serialVersionUID = 1L;
    @Override
    public void processElement(DoFn<String, TableRow>.ProcessContext c) throws IOException {
        c.output(createTableRow(c.element()));
    }
}

// Helper method
private static TableRow createTableRow(String tweet) throws IOException {
    return JacksonFactory.getDefaultInstance().fromString(tweet, TableRow.class);
}
JAVA
38. From Pub/Sub to BigQuery
● Use Maven to build, deploy or update the Pipeline.
$ mvn compile exec:java -Dexec.mainClass=it.noovle.dataflow.TwitterProcessor
-Dexec.args="--streaming"
[...]
INFO: To cancel the job using the 'gcloud' tool, run:
> gcloud alpha dataflow jobs --project=codemotion-2016-demo cancel 2016-11-19_15_49_53-5264074060979116717
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18.131s
[INFO] Finished at: Sun Nov 20 00:49:54 CET 2016
[INFO] Final Memory: 28M/362M
[INFO] ------------------------------------------------------------------------
SHELL
39. From Pub/Sub to BigQuery
● You can monitor your pipelines from Cloud Console.
40. From Pub/Sub to BigQuery
● Data starts flowing into BigQuery tables. You can run queries from the CLI or the Web Interface.
47. Sentiment Analysis with Natural Language API
[Diagram: Text → Natural Language API → Polarity: [-1, 1], Magnitude: [0, +inf)]
sentiment = polarity × magnitude
48. Sentiment Analysis with Natural Language API
[Diagram: Pub/Sub Topic → Dataflow Pipeline: Read tweets from Pub/Sub, then two branches: (1) Filter and Evaluate sentiment → Format tweets for BigQuery → Write tweets on BigQuery; (2) Format tweets for BigQuery → Write tweets on BigQuery. Both branches end in BigQuery Tables.]
49. From Pub/Sub to BigQuery
● We just add the necessary additional steps.
// TwitterProcessor.java
public static void main(String[] args) {
    Pipeline p = Pipeline.create();
    PCollection<String> tweets = p.apply(PubsubIO.Read.topic("...blackfridaytweets"));
    PCollection<String> sentTweets = tweets.apply(ParDo.of(new DoFilterAndProcess()));
    PCollection<TableRow> formSentTweets = sentTweets.apply(ParDo.of(new DoFormat()));
    formSentTweets.apply(BigQueryIO.Write.to(sentTableReference));
    PCollection<TableRow> formattedTweets = tweets.apply(ParDo.of(new DoFormat()));
    formattedTweets.apply(BigQueryIO.Write.to(tableReference));
    p.run();
}
JAVA
50. From Pub/Sub to BigQuery
● The update process preserves all in-flight data.
$ mvn compile exec:java -Dexec.mainClass=it.noovle.dataflow.TwitterProcessor
-Dexec.args="--streaming --update --jobName=twitterprocessor-lorenzo-1107222550"
[...]
INFO: To cancel the job using the 'gcloud' tool, run:
> gcloud alpha dataflow jobs --project=codemotion-2016-demo cancel 2016-11-19_15_49_53-5264074060979116717
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18.131s
[INFO] Finished at: Sun Nov 20 00:49:54 CET 2016
[INFO] Final Memory: 28M/362M
[INFO] ------------------------------------------------------------------------
SHELL
Black Friday is the biggest selling event in the US, and since 1932 it has marked the beginning of the Christmas shopping season.
Interest in Black Friday in the US has remained stable over the last few years, according to Google Trends.
However, if we perform the same analysis for Italy, we can see that interest in Black Friday there has grown exponentially.
That is why no company, anywhere in the world, can ignore this day.
Companies can take advantage of Black Friday to advertise themselves and sell more.
We are going to step into the shoes of a company that wants to propose some deals specific to Black Friday, so the problem is: which deals should we make, and on which channels should we advertise them, to maximize revenue?
Social networks like Twitter can help a lot in analyzing people’s trends and opinions, supporting us in making the right decision.
So today we are focusing on Twitter. <selected hashtags>
This is how we want to do this. The story is more or less always the same:
we get some data
we process it (removing unnecessary things, transforming others)
we store the data in a format that is good for analysis.
Complexities:
We do not have much time
We have to make it work even though we do not know how much traffic we will have to handle (how high is the peak we saw earlier?)
Our solution is to adopt a serverless architecture:
We want to use services that allow us to concentrate on our solution, rather than config files and boilerplate code
We do not have to configure or manage the infrastructure
We choose Google Cloud Platform because its Data Analytics offering is based exactly on these foundations.
Today we are going to explore almost all the tools of GCP for Data Analytics.
So, let’s start this whirlwind tour!
Let’s start from the beginning. For the ingestion part we are going to use two technologies:
Google Container Engine, the technology that powers Kubernetes-as-a-service on GCP (who knows Kubernetes? Containers/Docker?)
Google Cloud Pub/Sub, a middleware solution on the Cloud
Pub/Sub is a fully managed real time messaging service.
You create a topic, you can send messages to the topic, and if you are interested in a topic you can subscribe to it and start receiving messages.
Nothing new, other technologies do this. However, Pub/Sub has a few strong points:
It is a service, I do not have to configure a cluster
It is reliable by design
It keeps being reliable at scale
How do I create a Pub/Sub topic?
Without going much into detail, it is a one-liner.
gcloud is the command line tool that manages all Google Cloud Platform resources.
This is how we are going to use Pub/Sub: we implement something that converts tweets into messages, and by means of Pub/Sub we can distribute these tweets to several subscribers with ease.
Pub/Sub decouples producers and consumers: they do not have to know each other.
It improves the reliability of the overall system, acting as a shock absorber even when some part of the downstream infrastructure has problems.
We have a missing piece here: how do we capture tweets and transform them into messages?
We write a simple Python app that uses the TweePy library to interact with Twitter Streaming API
Somewhere we use the stream.filter method to track a list of keywords
somewhere else (in the listener of TweePy events) we collect tweets, packaging them and sending them out as Pub/Sub messages
(note the Pub/Sub topic name)
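The packaging step from slide 14 can be sketched as a small standalone function. This is a simplified illustration (the function name is mine, not the talk's code): each raw tweet line is base64-encoded, because Pub/Sub message payloads are transmitted as base64 strings, and the result is the `body` dict passed to the `topics().publish()` call shown on the slide.

```python
import base64

def build_publish_body(data_lines):
    """Package raw tweet JSON lines into the body expected by topics().publish()."""
    messages = []
    for line in data_lines:
        # Pub/Sub message payloads travel as base64-encoded strings
        pub = base64.urlsafe_b64encode(line.encode("utf-8")).decode("ascii")
        messages.append({"data": pub})
    return {"messages": messages}
```

For example, `build_publish_body(['{"text": "deal!"}'])` yields a one-message body whose `data` field decodes back to the original tweet JSON.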
We wrote the app, we tested it.
Now we have to deploy it (and its libraries) somewhere. Our first temptation would be...
To start a Virtual Machine, install Python on it and make it run there. However...
This is not the solution we want.
It doesn’t scale
It is hard to make fault-tolerant (if the VM crashes it doesn’t restart)
It is difficult to deploy and to update (no rolling update)
A much better solution is to use containers.
Containers provide a higher level of abstraction (OS-level rather than HW-level), which allows us to create portable and isolated deployments that can be installed easily in on-prem or Cloud environments.
We create a Docker image using a Dockerfile, which is a sequence of instructions that, starting from a base image, adds some pieces to build our personal solution. In this case we:
Install necessary libraries
Add our Python files
Invoke our Python executable file (the container will run as long as this command does)
We build an image based on the dockerfile and we are done.
A container solves the problems of deployment and portability, but not those of scaling and management.
We need a further layer of abstraction, and this level of abstraction is provided by Kubernetes.
Kubernetes is an open source orchestration tool for managing clusters of containers.
It introduces all those features that are missing from “standard” container deployments.
A cool thing about Kubernetes is that it is completely declarative - you do not specify that you want one more node or one less pod, but you define a desired state and the Kubernetes Master works to reach and maintain that state.
This is what we deploy on Kubernetes: a ReplicationController (or a ReplicaSet/Deployment in recent versions) is the definition of a group of container replicas that you want concurrently running.
For the sake of our example we need only one replica, but also in this case a ReplicationController is useful - as it ensures that this single replica is always up and running.
So we wrap our container into a Pod. The Pod is the replica unit of Kubernetes.
Each Pod runs on a cluster node, but...
...more than one Pod can run on a single node. The allocation of Pods to nodes is managed by the Kubernetes Master, which is a particular cluster node.
In Container Engine the K8S Master is completely managed (and free!)
Since version 1.3, Kubernetes also supports autoscaling of nodes.
If there are not enough resources available to keep up with Pod scaling, the node pool is enlarged.
Creating a Kubernetes cluster is easy:
1) we create the cluster
2) we acquire Kubernetes credentials using gcloud
3) we use kubectl (the open-source CLI) to submit commands to the Kubernetes Master
Once the cluster has been created, we can monitor all worker nodes from the Cloud Console.
Here we have one node,
that contains one Pod,
that contains one Container,
that contains our application, which is transforming tweets into Pub/Sub messages.
Cool! We have implemented the first piece of our processing chain. What’s next?
For the processing we want something equally scalable, so we are going to use a technology named Google Cloud Dataflow and...
...for the storage we are going to use Google BigQuery.
Google Cloud Dataflow is two things:
A collection of open-source SDKs to implement parallel processing pipelines. The cool thing about being open source is that runners for Dataflow pipelines have already been implemented for other open-source processing technologies, like Apache Spark or Apache Flink (all the code I have written for this demo could run in an open-source environment). The project itself is now an Apache Incubator project called Apache Beam.
Cloud Dataflow is also a managed service on Google Cloud Platform that runs Apache Beam pipelines.
Google BigQuery is an analytic data warehouse with impressive (almost magical) performance.
It comes with a series of features that make it a valid choice as an enterprise-grade DWH:
The ability to ingest streaming and batch data
JDBC and ODBC connectors to guarantee interoperability
A rich query language, which has now been renewed to support standard ANSI SQL-2011
A new Data Manipulation Language that supports updates and deletes
How are we going to make use of these tools?
We will build a simple Dataflow pipeline composed of three steps:
Read tweets from Pub/Sub
Transform tweets so as to conform with BigQuery API
Write tweets on BigQuery
By “tweet” I do not mean only the text, but all the information returned by the Twitter APIs (info about the user, etc.)
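To make the middle step concrete, here is what "formatting a tweet for BigQuery" can look like. The real pipeline does this in Java (the DoFormat function shown later parses the whole tweet JSON); this Python sketch is purely illustrative, and the selected field names, while taken from the Twitter API, are my choice:

```python
import json

def format_tweet(raw_tweet):
    """Parse a raw tweet JSON string into a flat dict ready for a table row."""
    tweet = json.loads(raw_tweet)
    return {
        "created_at": tweet.get("created_at"),
        "user_screen_name": tweet.get("user", {}).get("screen_name"),
        "text": tweet.get("text"),
    }
```

The point is the shape of the step: string in, structured row out, with no dependency on other elements, which is exactly what makes it parallelizable in a ParDo.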
The implementation is very easy: this is one of the best parts of Cloud Dataflow with respect to existing processing technologies like MapReduce.
First, we create a Pipeline object
The first operation is performed by invoking an apply method on the Pipeline object, and using a Source to create collections of data called PCollections. In this case, we are using a Pub/Sub Source to create a so-called unbounded PCollection (that is, a PCollection without a limited number of elements).
All subsequent operations are performed by invoking apply methods on PCollections, which in turn generate other PCollections.
The simplest operation you can apply to a PCollection is a ParDo (Parallel Do), which processes every element of the PCollection independently from the others.
We write data by applying a transform
At the end, we tell the system to run the pipeline.
The source (PubSubIO) determines if the pipeline is a streaming or a batch one. All the other components (like BigQueryIO) adapt themselves consequently, e.g. BigQueryIO uses Streaming APIs in streaming mode and Load Jobs in batch mode.
The argument of a ParDo is a DoFn object, we need to redefine the processElement method to instruct the system to do the right thing.
The easiest way to deploy a Dataflow pipeline is using Maven. (I have hidden some complexity here, like the choice of the runner and the staging location.)
Once your pipeline is deployed, you can monitor its execution from the Cloud Console.
You can check if data are actually being processed by querying the destination BigQuery table.
It works! We built a very simple processing pipeline that streams data in real-time to our DWH and allows us to query results right as they are coming in.
What now?
Now we have to find some interesting analyses that we can evaluate on our data, and represent them in a readable and shareable manner.
Google Data Studio is a BI solution that allows the creation of dashboards and graphs from several sources, including BigQuery.
Here you see an example showing the number of tweets per state in the US.
Not very fancy. In fact, we soon realize that the information we get from raw data does not give us very “smart” insights.
We need to enrich our data model in some way. The good news is that Google released a series of APIs exposing ready-to-use Machine Learning algorithms and models. The one that seems to fit our case is...
...Natural Language APIs. These APIs can perform several different tasks on text strings:
extract the syntactic structure of sentences
extract entities that are mentioned within a text
and even perform sentiment analysis.
The Sentiment Analysis API takes a text as input and returns two float values:
Polarity (float ranging from -1 to 1) expresses the mood of the text: positive values denote positive moods
Magnitude (float ranging from 0 to +inf) expresses the intensity of the feeling. Higher values denote stronger feelings.
Our personal simplistic definition of “sentiment” will be “polarity times magnitude”.
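That definition is a one-liner; a minimal sketch with the two input ranges checked (the range checks are my addition, not part of the talk):

```python
def sentiment(polarity, magnitude):
    """Combine the two Natural Language API values into a single score."""
    if not -1.0 <= polarity <= 1.0:
        raise ValueError("polarity must be in [-1, 1]")
    if magnitude < 0.0:
        raise ValueError("magnitude must be non-negative")
    return polarity * magnitude
```

So a mildly negative text said forcefully (polarity -0.5, magnitude 2.0) scores lower than a strongly negative text said weakly (polarity -0.9, magnitude 0.5), which is the trade-off this simplistic definition accepts.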
Let’s modify our pipeline. For illustration purposes we will maintain the old flow adding another one to implement the sentiment analysis.
The evaluation of the sentiment will happen only for a subset of tweets (those that explicitly contain the word “blackfriday”).
How does this reflect on the Pipeline code? We only have to add three lines of code (I’m lying!)
Note how we start from the “tweets” PCollection both for the processing and the write of raw data. Note also how we can reuse the DoFormat function for both flows.
Updating a pipeline is easy if the update doesn’t modify the existing structure (we are only adding new pieces). We only have to provide the name of the job we want to update. Dataflow will take care of draining the existing pipeline before shutting it down.
The Cloud Console shows the updated pipeline, and new “enriched” data is immediately available in a BigQuery table.
We did it! We built a serverless scalable data solution based on Google Cloud Platform. One interesting aspect about this architecture is that it is completely no-ops, and...
...it has integrated logging, monitoring and alerting thanks to Google Stackdriver. And we didn’t have to do anything!
Let me show you the final solution. We will see how easy it is to
query data,
monitor the infrastructure,
and we will give a look to some dashboards.
When you detect an anomaly in one of the trends, you can drill down in BigQuery to explore the reasons.
Walmart’s popularity is not so high, mainly due to their decision to start Black Friday sales at 6 PM on Thanksgiving Day.
Amazon’s popularity dropped right after they announced their first “Black Friday Week” deals, which apparently did not meet customers’ expectations (they are recovering, though :)