4. The TensorFlow ecosystem:
● Data Design: tf.data, TF Datasets
● Model Design: Keras, Estimators
● Training: Distribution Strategy
● Tooling: TensorBoard (analysis), XProf (performance), TFMA (explainability), TFI (fairness)
● Serialization: SavedModel
● Model Repository: TensorFlow Hub
● Cloud, on-prem: TensorFlow Serving
● Android, iOS, Raspberry Pi: TensorFlow Lite
● Browser and Node: TensorFlow.js
● Other language bindings: C, Java, Go, C#, Rust, R, ...
Training and deployment on CPU, GPU, TPU
5. The TensorFlow Ecosystem is committed to 2.x
TF 2.0:
● Eager execution by default
● Tight integration with Keras
TF 2.1:
● TPU training support
TF 2.2:
● Performance improvements across both graph (tf.function) and eager execution
● New TF Profiler for CPU/GPU/TPU
TF 2.3:
● Experimental tf.data as a service
● Experimental Keras Preprocessing for data preprocessing
● Improved TF Profiler, stable TPUStrategy
6. TF 2.4
Improvements to distributed training:
● Robust Multi-Worker Mirrored Strategy
● Experimental support for Parameter Server training
TF now runs with CUDA 11 and cuDNN 8 and uses TF32 by default.
Adds experimental support for a subset of NumPy APIs: `tf.experimental.numpy`.
Lands a major internal refactor of the Keras Functional API, improving reliability, stability, and performance.
Mirrored variables performance improvements, enabled by default for TPUs, improving startup time.
9. Cutting-edge ML research with TensorFlow 2.x
● Real-time sign language detection model (Google Research: Real-Time Sign Language Detection using Human Pose Estimation)
● Text-to-text transfer learning (Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer)
● Massively scaling RL with the SEED RL architecture (arXiv: SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference)
● RL with quantum variational circuits (arXiv: Reinforcement Learning with Quantum Variational Circuits)
10. Add-ons and extensions in the TF
ecosystem
● TF Probability
● TF Graphics
● Mesh TensorFlow
● TF Model Garden
● TF Agents
● TF Text
● Swift for TensorFlow
● Sonnet
● Neural Structured Learning
● TF Quantum
● ...and more on tensorflow.org!
11. Research models migrating to TF 2.x
● Object Detection API (TF Blog: TF 2 meets the OD API)
● TF Recommenders (TFRS) (TF Blog: Introducing TF Recommenders)
12. TensorFlow Recommenders: end-to-end recommender systems
● Built on TensorFlow 2.0 and Keras.
● Flexible, multi-objective retrieval and ranking models.
● Easy path to production via TensorFlow Serving and approximate nearest neighbour retrieval.
http://tensorflow.org/recommenders
14. TensorFlow Cloud
From Local Flexibility to Cloud Scalability
Python package with APIs to go from local debugging to distributed Cloud training
Minimal Setup - No changes to your model
import tensorflow_cloud as tfc
tfc.run()
19. Latest on-device ML with TensorFlow Lite
● Hold For Me on the new Pixel 5/4a
● OCR translation in Google Translate
● HDR+ on Pixel 4a
20. Improvements to TensorFlow Lite
Conversion:
● Improved conversion for LSTM/RNN
● MLIR-based converter
● Reduced binary size when using TF ops in TFLite
Performance:
● Faster CPU inference with XNNPACK
● Faster GPU inference with OpenCL
● Memory optimization: shared buffers
● Core ML delegate for iOS
Model Optimization:
● Keras quantization-aware training
● Post-training dynamic range quantization
● LSTM quantization
● Weight clustering
High-level APIs:
● TF Lite Task Library: easier inference
● Model Maker: model customization
● Newer SOTA models and reference apps
● Codegen: Android Studio model import
23. The ML workflow: Define Problem → Construct and Prepare Data → Build and Train Model → Deploy → Iterate
Questions to ask along the way:
● Who is my ML system for?
● Am I using a representative dataset?
● Is there real-world / human bias in my data?
● Where do I get relevant features in a privacy-preserving way?
● How does my data affect model performance?
● How is my model performing?
● What can I do to improve the model?
● Should I deploy my model?
● Are there any privacy considerations?
● Are test users diverse?
● Are there complex feedback loops?
29. Collaborative ML with TensorBoard.dev
https://twitter.com/zacharynado/status/1276252206967160834
30. Translation and Localization
Jupyter notebook community translation support (currently 13 languages)
Translation of guides, tutorials, and other documentation into Japanese, Korean, and Simplified Chinese (with more to come!)
31. Get Involved: TensorFlow User Groups
80 TFUGs around the globe
● Grassroots communities
● For example, the TFUG India Summit & TensorFlow Everywhere India was a huge success!
Interested in creating one? Email tfug-help@tensorflow.org
32. Get Involved: Google Developer Experts
165 ML GDEs globally
● In 2020, they gave 961 tech talks/workshops and 620 videos/articles
● Great collaboration projects like Background Stylizer and AI vs COVID
33. Education: Certificate in TensorFlow Development
Topics:
● Basic ML programming concepts in TensorFlow
● Text Classification
● Computer Vision
● Sequences and Prediction
*Stipends available
TensorFlow.org/certificate
Congrats to our 1K developers from 50+ countries who have passed!
Hi everyone. Welcome to the 2020 TensorFlow Everywhere event.
I’m [Name], [Title]. Thanks for tuning in. TensorFlow Everywhere is a series of global events led by TensorFlow and machine learning communities around the world.
We are dropping into these events around the world over the next few weeks, and while we can’t meet in person, we’re hoping these virtual events are more accessible than before.
Since the TensorFlow Dev Summit in March, the last time our global community came together virtually, a lot has happened...
A global pandemic
A resurfaced fight for racial justice is happening around the world
Many of us are experiencing changing personal circumstances
And a different way of working.
In times of uncertainty, it’s core to who we are to turn to technology, like machine learning, to help. We have found comfort in knowing we are part of an important infrastructure for the world that has the ability to help others.
For example, within Google during these times, we’ve seen how TensorFlow played a timely role in supporting Google Meet, our video conferencing software - you can see it being used as an education platform in the bottom middle photo. TensorFlow helps the software adapt to changing bandwidth and enhances picture quality. Since making Meet's advanced features free for all G Suite and G Suite for Education users in March, we've seen daily usage grow by 30x, with Meet hosting 3 billion minutes of video meetings daily (blog post with more details here).
This is just one example of how TensorFlow is helping us adapt and support new forms of communication and collaboration.
For TensorFlow, we are seeing unprecedented scale and growth.
TensorFlow crossed 100 million downloads with over 10 million in May (or June?) alone.
45M tutorial and guide views
More than 8M articles read on the TF blog and 8.1M views on YT
And 500K learnings on Coursera and Udacity
This is a testament to the community, so we want to say thank you for continuing to support TensorFlow.
We have some exciting new updates, but first we want to recap what we launched the last time we got together at the TensorFlow Dev Summit.
TensorFlow evolved from a single library to a full ecosystem of tools:
Everything you need to build a model: Data Design, Model Design, Distribution Strategy, Tooling
A model store
And everything you need to deploy a model, anywhere
At the heart of our Ecosystem, TF 2.x is our new API, which we built following your feedback. You needed a simpler, easier way to build your ML systems.
We launched TF2.x a year ago, and we’ve been hard at work getting it to cover all the features you need
2.0 was about eager execution, and offering a simpler high-level API with Keras
2.1 brought TPU training with the same code, 2.2 brought significant performance wins, and 2.3 focused on better data and performance tooling.
And just last week, we released version 2.4, which builds on this momentum.
The major focus in this release is on distributed training...
It makes multi-worker synchronous training with mirrored strategy more robust against deadlocks.
There is also experimental support for parameter server training in tf.keras.
Plus there are more fixes and improved documentation for better usability.
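A minimal sketch of the multi-worker setup mentioned above: with no TF_CONFIG in the environment, the same strategy collapses to a single worker, so this script can be debugged locally before scaling out (the model here is a toy stand-in).

```python
import tensorflow as tf

# Without TF_CONFIG set, this behaves like a single-worker strategy,
# so the identical script runs locally and in a multi-worker cluster.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across all workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")

print(strategy.num_replicas_in_sync)
```

On a single CPU-only machine this reports one replica; in a real cluster the count reflects all workers' devices.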
TensorFlow 2.4 also adds support for CUDA 11, allowing support for NVIDIA's latest GPUs.
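Since TF32 is on by default in 2.4, models that are sensitive to the reduced matmul precision can opt out. A small sketch, assuming TF 2.4+ (the toggle exists everywhere, even though TF32 itself only kicks in on Ampere-class GPUs):

```python
import tensorflow as tf

# TF32 trades a few mantissa bits for speed on Ampere GPUs; disabling it
# restores full float32 precision for matmuls and convolutions.
tf.config.experimental.enable_tensor_float_32_execution(False)
print(tf.config.experimental.tensor_float_32_execution_enabled())  # False
```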
And lastly, there's tf.experimental.numpy, which adds support for accelerating some NumPy ops using the TensorFlow runtime and lets you use TensorFlow APIs in your NumPy code.
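A minimal taste of the NumPy-compatible API: the same familiar calls, but the results are TensorFlow tensors that can run on accelerators and mix with regular TF ops.

```python
import tensorflow.experimental.numpy as tnp

# NumPy-style construction and reduction, executed by the TF runtime.
x = tnp.reshape(tnp.arange(6), (2, 3))  # [[0, 1, 2], [3, 4, 5]]
y = tnp.sum(x * 2)                      # broadcasting works as in NumPy
print(float(y))                         # 30.0
```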
One of the advantages of the TensorFlow ecosystem is that it’s flexible enough to meet different needs.
If you’re a researcher, it gives you the control and flexibility for experimentation,
For applied ML engineers or data scientists, you get tools that help your models have real-world impact.
Finally, there are libraries in the ecosystem that can help create better AI experiences no matter where they are running.
All of this is underscored by our common goal of building AI responsibly.
I want to now pass it to Sarah who’s going to show you how the TF ecosystem in 2.x is touching upon these areas.
Thanks Kemal. I'm Sarah, and I'm an engineering lead on TensorFlow.
Like Kemal mentioned earlier, the TF ecosystem serves the needs of diverse users and I’d like to spend a few minutes talking about each of them today.
Let’s start with researchers, a segment that we care deeply about.
NEXT_SLIDE
TensorFlow’s beginnings were as a research-focused framework. And it continues to support cutting edge research today.
You can see here some of the latest publications that were published using 2.x -
including a new real-time sign language detection model and reinforcement learning using Quantum circuits.
NEXT_SLIDE
The TF ecosystem is a rich collection of components, add-ons, and extensions, and research is no different. We maintain several of these libraries to serve the specialized needs of research. Many of them were developed by researchers, for researchers.
NEXT_SLIDE
SKIP
Libraries like TF Probability and TF Agents work with 2.x and we’re working across multiple teams to make sure the rest of the extensions are compatible with 2.x.
With 2.x, the experience has gotten better, and we have gotten feedback that it addresses a lot of the usability needs of researchers.
And we continue to focus on compatibility across the ecosystem - including APIs, libraries and models.
We’re seeing that about 57% of XManager Python jobs now support TF 2.x, which we’re excited about. In 2021, making the migration process easier is something that we will focus on.
NEXT_SLIDE
TensorFlow Recommenders (TFRS) is an open-source TensorFlow package that makes building, evaluating, and serving sophisticated recommender models easy.
Built with TensorFlow 2.x, TFRS makes it possible to:
Build and evaluate flexible candidate nomination models;
Freely incorporate item, user, and context information into recommendation models;
Train multi-task models that jointly optimize multiple recommendation objectives;
Efficiently serve the resulting models using TensorFlow Serving.
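To make the retrieval idea concrete: a minimal plain-Keras sketch of a two-tower model of the kind TFRS builds on. This is not the TFRS API itself, and the vocabulary sizes and IDs below are made up for illustration.

```python
import tensorflow as tf

# Hypothetical vocabulary sizes and embedding width.
NUM_USERS, NUM_ITEMS, DIM = 1000, 500, 32

# Each tower maps an ID to a learned embedding.
user_tower = tf.keras.Sequential([tf.keras.layers.Embedding(NUM_USERS, DIM)])
item_tower = tf.keras.Sequential([tf.keras.layers.Embedding(NUM_ITEMS, DIM)])

# Retrieval scores a (user, item) pair by the dot product of the two towers;
# at serving time, approximate nearest neighbour search finds top items fast.
users = tf.constant([3, 7])
items = tf.constant([42, 42])
scores = tf.reduce_sum(user_tower(users) * item_tower(items), axis=1)
print(scores.shape)  # (2,)
```

TFRS wraps this pattern with ready-made retrieval and ranking tasks, metrics, and multi-objective losses.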
Moving on to production users: TF remains the mainstay of production deployment across Google and is powering the production ML needs of all our PAs and developers.
NEXT_SLIDE
TensorFlow Cloud is a Python package that provides APIs for a seamless transition from local debugging to distributed training in Google Cloud. It simplifies training TensorFlow models on the cloud into a single, simple function call, requiring minimal setup and no changes to your model. TensorFlow Cloud automatically handles cloud-specific tasks such as creating VM instances and distribution strategies for your models. We'll demonstrate how to interface with Google Cloud through TensorFlow Cloud, starting with the simplest use case.
It's not going to work with every model every time.
In terms of hardware, TPUs are pushing the frontier of what is possible in ML compute, and users at Google are able to realize their benefits via TensorFlow.
We continue to focus on making TPUs easier to use as well as more scalable to meet the demands of our products and users.
NEXT_SLIDE
Now, Cross-platform readiness and capabilities are a core part of TF.
And TF makes it possible and easy to deploy your production model to edge devices or the browser, and these solutions are already running on billions of devices.
NEXT_SLIDE
However, running on different platforms and devices means operating under unique constraints, like low-latency requirements and poor network connectivity, all while trying to preserve privacy.
Libraries like TensorFlow.js and TensorFlow Lite are built to keep these constraints in mind.
NEXT_SLIDE
TFJS is a library for training and deploying models in the browser.
-- Be sure to mention Electron
NEXT_SLIDE
Next, let’s talk about on-device. TensorFlow Lite is powering machine learning on mobile and embedded devices across Google. It’s used in pretty much every on-device ML feature in large products across Google and is the industry leader across several performance dimensions.
Just the other week, we announced the new Pixel 4a with new AI features like “Hold For Me” and HDR+, which run with TFLite.
NEXT_SLIDE
Later today, you’ll hear from Arun about improvements we’re making to TF Lite, for better usability, performance and model optimization gains, all based off of internal and external feedback.
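One concrete example from the model-optimization column of slide 20 is post-training dynamic-range quantization, sketched here with the standard converter API. The toy model stands in for a real one; assume a TF 2.x install.

```python
import tensorflow as tf

# A toy Keras model standing in for a real, trained one.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: weights are stored as int8, activations stay
# float at runtime, and no calibration dataset is required.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()  # serialized .tflite flatbuffer
```

The resulting bytes can be written to a `.tflite` file and loaded with the TFLite interpreter on-device.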
Also, we have a full day of talks for on-device ML at Google set up for tomorrow, please check those out if you are interested.
NEXT_SLIDE
Whether you’re a researcher or an applied ML engineer, one thing that we all have in common is the responsibility that comes with building a machine learning system.
I’d like to quickly talk about how the TensorFlow ecosystem is helping to empower all users to build AI responsibly and the tools and resources that are available.
NEXT_SLIDE
Responsible AI is a set of principles and practices that help guide developers in building AI for everyone.
It touches upon areas like fairness, interpretability, privacy, and security, and making sure we keep these in mind when we build AI systems.
A typical ML workflow will have several stages.
We think Responsible AI can be incorporated into each one of them.
To do this, you can ask key questions to ensure your AI system is inclusive and secure.
For example, asking ‘Who is my system for?’ or ‘Are there any privacy considerations when collecting my data?’ is important because fairness and privacy concerns may be experienced differently in different applications.
And tackling these issues is not a one-size-fits-all task. There are several tools and resources to help.
For example, TF Privacy’s new functionality allows you to assess your model’s privacy. Training with privacy is not always possible. The new functionality allows you to measure a model’s privacy without having to retrain it.
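To illustrate the idea behind measuring privacy without retraining: membership-inference-style tests compare a model's behavior on training versus held-out examples. This is a toy NumPy illustration of that loss-gap signal, not the actual TF Privacy API, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-example losses: a model that memorizes tends to have much lower
# loss on training examples than on held-out ones.
train_losses = rng.normal(loc=0.2, scale=0.05, size=1000)
test_losses = rng.normal(loc=0.8, scale=0.05, size=1000)

# A simple attacker predicts "member" when loss falls below a threshold;
# the accuracy of that rule is one rough signal of privacy leakage.
threshold = np.median(np.concatenate([train_losses, test_losses]))
attack_acc = 0.5 * ((train_losses < threshold).mean()
                    + (test_losses >= threshold).mean())
print(attack_acc)  # near 1.0 here, i.e. this toy model leaks membership
```

An attack accuracy near 0.5 would mean the losses are indistinguishable, suggesting little membership leakage.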
Another tool example is the Model Card Toolkit, which can automate transparency reporting for models.
Model cards are a framework for sharing the essential facts of a machine learning model in a structured, accessible way.
Building a Model Card is time intensive & requires specific expertise -- the Model Card Toolkit facilitates this process by auto-generating key Model Card components using existing pipeline artifacts and displays them in a standardized UI.
To help make things easier, we launched the Responsible AI with TensorFlow Toolkit in June.
This toolkit packages all of the Responsible AI principles, tools, and resources in the TF ecosystem into a single destination on tensorflow.org.
I want to end on talking about the most important part of what we are building, and that’s the machine learning community.
And I want to begin by thanking all of YOU - your feedback, your contributions, what you build, this is what makes all of this possible.
We understand that ML is often a collaborative experience. We built TensorBoard.dev to enable anyone to easily upload their TensorBoard logs and get a permalink that can be shared in blog posts, papers, and on social media.
Since launch last year, we have added more of TensorBoard’s dashboards, the ability to download metrics data as Pandas DataFrames, adding experiment metadata like title and description, and more.
We’re still at an early stage, but we’re excited to see how researchers are using TensorBoard.dev, like showing the 550 curves that don’t fit in a paper, or provide access to the raw metrics so that others can reproduce the charts in your paper and explore them further.
To support these product updates and make TensorFlow more accessible for everyone, we’re excited to share an update on translation.
Through a new partnership with GitLocalize, we have improved the contribution experience for community translators by adding Jupyter notebook support to our tooling. Now it's easier than ever for the community to contribute translations for our runnable documentation. Currently 13 languages have community-contributed docs: Arabic, Greek, Spanish, French, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, Vietnamese, and Simplified Chinese.
And we are also supplementing the number of translations. We completed the translation of our guides, tutorials and other documentation into Japanese, Korean, and Simplified Chinese with more to follow!
We hope this improves the product experience for our global practitioners.
One great way to connect is to join a TensorFlow User Group.
These grassroots communities started organically, and we now have 80 of them globally.
Even with the pandemic in 2020, our communities remained strong. For example, the Indian TFUGs hosted a 4-day, 20-session event covering a wide range of areas including mobile, web, production, and research.
I’m sure this map can have a lot more dots! If you want to start a user group, please reach out and we’ll help you get started!
We love our 165 GDEs. They give techtalks, run workshops, shoot videos and write articles for developers worldwide.
In 2020 in particular, the GDEs expanded their efforts toward development projects: the Background Stylizer won a TF Community Spotlight award, and the AI vs COVID project released a very large COVID-related dataset.
Speaking of the certificate program, we launched this at the Dev Summit in March and have had over 1,000 people spanning 53 countries pass the exam!
This exam was created by the TensorFlow team and covers topics such as:
Text classification: using NLP to build spam filters
Computer vision: using CNNs to do image recognition
Sequences and Prediction
By passing this foundational certification, you’ll be able to share your expertise with the world, and display your certificate badge on LinkedIn, GitHub, or the TensorFlow Certificate Network.
And to widen access to people of diverse backgrounds and experiences, we're excited to offer a limited number of stipends to help cover the certificate costs.