We have reached a remarkable point in the evolution of AI: the technology is being applied to use cases ranging from healthcare to the world's biggest humanitarian and environmental challenges. Our ability to learn task-specific functions for vision, language, sequence, and control tasks is improving at a rapid pace. This talk will survey some of the current advances in AI, compare AI to other fields that have historically developed over time, and calibrate where we are on the relative advancement timeline. We will also speculate about the next inflection points and capabilities that AI can offer down the road, and look at how those might intersect with other emergent fields, e.g., quantum computing.
2. Today we’ll talk about...
● Long-term perspectives on AI
● Potential technology inflection points
  ○ Predicted variables
  ○ Quantum computing
● Potential use case inflection points
  ○ Tackling societal challenges
3. AI vs. ML
Artificial Intelligence: the science of making things smart.
Machine Learning: making machines that learn to be smarter.
4. Spam the old way:
Write a computer program with explicit rules to follow
if email contains "sale"
  then mark is-spam;
if email contains …
if email contains …
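The rule-based approach above can be sketched as a short Python function. This is a minimal illustration, assuming a hand-picked keyword list (the rules here are invented for the example, not taken from the talk):

```python
# Hand-written spam rules: every rule must be anticipated by a programmer.
SPAM_RULES = ["sale", "free money", "act now"]

def is_spam(email: str) -> bool:
    """Mark an email as spam if it matches any explicit keyword rule."""
    text = email.lower()
    return any(keyword in text for keyword in SPAM_RULES)

print(is_spam("Huge SALE this weekend only!"))  # True
print(is_spam("Meeting notes attached"))        # False
```

The brittleness is visible immediately: any spam phrasing not covered by an explicit rule slips through, and every new rule is a code change.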
6. Spam the new way:
Write a computer program to learn from examples
try to classify some emails;
change self to reduce errors;
repeat;
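The try/change/repeat loop above can be sketched as a tiny perceptron over keyword features. This is an illustrative toy, assuming made-up feature words and training emails (not the talk's actual system):

```python
# Toy "learn from examples" loop: a perceptron that adjusts its own
# weights whenever it misclassifies a training email.
WORDS = ["sale", "free", "meeting", "report"]

def features(email):
    text = email.lower()
    return [1.0 if w in text else 0.0 for w in WORDS]

def classify(weights, bias, email):
    score = sum(w * x for w, x in zip(weights, features(email))) + bias
    return score > 0  # True = spam

def train(examples, epochs=20):
    weights, bias = [0.0] * len(WORDS), 0.0
    for _ in range(epochs):                       # repeat;
        for email, label in examples:             # try to classify some emails;
            if classify(weights, bias, email) != label:
                delta = 1.0 if label else -1.0    # change self to reduce errors;
                weights = [w + delta * x
                           for w, x in zip(weights, features(email))]
                bias += delta
    return weights, bias

examples = [
    ("big sale free gifts", True),
    ("free sale today", True),
    ("weekly meeting report", False),
    ("meeting agenda", False),
]
w, b = train(examples)
print(classify(w, b, "free sale on laptops"))      # True
print(classify(w, b, "quarterly report meeting"))  # False
```

No explicit rules are written: the program derives its behavior from labeled examples, which is the key contrast with the previous slide.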
22. Potential of ML across industries and use cases
[Chart: industries (Finance, Travel, Automotive, Telecom, Media, Consumer, Healthcare) plotted by impact score vs. volume (breadth and frequency of data). Example use cases: personalize advertising; identify and navigate roads; personalize financial products; optimize pricing and scheduling in real time.]
25. Growing adoption of ML
● Adoption of ML in applications has increased rapidly over the last 4 years.
● The headroom is still very large.
● ML systems are written from an ML engineer’s point of view, while systems are built from a full-stack SWE point of view; this implicit mismatch → slow adoption.
● We need a solution designed from the full-stack SWE perspective.
26. Machine Learning today
Goal: predict a value based on observations (from the past, maybe the distant past).
1. Capture the observations in a feature vector.
2. With a good feature vector, building good ML models is possible.
[Diagram: feature vector → experiments and tuning → prediction system; typically multiple SWE-quarters of work.]
27. Today’s World
[Diagram: the “Chasm of Suffering” between Products and ML.
Products side: C++, Java, Python, Javascript.
ML side: model, Tensorflow, hyperparameters, logging, feature store, feature computation, model serving, operational concerns, privacy, policy and compliance.]
29. Characteristics of a solution
● Feel native to programming and system building.
● Inversion of control: the software developer holds control.
● Simplicity without sacrificing expressiveness, particularly in the types of problems it can solve.
30. Places to look
● Programs and systems are a mix of:
○ Variables / Data
○ Computation
○ Control Flow
● We can imagine overloading any of these for a new abstraction.
31. Which one to overload
● Programs and systems are a mix of:
  ○ Variables / Data → object-oriented view of prediction
  ○ Computation → functional view of prediction
  ○ Control Flow → decision view of prediction
● All are valid and can generate meaningful abstractions; we choose variables for the object-oriented abstraction and its flexibility in data visibility.
32. Predicted Variables
● Overload the notion of variables to give them “self-assignment” ability.
● Allow them to observe the state of the program.
● Blame after-effects on them.
● They learn to evaluate themselves every time they are used.
33. Predicted Variables
PVars is an end-to-end solution: go straight from code to predictions.
Teams focus on data and product goals, not pipelines and infrastructure for ML.
[Diagram: start using PVars → prediction. “PVars: Predicted Variables in Programming”]
34. Example: Hello World
pvar = PVar(dtype=bool)  # create the variable
v = pvar.value           # read from the variable
while not v:
    pvar.feedback(BAD)   # provide feedback to the variable
    v = pvar.value       # read from the variable
pvar.feedback(GOOD)      # provide feedback to the variable
print("Hello World")
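To make the "Hello World" above runnable, here is a toy stand-in for the PVar API. The real PVars backend uses learned models; this stub just does epsilon-greedy selection over the two boolean values, which is an assumption made for illustration, not the actual implementation:

```python
import random

GOOD, BAD = 1.0, -1.0  # feedback values (names assumed from the example)

class PVar:
    """Toy predicted variable: a bandit over {True, False}."""
    def __init__(self, dtype, epsilon=0.2):
        self.dtype = dtype
        self.rewards = {True: 0.0, False: 0.0}  # cumulative reward per value
        self.epsilon = epsilon
        self.last = None

    @property
    def value(self):
        if random.random() < self.epsilon:      # explore
            self.last = random.choice([True, False])
        else:                                   # exploit best value so far
            self.last = max(self.rewards, key=self.rewards.get)
        return self.last

    def feedback(self, reward):
        # Blame the after-effect on the most recently produced value.
        if self.last is not None:
            self.rewards[self.last] += reward

pvar = PVar(dtype=bool)
v = pvar.value
while not v:
    pvar.feedback(BAD)   # False earns negative reward...
    v = pvar.value       # ...so the variable learns to return True
pvar.feedback(GOOD)
print("Hello World")
```

The loop terminates because BAD feedback pushes the False arm's reward down, so exploitation settles on True.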
35. Caches using Predicted Variables
Before PVars:

class LRUCache(object):
    def __init__(self, size):
        self.storage = CacheStorage(size)

    def get(self, key):
        if key not in self.storage:
            return None
        self._update_timestamp(key, now())
        return self.storage[key]

    def store(self, key, value):
        if self.storage.full():
            evict_key = self._get_key_to_evict()
            self.storage.evict(evict_key)
        self.storage.insert(key, value, now())
36. Caches using Predicted Variables
After PVars:

class PredictedCache(object):
    def __init__(self, size):
        self.storage = CacheStorage(size)
        self.pvar = PVar(dtype=float,
                         observations={'access': key_type, 'store': key_type},
                         initial_policy_fn=now)

    def get(self, key):
        if key not in self.storage:
            self.pvar.feedback(BAD)
            return None
        self.pvar.feedback(GOOD)
        self.pvar.observe('access', key)
        predicted_timestamp = self.pvar.value
        self._update_timestamp(key, predicted_timestamp)
        return self.storage[key]

    def store(self, key, value):
        self.pvar.observe('store', key)
        if self.storage.full():
            evict_key = self._get_key_to_evict()
            self.storage.evict(evict_key)
        predicted_timestamp = self.pvar.value
        self.storage.insert(key, value, predicted_timestamp)
37. Caches
● Predicting which slot to evict (1-out-of-C)
  ○ Improvements over the LRU policy in the small-cache-size range
  ○ Power-law synthetic access pattern (5000 keys)
38. PVars can express a wide range of uses
● Constant value → predicted constants
● Single invocation, ground-truth feedback → supervised ML
● Single invocation, cost feedback → bandits
● Multiple invocations, cost feedback → RL / blackbox / dynamical-systems methods
● All of the above with stochastic or deterministic variables
● Which evaluations should I get feedback on? → active learning
● Multi-level data visibility → privacy-sensitive ML
● Device locality for variables → federated computation
● …and more
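The "single invocation, cost feedback → bandits" row above can be sketched as an epsilon-greedy bandit: each use of the variable picks an arm and receives only a noisy cost, never a ground-truth label. The arm costs below are made-up illustrative numbers:

```python
import random

def epsilon_greedy(costs, steps=5000, epsilon=0.1, seed=0):
    """Choose among arms with unknown costs; learn from cost feedback only."""
    rng = random.Random(seed)
    n_arms = len(costs)
    pulls = [0] * n_arms
    totals = [0.0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon or 0 in pulls:
            arm = rng.randrange(n_arms)          # explore
        else:                                    # exploit lowest estimated cost
            arm = min(range(n_arms), key=lambda a: totals[a] / pulls[a])
        cost = costs[arm] + rng.gauss(0, 0.1)    # noisy observed cost
        pulls[arm] += 1
        totals[arm] += cost
    return min(range(n_arms), key=lambda a: totals[a] / pulls[a])

best = epsilon_greedy([0.9, 0.2, 0.7])  # arm 1 has the lowest true cost
print(best)
```

A PVar in this regime behaves like the `arm` variable here: its "value" is the choice, and `feedback` supplies the cost.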
41. Binary Search (normal distribution)
[Diagram: four predict/reward variants connected by “simplify prediction” and “simplify reward” arrows; simplifying the reward adds a discount of 0.75, moving from bandits to RL:
● Predict: p * interpolation + (1 − p) * binary; Reward: |old| / |new| / 2 − (2 if not found)
● Predict: position; Reward: |old| / |new| / 2 − (2 if not found)
● Predict: p * interpolation + (1 − p) * binary; Reward: −1 if not found, 10 when found — similar results on chi2, gamma, pareto, and power distributions
● Predict: position; Reward: −1 if not found, 10 if found — works only on uniform and triangular distributions for now]
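The search primitive behind the experiment above can be sketched as follows: each step probes a position that mixes interpolation search with binary search, pos = p * interpolation + (1 − p) * binary. Here `p` is a fixed constant for illustration; in the experiment a predicted variable learns it:

```python
def mixed_search(arr, target, p=0.5):
    """Search sorted arr, probing a blend of interpolation and midpoint."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2                       # binary-search probe
        if arr[hi] != arr[lo]:                     # interpolation-search probe
            interp = lo + int((target - arr[lo]) * (hi - lo)
                              / (arr[hi] - arr[lo]))
        else:
            interp = mid
        pos = int(p * interp + (1 - p) * mid)      # blended prediction
        pos = min(max(pos, lo), hi)                # clamp into current range
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

arr = sorted([3, 8, 15, 16, 23, 42, 99])
print(mixed_search(arr, 23))  # 4
print(mixed_search(arr, 50))  # -1 (not found)
```

On skewed value distributions the learned mix can probe closer to the target than either pure strategy, which is what the reward on probe-range shrinkage (|old| / |new|) is measuring.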
42. We aspire to change programming
Make using ML a natural thing for all developers, in all programming languages:
● whenever you add a heuristic (or a constant or a flag);
● there’s no reason not to use it wherever you put an “arbitrary” constant right now.
Stretch goals:
● C++ 202X, Python 4, and Java 10 will have predicted variables as a standard type.
46. Task
Random circuits: the “hello world” program for quantum processors.
Formulate a quantum circuit by randomly picking 1-qubit or 2-qubit gates from a universal gate set acting on the global superposition state.
Task: produce samples {x1, ..., xm} from the distribution pU(x).
Recent result from complexity theory (Bouland, Fefferman, Nirkhe, Vazirani): it is #P-hard to compute pU(xi).
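The random-circuit task above can be sketched for a handful of qubits with a dense state-vector simulation. This is purely illustrative: the gate set (random single-qubit rotations plus CZ) is an assumption standing in for a universal gate set, and real supremacy experiments use qubit counts far beyond what this kind of classical simulation can handle:

```python
import cmath, math, random

random.seed(1)
n = 3           # qubits
N = 2 ** n      # Hilbert-space dimension
state = [0j] * N
state[0] = 1 + 0j  # start in |000>

def apply_1q(state, q, u):
    """Apply a 2x2 unitary u to qubit q of the state vector."""
    out = list(state)
    for i in range(len(state)):
        if (i >> q) & 1 == 0:
            j = i | (1 << q)
            out[i] = u[0][0] * state[i] + u[0][1] * state[j]
            out[j] = u[1][0] * state[i] + u[1][1] * state[j]
    return out

def apply_cz(state, a, b):
    """Controlled-Z: flip the sign when both qubits are 1."""
    return [amp * (-1 if ((i >> a) & 1 and (i >> b) & 1) else 1)
            for i, amp in enumerate(state)]

def random_1q_gate():
    """A random single-qubit rotation (stand-in for a universal gate set)."""
    t, p = random.uniform(0, math.pi), random.uniform(0, 2 * math.pi)
    return [[math.cos(t), -cmath.exp(1j * p) * math.sin(t)],
            [cmath.exp(-1j * p) * math.sin(t), math.cos(t)]]

for _ in range(8):  # depth-8 random circuit
    for q in range(n):
        state = apply_1q(state, q, random_1q_gate())
    a = random.randrange(n)
    b = (a + 1 + random.randrange(n - 1)) % n  # a different qubit
    state = apply_cz(state, a, b)

probs = [abs(amp) ** 2 for amp in state]           # pU(x) for each bitstring x
samples = random.choices(range(N), weights=probs, k=5)
print([format(x, "03b") for x in samples])
```

Sampling from `probs` is exactly the task: easy here because we hold the full state vector, but #P-hard to compute for circuits of interesting size.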
47. Understanding the probability distribution involved in supremacy experiments
“Speckles” in the 2^n = N dimensional Hilbert space follow the Porter-Thomas distribution P(p) = N e^(-Np).
[Plot: bit strings sorted by probability.]
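The Porter-Thomas prediction P(p) = N e^(-Np) can be checked numerically: the amplitudes of a Haar-random state are i.i.d. Gaussians up to normalization, so the bitstring probabilities have mean 1/N and are exponentially distributed. A minimal sketch, with the dimension N chosen arbitrarily for the demonstration:

```python
import math, random

random.seed(0)
N = 2 ** 10  # Hilbert-space dimension for n = 10 qubits

# Draw a Haar-random state via normalized complex Gaussian amplitudes.
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
probs = [abs(a / norm) ** 2 for a in amps]

mean_p = sum(probs) / N                  # should be exactly 1/N
# For an exponential with mean 1/N, the fraction of probabilities below
# the mean should be 1 - e^-1 ≈ 0.632.
frac_below_mean = sum(p < 1 / N for p in probs) / N
print(mean_p * N, frac_below_mean)
```

The characteristic "speckle" pattern is this wide exponential spread: most bitstrings are less likely than 1/N, while a few are several times more likely.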
48. Experiment to demonstrate quantum supremacy
Formulate a quantum circuit by randomly picking 1-qubit or 2-qubit gates from a universal gate set acting on the global superposition state.
Compare samples {x1, . . . , xm} against samples {x’1, . . . , x’m}: use cross entropy to measure the quality of the samples.
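The cross-entropy measure above can be sketched as follows. This sketch uses the linear cross-entropy benchmark variant (an assumption; the slide does not specify which variant), with a Haar-random state standing in for the ideal circuit distribution: samples are scored by the ideal probabilities pU(xi), giving a fidelity near 1 for a good sampler and near 0 for uniform noise:

```python
import math, random

random.seed(2)
N = 2 ** 10
# Ideal distribution p_U: probabilities of a Haar-random state (assumption).
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
p_U = [abs(a / norm) ** 2 for a in amps]

def linear_xeb(samples, p_U):
    """Linear cross-entropy benchmark: N * mean(p_U(x_i)) - 1."""
    return len(p_U) * sum(p_U[x] for x in samples) / len(samples) - 1

ideal = random.choices(range(N), weights=p_U, k=20000)  # "good" sampler
noise = [random.randrange(N) for _ in range(20000)]     # uniform noise
print(round(linear_xeb(ideal, p_U), 2), round(linear_xeb(noise, p_U), 2))
```

Because a correct sampler preferentially emits the high-probability "speckles", its samples score well against pU, while a noisy device drifts toward the uniform baseline of 0.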
49. Google Quantum AI timeline
2018?: NISQ processors (72 qubits). Quantum supremacy: beyond-classical computing capability demonstrated for a select computational problem. Algorithms developed for:
● certifiable random numbers
● simulation of quantum systems
● quantum optimization
● quantum neural networks
2028?: Error-corrected quantum computer (105 qubits). Growing list of quantum algorithms for a wide variety of applications with proven speedups:
● unstructured search
● factoring
● semidefinite programming
● solving linear systems
61. Google’s AI Principles
1. Be socially beneficial
2. Avoid creating or reinforcing unfair bias
3. Be built and tested for safety
4. Be accountable to people
5. Incorporate privacy design principles
6. Uphold high standards of scientific excellence
7. Be made available for uses that accord with these principles