RECURRENT NEURAL
NETWORKS FOR TEXT
ANALYSIS
Alec Radford
OPEN DATA SCIENCE CONFERENCE
BOSTON 2015
@opendatasci
Recurrent Neural
Networks for text analysis
From idea to practice
ALEC RADFORD
Follow Along
Slides at: http://goo.gl/WLsUWv
How ML
-0.15, 0.2, 0, 1.5 → Numerical, great!
A, B, C, D → Categorical, great!
The cat sat on the mat. → Uhhh…….
How text is dealt with
(ML perspective)
Text → Features (BoW, TF-IDF, LSA, etc.) → Linear Model (SVM, softmax)
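For reference, a minimal sketch of this classic pipeline in scikit-learn; the toy texts, labels, and default vectorizer settings are illustrative assumptions, not the setup used later in the talk.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled data (illustrative only): 1 = positive, 0 = negative.
texts = ["the movie was fantastic", "the plot was useless and dull",
         "a fantastic, funny script", "dull and useless acting"]
labels = [1, 0, 1, 0]

# Text -> features (TF-IDF bag of words) -> linear model (SVM).
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["fantastic acting but a useless plot"]))
```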
Structure is important!
The cat sat on the mat.
sat the on mat cat the
● For certain tasks, structure is essential:
○ Humor
○ Sarcasm
● For certain tasks, n-grams can get you a long way:
○ Sentiment analysis
○ Topic detection
● Specific words can be strong indicators:
○ useless, fantastic (sentiment)
○ hoop, green tea, NASDAQ (topic)
Structure is hard
N-grams are the typical way of preserving some structure.
Bigrams of "the cat sat on the mat": the cat, cat sat, sat on, on the, the mat.
Beyond bi- or tri-grams, occurrences become very rare and dimensionality becomes huge (1-10 million+ features).
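To make the dimensionality point concrete, a small scikit-learn sketch; the toy corpus is an assumption, and real corpora push bigram vocabularies into the millions.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "the dog sat on the log"]

unigrams = CountVectorizer(ngram_range=(1, 1)).fit(docs)
bigrams = CountVectorizer(ngram_range=(1, 2)).fit(docs)

# The feature count grows quickly with n; on large corpora, bigram/trigram
# vocabularies reach millions of mostly-rare features.
print(len(unigrams.vocabulary_), len(bigrams.vocabulary_))
```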
Structure is hard
How text is dealt with
(ML perspective)
Text → Features (BoW, TF-IDF, LSA, etc.) → Linear Model (SVM, softmax)
How text should be dealt with?
Text → RNN → Linear Model (SVM, softmax)
How an RNN works
the cat sat on the mat
(diagram built up over several slides)
● activities (vectors of values)
● projections (activities x weights)
● input to hidden
● hidden to hidden
● hidden to output → cat
● Learned representation of the sequence.
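A minimal NumPy sketch of the forward pass those diagrams describe: an input-to-hidden projection, a hidden-to-hidden projection carrying state across time steps, and a hidden-to-output projection at the end. The dimensions and the tanh/softmax choices are illustrative assumptions, not the exact model from the talk.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

embed_dim, hidden_dim, vocab_size = 3, 8, 6
rng = np.random.RandomState(0)

W_ih = rng.randn(embed_dim, hidden_dim) * 0.1   # input to hidden
W_hh = rng.randn(hidden_dim, hidden_dim) * 0.1  # hidden to hidden
W_ho = rng.randn(hidden_dim, vocab_size) * 0.1  # hidden to output

def rnn_forward(embedded_tokens):
    """embedded_tokens: (seq_len, embed_dim) activities, one row per token."""
    h = np.zeros(hidden_dim)
    for x_t in embedded_tokens:
        # projection = activities x weights; tanh squashes the combined input.
        h = np.tanh(x_t @ W_ih + h @ W_hh)
    # The final hidden state is the learned representation of the sequence.
    return softmax(h @ W_ho)

sequence = rng.randn(7, embed_dim)  # e.g. "the cat sat on the mat ."
print(rnn_forward(sequence).shape)  # (6,) distribution over output classes
```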
From text to RNN input
String input: “The cat sat on the mat.”
Tokenize: the cat sat on the mat .
Assign index: 0 1 2 3 0 4 5
Embedding lookup (one row pulled from the learned matrix per token):
 2.5  0.3 -1.2
 0.2 -3.3  0.7
-4.1  1.6  2.8
 1.1  5.7 -0.2
 2.5  0.3 -1.2
 1.4  0.6 -3.9
-3.8  1.5  0.1
Learned matrix (one 3-dim row per vocabulary index 0-5):
 2.5  0.3 -1.2
 0.2 -3.3  0.7
-4.1  1.6  2.8
 1.1  5.7 -0.2
 1.4  0.6 -3.9
-3.8  1.5  0.1
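The same steps as a NumPy sketch, using the slide's indices and embedding rows; the trivial tokenizer here is an assumption for illustration only.

```python
import numpy as np

text = "The cat sat on the mat."
tokens = text.lower().replace(".", " .").split()    # tokenize
# ['the', 'cat', 'sat', 'on', 'the', 'mat', '.']

vocab = {}                                           # assign indices
indices = [vocab.setdefault(tok, len(vocab)) for tok in tokens]
# [0, 1, 2, 3, 0, 4, 5]

# Learned embedding matrix from the slide: one 3-dim row per vocab entry.
E = np.array([[ 2.5,  0.3, -1.2],
              [ 0.2, -3.3,  0.7],
              [-4.1,  1.6,  2.8],
              [ 1.1,  5.7, -0.2],
              [ 1.4,  0.6, -3.9],
              [-3.8,  1.5,  0.1]])

embedded = E[indices]            # embedding lookup: (7, 3) RNN input
print(indices, embedded.shape)
```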
You can stack them too
the cat sat on the mat
[Diagram: the same unrolled network with an additional stacked recurrent layer - input to hidden, hidden to hidden within each layer, hidden to output → cat]
But aren’t RNNs unstable?
Simple RNNs trained with SGD are unstable/difficult to learn.
But modern RNNs with various tricks blow up much less often!
● Gating Units
● Gradient Clipping
● Steeper gates
● Better initialization
● Better optimizers
● Bigger datasets
Simple Recurrent Unit
[Diagram: a simple recurrent unit unrolled over two time steps - h(t-1) and x(t) combine through element-wise addition and an activation function into h(t), which combines with x(t+1) into h(t+1)]
Legend: "+" element-wise addition; activation function nodes; routes information can propagate along; connections involved in modifying information flow and values.
Gated Recurrent Unit - GRU
[Diagram: one GRU step - input x(t) and previous state h(t-1) feed a reset gate r, an update gate z, and a candidate state (~); the candidate and previous state are blended via z and 1-z to produce h(t)]
Legend: "+" element-wise addition; "⊙" element-wise multiplication; routes information can propagate along; connections involved in modifying information flow and values.
Gated Recurrent Unit - GRU
[Diagram: the same GRU unit unrolled over two time steps - x(t) produces h(t), which with x(t+1) produces h(t+1), each step with its own r, z, candidate (~), and z / 1-z blend]
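A NumPy sketch of a single GRU step matching the r (reset), z (update), and candidate (~) paths in the diagram. The weight shapes are illustrative assumptions, biases are omitted for brevity, and conventions differ on which branch of the blend gets z versus 1-z.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_x, W_h):
    """One GRU step. W_x holds input weights, W_h recurrent weights (keys r, z, h)."""
    r = sigmoid(x_t @ W_x["r"] + h_prev @ W_h["r"])               # reset gate
    z = sigmoid(x_t @ W_x["z"] + h_prev @ W_h["z"])               # update gate
    h_tilde = np.tanh(x_t @ W_x["h"] + (r * h_prev) @ W_h["h"])   # candidate (~)
    # Blend old state and candidate; conventions differ on which side gets z.
    return (1.0 - z) * h_prev + z * h_tilde

embed_dim, hidden_dim = 3, 8
rng = np.random.RandomState(0)
W_x = {k: rng.randn(embed_dim, hidden_dim) * 0.1 for k in "rzh"}
W_h = {k: rng.randn(hidden_dim, hidden_dim) * 0.1 for k in "rzh"}

h = np.zeros(hidden_dim)
for x_t in rng.randn(7, embed_dim):   # step over an embedded sequence
    h = gru_step(x_t, h, W_x, W_h)
print(h.shape)
```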
Gating is important
For sentiment analysis of longer sequences of text (a paragraph or so), a simple RNN has difficulty learning at all, while a gated RNN does so easily.
Which One?
There are two types of gated RNNs:
● Gated Recurrent Units (GRU), by K. Cho, recently introduced and used for machine translation and speech recognition tasks.
● Long Short-Term Memory (LSTM), by S. Hochreiter and J. Schmidhuber, has been around since 1997 and has been used far more. Various modifications to it exist.
Which One?
GRU is simpler, faster, and optimizes quicker (at least on sentiment). Because it only has two gates (compared to four), it is approximately 1.5-1.75x faster in a Theano implementation.
If you have a huge dataset and don't mind waiting, LSTM may be better in the long run due to its greater complexity - especially if you add peephole connections.
Exploding Gradients?
Exploding gradients are a major problem for traditional RNNs trained with SGD, and one of the sources of RNNs' reputation for being hard to train.
In 2012, R. Pascanu and T. Mikolov proposed clipping the norm of the gradient to alleviate this.
Modern optimizers don't seem to have this problem - at least for text classification.
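A sketch of that norm-clipping idea; the threshold of 5.0 is an arbitrary illustrative choice.

```python
import numpy as np

def clip_global_norm(grads, threshold=5.0):
    """Rescale a list of gradient arrays if their combined L2 norm exceeds threshold."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if norm > threshold:
        grads = [g * (threshold / norm) for g in grads]
    return grads

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
print(clip_global_norm(grads))                    # rescaled to norm 5
```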
Better Gating Functions
Interesting paper at a NIPS workshop (Q. Lyu, J. Zhu): make the gates “steeper” so they change more rapidly from “off” to “on”, so the model learns to use them quicker.
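One simple reading of "steeper" is a slope (temperature) factor inside the gate's sigmoid; this sketch illustrates that idea and is an assumption, not the paper's exact formulation.

```python
import numpy as np

def steep_sigmoid(x, slope=3.0):
    """A gate that transitions from 'off' to 'on' more sharply than a standard sigmoid."""
    return 1.0 / (1.0 + np.exp(-slope * x))

x = np.array([-1.0, 0.0, 1.0])
print(steep_sigmoid(x, slope=1.0))  # standard gate
print(steep_sigmoid(x, slope=3.0))  # steeper gate, outputs closer to 0/1
```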
Better Initialization
Andrew Saxe last year showed that initializing weight matrices with random orthogonal matrices works better than random Gaussian (or uniform) matrices.
In addition, Richard Socher (and more recently Quoc Le) have used identity initialization schemes, which work great as well.
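A sketch of both initializations; the SVD-based orthogonal construction is one common recipe and is used here as an assumption, not necessarily the exact variant from the talk.

```python
import numpy as np

def orthogonal_init(shape, rng=np.random):
    """Random orthogonal matrix via SVD of a Gaussian matrix (Saxe et al. style)."""
    a = rng.randn(*shape)
    u, _, v = np.linalg.svd(a, full_matrices=False)
    return u if u.shape == shape else v

def identity_init(size, scale=1.0):
    """Identity recurrent initialization (Socher / Le style)."""
    return scale * np.eye(size)

W_hh = orthogonal_init((128, 128))
print(np.allclose(W_hh @ W_hh.T, np.eye(128)))  # True: rows are orthonormal
print(identity_init(4))
```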
Understanding Optimizers
2D moons dataset
courtesy of scikit-learn
Comparing Optimizers
Adam (D. Kingma) combines the
early optimization speed of
Adagrad (J. Duchi) with the better
later convergence of various other
methods like Adadelta (M. Zeiler)
and RMSprop (T. Tieleman).
Warning: Generalization
performance of Adam seems
slightly worse for smaller datasets.
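A compact sketch of a single Adam update, just to show how the running first- and second-moment estimates drive each step; the hyperparameters are the usual published defaults, used here as assumptions.

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step for parameter array w; returns updated (w, m, v)."""
    m = b1 * m + (1 - b1) * grad              # first-moment (momentum-like) estimate
    v = b2 * v + (1 - b2) * grad ** 2         # second-moment (scale) estimate
    m_hat = m / (1 - b1 ** t)                 # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.zeros(3)
m = np.zeros(3)
v = np.zeros(3)
for t in range(1, 101):
    grad = 2 * (w - np.array([1.0, -2.0, 0.5]))   # gradient of a toy quadratic
    w, m, v = adam_update(w, grad, m, v, t)
print(w)  # drifts toward [1, -2, 0.5] (slowly, at the small default learning rate)
```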
It adds up
Up to 10x more efficient training once you add all the tricks together, compared to a naive implementation - much more stable, rarely diverges.
Around 7.5x faster in wall-clock time, since the various tricks add a bit of computation time.
Too much? - Overfitting
RNNs can overfit very well, as we will see. As they continue to fit the training dataset, their performance on test data will plateau or even worsen.
Keep track of it using a validation set: save the model after each iteration over the training data and pick the earliest one with the best validation performance.
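A sketch of that validation-based checkpointing; `train_one_epoch` and `validation_score` are hypothetical callables standing in for whatever training loop and metric you use.

```python
import copy

def fit_with_early_checkpointing(model, train_one_epoch, validation_score, n_epochs=20):
    """Train for n_epochs, keeping the earliest model with the best validation score."""
    best_score, best_model, best_epoch = float("-inf"), None, None
    for epoch in range(n_epochs):
        train_one_epoch(model)              # one full pass over the training data
        score = validation_score(model)     # held-out validation performance
        if score > best_score:              # strictly better, so ties keep the earliest
            best_score = score
            best_model = copy.deepcopy(model)
            best_epoch = epoch
    return best_model, best_epoch, best_score
```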
The Showdown
Model #1 vs Model #2
● Linear model: bigrams, with grid search on min_df for the vectorizer and the regularization coefficient for the model.
● RNN: 512-dim embedding + 512-dim hidden state + output; using whatever I tried that worked :) Adam, GRU, steeper sigmoid gates, ortho/identity initialization.
Sentiment & Helpfulness
Effect of Dataset Size
● RNNs have poor generalization properties on small datasets.
○ 1K labeled examples: 25-50% worse than linear model…
● RNNs have better generalization properties on large datasets.
○ 1M labeled examples: 0-30% better than linear model.
● Crossovers between 10K and 1M examples
○ Depends on dataset.
The Thing we don’t talk about
For 1 million paragraph-sized text examples to converge:
● Linear model takes 30 minutes on a single CPU core.
● RNN takes 90 minutes on a Titan X.
● RNN takes five days on a single CPU core.
RNN is about 250x slower on CPU than linear model…
This is why we use GPUs
Visualizing
representations of
words learned via
sentiment
t-SNE (L.J.P. van der Maaten). Individual words colored by average sentiment (negative vs. positive).
The model learns to separate negative and positive words - not too surprising.
Other clusters emerge: Quantities of Time, Qualifiers, Product nouns, Punctuation.
Much cooler: the model also begins to learn components of language from only binary sentiment labels.
The library - Passage
● Tiny RNN library built on top of Theano
● https://github.com/IndicoDataSolutions/Passage
● Still alpha - we’re working on it!
● Supports simple, LSTM, and GRU recurrent layers
● Supports multiple recurrent layers
● Supports deep input to and deep output from hidden layers
○ no deep transitions currently
● Supports embedding and onehot input representations
● Can be used for both regression and classification problems
○ Regression needs preprocessing for stability - working on it
● Much more in the pipeline
An example
Sentiment analysis of movie reviews - 25K labeled examples
The code, built up step by step across the slides:
● RNN imports
● preprocessing
● load training data
● tokenize data
● configure model
● make and train model
● load test data
● predict on test data
(A sketch pulling these steps together follows below.)
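Pulling those steps together, a sketch that roughly follows Passage's README-style API of the time. The layer and argument names are recalled from that README and may have drifted, and `load_reviews` is a hypothetical helper, so treat the whole block as an assumption rather than a verified script.

```python
from passage.preprocessing import Tokenizer
from passage.layers import Embedding, GatedRecurrent, Dense
from passage.models import RNN

# load training data / load test data (load_reviews is a hypothetical helper
# returning parallel lists of review texts and 0/1 sentiment labels).
train_text, train_labels = load_reviews("train")
test_text, _ = load_reviews("test")

# preprocessing / tokenize data
tokenizer = Tokenizer()
train_tokens = tokenizer.fit_transform(train_text)
test_tokens = tokenizer.transform(test_text)

# configure model: embedding -> GRU -> sigmoid output
layers = [
    Embedding(size=128, n_features=tokenizer.n_features),
    GatedRecurrent(size=128),
    Dense(size=1, activation='sigmoid'),
]

# make and train model
model = RNN(layers=layers, cost='BinaryCrossEntropy')
model.fit(train_tokens, train_labels)

# predict on test data
predictions = model.predict(test_tokens)
```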
The results
Top 10! - barely :)
Summary
● RNNs look to be a competitive tool in certain situations for text analysis.
● Especially if you have a large 1M+ example dataset
○ A GPU or great patience is essential
● Otherwise it can be difficult to justify over linear models
○ Speed
○ Complexity
○ Poor generalization with small datasets
Contact
alec@indico.io
We’re hiring!
● Data Engineer
● Infrastructure Engineer
● Interested?
o contact@indico.io (or talk-to/email me after pres.)
Questions?