Artificial Intelligence
Language Models

Maryam Khordad
The University of Western Ontario
1
Natural Language Processing
• The idea behind NLP: To give computers
the ability to process human language.
• To get computers to perform useful tasks
involving human language:
– Dialogue systems
• Cleverbot

– Machine translation
– Question answering (Search engines)

2
Natural Language Processing
• Knowledge of language.
• One important piece of information about a language: is a given string a valid member of the language or not?
– Word
– Sentence
– Phrase
– …
3
Language Models
• Formal grammars (e.g. regular, context free)
give a hard “binary” model of the legal
sentences in a language.
• For NLP, a probabilistic model of a
language that gives a probability that a
string is a member of a language is more
useful.
• To specify a correct probability distribution,
the probability of all sentences in a
language must sum to 1.
4
Uses of Language Models
• Speech recognition
– “I ate a cherry” is a more likely sentence than “Eye eight
uh Jerry”

• OCR & Handwriting recognition
– More probable sentences are more likely correct readings.

• Machine translation
– More likely sentences are probably better translations.

• Generation
– More likely sentences are probably better NL generations.

• Context-sensitive spelling correction
– “Their are problems wit this sentence.”
5
Completion Prediction
• A language model also supports predicting
the completion of a sentence.
– Please turn off your cell _____
– Your program does not ______

• Predictive text input systems can guess what
you are typing and give choices on how to
complete it.

6
N-Gram Word Models
• We can have n-gram models over sequences of words,
characters, syllables or other units.
• Estimate probability of each word given prior context.
– P(phone | Please turn off your cell)

• An N-gram model uses only N−1 words of prior context.
– Unigram: P(phone)
– Bigram: P(phone | cell)
– Trigram: P(phone | your cell)

• The Markov assumption is the presumption that the future behavior of a dynamical system depends only on its recent history. In particular, in a kth-order Markov model, the next state depends only on the k most recent states; therefore an N-gram model is an (N−1)-order Markov model.
7
N-Gram Model Formulas
• Word sequences: $w_1^n = w_1 \ldots w_n$

• Chain rule of probability:
$P(w_1^n) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_1^2)\cdots P(w_n \mid w_1^{n-1}) = \prod_{k=1}^{n} P(w_k \mid w_1^{k-1})$

• Bigram approximation:
$P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})$

• N-gram approximation:
$P(w_1^n) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-N+1}^{k-1})$
8
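For a concrete (toy) illustration, not from the slides: under the bigram approximation, a three-word sequence factors as

$P(\text{please turn off}) \approx P(\text{please})\,P(\text{turn} \mid \text{please})\,P(\text{off} \mid \text{turn})$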
Estimating Probabilities
• N-gram conditional probabilities can be estimated
from raw text based on the relative frequency of
word sequences.
Bigram: $P(w_n \mid w_{n-1}) = \dfrac{C(w_{n-1} w_n)}{C(w_{n-1})}$

N-gram: $P(w_n \mid w_{n-N+1}^{n-1}) = \dfrac{C(w_{n-N+1}^{n-1}\, w_n)}{C(w_{n-N+1}^{n-1})}$
• To have a consistent probabilistic model, append a
unique start (<s>) and end (</s>) symbol to every
sentence and treat these as additional words.
9
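A minimal Python sketch of this relative-frequency estimate for bigrams, with <s> and </s> appended to each sentence. The toy corpus and helper name are illustrative assumptions, not from the slides:

```python
from collections import Counter

def bigram_probs(sentences):
    """Estimate P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1}) from tokenized sentences."""
    context_counts = Counter()
    bigram_counts = Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens + ["</s>"]
        context_counts.update(padded[:-1])                # counts of contexts w_{n-1}
        bigram_counts.update(zip(padded, padded[1:]))     # counts of pairs (w_{n-1}, w_n)
    return {(prev, w): c / context_counts[prev]
            for (prev, w), c in bigram_counts.items()}

corpus = [["please", "turn", "off", "your", "cell", "phone"],
          ["please", "turn", "in", "your", "homework"]]
probs = bigram_probs(corpus)
print(probs[("your", "cell")])   # C(your cell) / C(your) = 1 / 2 = 0.5
print(probs[("<s>", "please")])  # 2 / 2 = 1.0
```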
N-gram character models
• One of the simplest language models: the character n-gram model $P(c_{1:N})$.
• Language identification: given a text, determine which language it is written in.
• Build a trigram character model of each candidate language $\ell$: $P(c_i \mid c_{i-2:i-1}, \ell)$
• We want to find the most probable language given the text:
$\ell^* = \operatorname*{argmax}_{\ell} P(\ell \mid c_{1:N}) = \operatorname*{argmax}_{\ell} P(\ell)\, P(c_{1:N} \mid \ell) = \operatorname*{argmax}_{\ell} P(\ell) \prod_{i=1}^{N} P(c_i \mid c_{i-2:i-1}, \ell)$
10
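A rough Python sketch of this language-identification recipe. The tiny training strings, the two-space padding, and the add-one smoothing are my own simplifying assumptions; a real system would train on large corpora per language:

```python
import math
from collections import Counter

def char_trigram_model(text, alpha=1.0):
    """Return a function giving smoothed log P(c_i | c_{i-2} c_{i-1}) for one language."""
    padded = "  " + text                                   # pad so the first characters have a context
    trigrams = Counter(padded[i:i + 3] for i in range(len(padded) - 2))
    contexts = Counter(padded[i:i + 2] for i in range(len(padded) - 2))
    vocab = len(set(padded)) + 1                           # +1 leaves room for unseen characters
    def logprob(context, c):
        return math.log((trigrams[context + c] + alpha) /
                        (contexts[context] + alpha * vocab))
    return logprob

models = {"en": char_trigram_model("the cat sat on the mat and the dog ate the hat"),
          "es": char_trigram_model("el gato se sento en la alfombra y el perro comio")}

def identify(text):
    """argmax over languages of P(l) * prod_i P(c_i | c_{i-2:i-1}, l), with a uniform prior P(l)."""
    padded = "  " + text
    scores = {lang: sum(lp(padded[i:i + 2], padded[i + 2]) for i in range(len(text)))
              for lang, lp in models.items()}
    return max(scores, key=scores.get)

print(identify("the rat sat"))   # expected: 'en'
```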
Train and Test Corpora
• We call a body of text a corpus (plural corpora).
• A language model must be trained on a large
corpus of text to estimate good parameter values.
• Model can be evaluated based on its ability to
predict a high probability for a disjoint (held-out)
test corpus (testing on the training corpus would
give an optimistically biased estimate).
• Ideally, the training (and test) corpus should be
representative of the actual application data.

11
Unknown Words
• How to handle words in the test corpus that
did not occur in the training data, i.e. out of
vocabulary (OOV) words?
• Train a model that includes an explicit
symbol for an unknown word (<UNK>).
– Choose a vocabulary in advance and replace
other words in the training corpus with
<UNK>.
– Replace the first occurrence of each word in the
training data with <UNK>.
12
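A small sketch of the second option (replace each word's first occurrence with <UNK>); the function name is mine:

```python
def replace_first_occurrences(tokens):
    """Replace the first occurrence of every word type in the training data with <UNK>."""
    seen = set()
    out = []
    for w in tokens:
        out.append(w if w in seen else "<UNK>")
        seen.add(w)
    return out

print(replace_first_occurrences(["the", "cat", "saw", "the", "dog"]))
# ['<UNK>', '<UNK>', '<UNK>', 'the', '<UNK>']
```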
Perplexity
• Measure of how well a model “fits” the test data.
• Uses the probability that the model assigns to the
test corpus.
• Normalizes for the number of words in the test
corpus and takes the inverse.
$\text{Perplexity}(W_1^N) = \sqrt[N]{\dfrac{1}{P(w_1 w_2 \ldots w_N)}} = P(w_1 w_2 \ldots w_N)^{-1/N}$
• Measures the weighted average branching factor
in predicting the next word (lower is better).

13
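A minimal sketch of the computation, working in log space to avoid underflow (the per-word log probabilities would come from whatever model is being evaluated):

```python
import math

def perplexity(log_probs):
    """Perplexity(W_1^N) = P(w_1 ... w_N)^(-1/N), computed from per-word log probabilities."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# If a model assigns probability 0.1 to each of four words, the perplexity is 10:
print(perplexity([math.log(0.1)] * 4))   # ≈ 10.0
```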
Sample Perplexity Evaluation
• Models trained on 38 million words from
the Wall Street Journal (WSJ) using a
19,979 word vocabulary.
• Evaluate on a disjoint set of 1.5 million
WSJ words.
Perplexity: Unigram 962, Bigram 170, Trigram 109
14
Smoothing
• Since there are a combinatorial number of possible
strings, many rare (but not impossible)
combinations never occur in training, so the
system incorrectly assigns zero to many
parameters.
• If a new combination occurs during testing, it is
given a probability of zero and the entire sequence
gets a probability of zero (i.e. infinite perplexity).
• Example:
– “--th”: very common
– “--ht”: uncommon, but what if we see this sentence:
“This program issues an http request.”
15
Smoothing
• In practice, parameters are smoothed to reassign
some probability mass to unseen events.
– Adding probability mass to unseen events requires
removing it from seen ones (discounting) in order to
maintain a joint distribution that sums to 1.

16
Laplace (Add-One) Smoothing
• The simplest type of smoothing.
• In the absence of further information, if a random Boolean variable X has been false in all n observations so far, then the estimate for P(X = true) should be 1/(n + 2).
• Laplace's reasoning: pretend that two additional trials were observed, one true and one false.
• Performs relatively poorly.

17
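A sketch of add-one smoothing applied to bigram estimates (the count dictionaries and vocabulary size are assumed inputs): every word in the vocabulary gets one imagined extra observation after each context, so no bigram is left with probability zero.

```python
def laplace_bigram_prob(bigram_counts, context_counts, vocab_size, prev, word):
    """Add-one estimate: (C(prev word) + 1) / (C(prev) + V)."""
    return (bigram_counts.get((prev, word), 0) + 1) / (context_counts.get(prev, 0) + vocab_size)

# Example with counts from a toy corpus: C(your cell) = 1, C(your) = 2, V = 10
print(laplace_bigram_prob({("your", "cell"): 1}, {"your": 2}, 10, "your", "cell"))   # 2/12 ≈ 0.167
print(laplace_bigram_prob({("your", "cell"): 1}, {"your": 2}, 10, "your", "dog"))    # 1/12 ≈ 0.083
```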
Advanced Smoothing
• Many advanced techniques have been
developed to improve smoothing for
language models.
– Interpolation
– Backoff

18
Model Combination
• As N increases, the power (expressiveness)
of an N-gram model increases, but the
ability to estimate accurate parameters from
sparse data decreases (i.e. the smoothing
problem gets worse).
• A general approach is to combine the results
of multiple N-gram models of increasing
complexity (i.e. increasing N).

19
Interpolation
• Linearly combine estimates of N-gram models of
increasing order.
Interpolated trigram model:
$\hat{P}(w_n \mid w_{n-2}, w_{n-1}) = \lambda_3 P(w_n \mid w_{n-2}, w_{n-1}) + \lambda_2 P(w_n \mid w_{n-1}) + \lambda_1 P(w_n)$
where $\sum_i \lambda_i = 1$.
• The $\lambda_i$ can be fixed or can be trained.
• The $\lambda_i$ can depend on the counts: if we have a high count of trigrams, we weigh them relatively more; otherwise we put more weight on the bigram and unigram estimates.
20
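A sketch of the interpolated model in Python (the component models and fixed λ values are assumed inputs; in practice the λ's would be tuned on held-out data):

```python
def interpolated_trigram(p_tri, p_bi, p_uni, lambdas=(0.6, 0.3, 0.1)):
    """Build P_hat(w | u, v) = l3*P(w | u, v) + l2*P(w | v) + l1*P(w), with the lambdas summing to 1."""
    l3, l2, l1 = lambdas
    def p_hat(w, u, v):
        return l3 * p_tri(w, u, v) + l2 * p_bi(w, v) + l1 * p_uni(w)
    return p_hat

# Usage with stand-in component models (illustrative values only):
p_hat = interpolated_trigram(lambda w, u, v: 0.5, lambda w, v: 0.2, lambda w: 0.01)
print(p_hat("phone", "your", "cell"))   # 0.6*0.5 + 0.3*0.2 + 0.1*0.01 = 0.361
```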
Backoff
• Only use a lower-order model when data for the higher-order model is unavailable (i.e. its count is zero).
• Recursively back-off to weaker models until data
is available.

21
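A simplified sketch of the idea, assuming a single `counts` dictionary keyed by n-gram tuples of every order; note it omits the discounting that a proper Katz back-off would apply, so the result is not a true probability distribution:

```python
def backoff_prob(words, counts):
    """P(w | context) from the longest context whose extension has a nonzero count.

    `words` is a tuple like ("your", "cell", "phone"); back off by dropping the
    leftmost context word whenever the higher-order count is zero.
    """
    context, w = tuple(words[:-1]), words[-1]
    while context:
        if counts.get(context + (w,), 0) > 0:
            return counts[context + (w,)] / counts[context]
        context = context[1:]                      # back off to the next shorter context
    total = sum(c for ngram, c in counts.items() if len(ngram) == 1)
    return counts.get((w,), 0) / total             # unigram estimate as the last resort

counts = {("your",): 2, ("cell",): 1, ("phone",): 1,
          ("your", "cell"): 1, ("cell", "phone"): 1,
          ("your", "cell", "phone"): 1}
print(backoff_prob(("your", "cell", "phone"), counts))   # trigram seen: 1/1 = 1.0
print(backoff_prob(("your", "cell", "chip"), counts))    # backs off to unigram: 0/4 = 0.0
```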
Summary
• Language models assign a probability that a
sentence is a legal string in a language.
• They are useful as a component of many NLP
systems, such as ASR, OCR, and MT.
• Simple N-gram models are easy to train on
corpora and can provide useful estimates of
sentence likelihood.
• N-gram models give inaccurate parameter estimates when trained on sparse data.
• Smoothing techniques adjust parameter estimates
to account for unseen (but not impossible) events.
22
Questions

23
About this slide presentation
• These slides were developed using Raymond J. Mooney's slides from the University of Texas at Austin (http://www.cs.utexas.edu/~mooney/cs388/).

24

Editor's notes

  1. What distinguishes language processing applications from other data processing systems is their use of knowledge of language.