5. Why do we need them?
– Get things done
• E.g. set up an alarm/reminder, take a note
– Easy access to structured data, services and apps
• E.g. find docs/photos/notes
– Assist with your daily routine
• E.g. check your account balance
– Be more productive in managing your work and personal life
6. Mobile Service 行動客服
• Allows customers to conduct a range of financial transactions remotely on a mobile device, usually through an app (e.g. Richart)
• Reduces the need to visit a branch → cost reduction
7. Mobile Service 行動客服
• Users can complete the specific tasks predefined by the app
• Limitations
– The app's interaction design may not be user-friendly
– What counts as a good design differs across users
– Learning how to use an app takes time
8. Why Natural Language?
• Global Digital Statistics (January 2015)
– Global Population: 7.21B
– Active Internet Users: 3.01B
– Active Social Media Accounts: 2.08B
– Active Unique Mobile Users: 3.65B
As devices evolve, the more natural and convenient input modality is speech.
10. Dialogue System 對話系統
• Dialogue systems are intelligent agents that are able to help users finish tasks more efficiently via spoken interactions.
• Dialogue systems are being incorporated into various devices (smartphones, smart TVs, in-car navigation systems, etc.).
Good dialogue systems assist users to access information conveniently and finish tasks efficiently.
JARVIS – Iron Man's Personal Assistant; Baymax – Personal Healthcare Companion
11. App Bot
• A bot is responsible for a "single" domain, similar to an app
• Seamless and automatic information transfer across domains reduces duplicate information and interaction
12. Outline
Introduction
Intelligent Assistants 智慧助理
Mobile Service 行動客服
Dialogue System/Bot 對話系統/機器人
Framework
Language Understanding 語意理解
Dialogue Management 對話管理
Output Generation 輸出生成
Recent Trend
Industrial Trend and Challenge
Deep Learning Basics
Deep Learning for Dialogues
Conclusion
13. System Framework
Input: Speech Signal → Speech Recognition → Recognized Text, or direct Text Input
e.g. 我要申辦下下周期間的國外上網方案 (I want to apply for an international data plan for the week after next)
↓ Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
→ Semantic Frame: apply_international_dataplan, period=下下周 (the week after next)
↓ Dialogue Management (DM)
• Dialogue State Tracking
• System Action/Policy Decision
→ System Action/Policy: request_country
↓ Output Generation
→ Text response: 你要去哪一國 (Which country are you going to?); Screen display: country?
18. Slot Filling
• Requires a predefined schema
User: 我要繳交上個月的手機費帳單 (I want to pay last month's mobile phone bill)
Semantic Frame: FEE_PAYMENT, period="上個月" (last month)
手機 DB (mobile phone DB):
Number   Period  Amount
0933xxx  12月    800
0928xxx  11月    560
:        :       :
Grounded Frame: FEE_PAYMENT, period="12月" (December), amount="800"
Machine learning for information extraction
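The slot-filling step above can be sketched in a few lines. This is a minimal rule-based illustration, not the machine-learned extractor the slide refers to; the DB rows come from the slide, while the function name, the keyword rules, and the month-arithmetic convention are assumptions.

```python
# Toy slot filling under a predefined schema: detect the FEE_PAYMENT
# intent, resolve the relative period ("上個月" / last month) to an
# absolute month, and ground it against a mock billing DB.
PHONE_DB = [
    {"number": "0933xxx", "period": "12", "amount": 800},  # 12月
    {"number": "0928xxx", "period": "11", "amount": 560},  # 11月
]

def fill_slots(utterance, current_month=1):
    """Return a semantic frame for a bill-payment utterance."""
    frame = {}
    if "帳單" in utterance or "bill" in utterance:
        frame["intent"] = "FEE_PAYMENT"
        if "上個月" in utterance or "last month" in utterance:
            # resolve the relative expression to an absolute month
            month = str((current_month - 2) % 12 + 1)
            frame["period"] = month
            for row in PHONE_DB:        # ground against the DB
                if row["period"] == month:
                    frame["amount"] = row["amount"]
    return frame

frame = fill_slots("我要繳交上個月的手機費帳單", current_month=1)
```

With January as the current month, "last month" resolves to December (period "12"), which the DB grounds to amount 800, matching the slide's example.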
19. System Framework
(The same framework pipeline as slide 13, repeated.)
20. State Tracking
• Requires hand-crafted states
• State graph over the slots filled so far (diagram): NULL → {amount}, {period}, {number} → {amount, period}, {period, number}, {amount, card} → all
User: 我要繳交上個月的手機費帳單 (I want to pay last month's mobile phone bill)
User: 要0933那個號碼 (the one with the 0933 number)
22. State Tracking
• Requires hand-crafted states
• State graph over the slots filled so far (diagram): NULL → {amount}, {period}, {number} → {amount, period}, {period, number}, {amount, number} → all
User: 我要繳交x個月的手機費帳單 (I want to pay the phone bill for month x)
FEE_PAYMENT, period="這個月" (this month)?
FEE_PAYMENT, period="上個月" (last month)?
FEE_PAYMENT, period=?
(An ambiguous utterance may match several states/frames.)
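A hand-crafted state tracker of the kind these slides sketch can be reduced to bookkeeping over which slots have been filled. The class and update rule below are an illustrative assumption, with slot names taken from the slide.

```python
# Minimal hand-crafted dialogue state tracking: the state is the set
# of slots filled so far; each user turn merges new slot values in.
REQUIRED_SLOTS = {"amount", "period", "number"}

class StateTracker:
    def __init__(self):
        self.state = {}                    # starts in the NULL state

    def update(self, frame):
        """Merge the slots of a new semantic frame into the state."""
        self.state.update(frame)
        return set(self.state) & REQUIRED_SLOTS

    def missing(self):
        """Slots the system still needs to request."""
        return REQUIRED_SLOTS - set(self.state)

tracker = StateTracker()
tracker.update({"period": "12月", "amount": 800})  # 我要繳交上個月的手機費帳單
tracker.update({"number": "0933xxx"})              # 要0933那個號碼
```

After the two turns from the slide, the tracker has reached the "all" state and no slot remains to be requested.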
30. Challenge
• Predefined semantic schema
Chen et al., “Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding,” in ACL-IJCNLP, 2015.
• Data without annotations
Chen et al., “Zero-Shot Learning of Intent Embeddings for Expansion by Convolutional Deep Structured Semantic Models,” in ICASSP, 2016.
• Semantic concept interpretation
Chen et al., “Deriving Local Relational Surface Forms from Dependency-Based Entity Embeddings for Unsupervised Spoken Language Understanding,” in SLT, 2014.
• Predefined dialogue states
Chen et al., "End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding," in Interspeech, 2016.
• Error propagation
Hakkani-Tur et al., “Multi-Domain Joint Semantic Frame Parsing using Bi-directional RNN-LSTM,” in Interspeech, 2016.
• Cross-domain intention/bot hierarchy
Sun et al., “An Intelligent Assistant for High-Level Task Understanding,” in IUI, 2016.
Sun et al., “AppDialogue: Multi-App Dialogues for Intelligent Assistants,” in LREC, 2016.
Chen et al., “Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding,” in ICMI, 2016.
• Cross-domain information transferring
Kim et al., “New Transfer Learning Techniques For Disparate Label Sets,” in ACL-IJCNLP, 2015.
(Examples from the diagram: FIND_RESTAURANT with rating="good": does "good" mean rating 5 or 4? Bot hierarchy: Trip Planning → Travel → {Hotel, Restaurant, Flight}.)
32. Learning ≈ Looking for a Function
• Speech Recognition: f(speech signal) = "你好" (hello)
• Handwritten Recognition: f(handwritten image) = "2"
• Weather forecast: f(Thursday) = "Saturday"
• Play video games: f(game screen) = "move left"
33. Machine Learning Framework
• Model: hypothesis function set {f1, f2, ...}
• Training data: {(x1, ŷ1), (x2, ŷ2), ...} (function input x, function output ŷ)
• Training: pick the best function f* given the observed data
• Testing: predict the label for testing data (x, ?) using the learned function: y = f*(x)
Example: x = "It claims too much.", ŷ = − (negative)
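The training/testing recipe above can be shown in miniature: a hypothesis set of candidate functions, training picks the one with the lowest error on labelled data, testing applies the winner to new input. The two candidate functions and the data are toy assumptions.

```python
# Training = pick the best function f* from a hypothesis set,
# testing = apply f* to unseen input.
training_data = [(1, 2), (2, 4), (3, 6)]      # pairs (x, y_hat)

hypotheses = {
    "f1": lambda x: x + 1,
    "f2": lambda x: 2 * x,
}

def training_error(f):
    """Squared error of a candidate function on the observed data."""
    return sum((f(x) - y) ** 2 for x, y in training_data)

# Training: pick the best function f*
best_name, f_star = min(hypotheses.items(),
                        key=lambda kv: training_error(kv[1]))

# Testing: predict the label for unseen input x = 5
prediction = f_star(5)
```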
34. Target Function
• Classification Task
– x: input object to be classified → an N-dim vector
– y: class/label → an M-dim vector
Assume both x and y can be represented as fixed-size vectors: f: R^N → R^M, y = f(x)
35. Vector Representation Ex.
• Handwriting Digit Classification (f: R^N → R^M)
– x: image → each pixel corresponds to an element in the vector (1 for ink, 0 otherwise); a 16 x 16 image gives 16 x 16 = 256 dimensions
– y: class/label → one element per class ("1" or not, "2" or not, "3" or not, ...); 10 dimensions for digit recognition
– E.g. "1" → [1, 0, 0, ...], "2" → [0, 1, 0, ...]
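The vector representation above is easy to make concrete: flatten the binary image into one element per pixel, and one-hot encode the label. The 3 x 3 "image" below is a scaled-down stand-in for the slide's 16 x 16 example.

```python
# Image -> flat binary vector; digit label -> one-hot vector.
def flatten_image(image):
    """Each pixel becomes one vector element (1 for ink, 0 otherwise)."""
    return [pixel for row in image for pixel in row]

def one_hot_label(digit, num_classes=10):
    """Digit k -> 10-dim vector with a single 1 in position k."""
    vec = [0] * num_classes
    vec[digit] = 1
    return vec

x = flatten_image([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]])   # a tiny "1"
y = one_hot_label(1)
```

For the slide's real case the only change is the input size: a 16 x 16 image flattens to a 256-dim x, while y stays 10-dim.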
36. Vector Representation Ex.
• Sentiment Analysis (f: R^N → R^M)
– x: word → each element in the vector corresponds to a word in the vocabulary (1 indicates the word, 0 otherwise); dimensions = size of vocab (e.g. "love")
– y: class/label → one element per class ("+" or not, "−" or not, "?" or not); 3 dimensions (positive, negative, neutral)
– E.g. "+" → [1, 0, 0], "−" → [0, 1, 0]
40. A Layer of Neurons
• Handwriting digit classification (f: R^N → R^M)
• A layer of neurons can handle multiple possible outputs; the predicted class is the one with the maximum output
– Inputs x1, x2, ..., xN (plus a bias term) feed 10 neurons producing y1, y2, ..., y10 ("1" or not, "2" or not, ...): 10 neurons for 10 classes
– Which one is the max?
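A single layer with one neuron per class can be written out directly: each neuron computes a weighted sum plus bias through a sigmoid, and the prediction is the neuron with the maximum activation. The weights and the three-class setup below are toy values, not trained parameters.

```python
import math

def neuron_layer(x, weights, biases):
    """One output per neuron: y_k = sigmoid(w_k . x + b_k)."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(w * xi for w, xi in zip(w_k, x)) + b_k)
            for w_k, b_k in zip(weights, biases)]

x = [1.0, 0.0]
weights = [[2.0, -1.0],    # neuron for class "1"
           [-1.0, 2.0],    # neuron for class "2"
           [0.5, 0.5]]     # neuron for class "3"
biases = [0.0, 0.0, 0.0]

y = neuron_layer(x, weights, biases)
predicted = max(range(len(y)), key=lambda k: y[k])  # which one is the max?
```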
41. Neural Networks – Multi-Layer Perceptron (MLP)
(Diagram: inputs x1, x2 and a bias unit feed pre-activations z1, z2 of hidden units a1, a2; the hidden layer and a bias unit feed z, producing the output y.)
42. Expression of MLP
• Continuous function w/ 2 layers
– Combine two opposite-facing threshold functions to make a ridge
• Continuous function w/ 3 layers
– Combine two perpendicular ridges to make a bump
– Add bumps of various sizes and locations to fit any surface
Multiple layers enrich the model's expressiveness, so that the model can approximate more complex functions.
http://aima.eecs.berkeley.edu/slides-pdf/chapter20b.pdf
43. Deep Neural Networks (DNN)
• Fully connected feedforward network, f: R^N → R^M
– Input vector x (x1, x2, ..., xN) → Layer 1 → Layer 2 → ... → Layer L → output vector y (y1, y2, ..., yM)
• Deep NN: multiple hidden layers
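The fully connected feedforward pass just described is a chain of the single-layer computation from slide 40. Layer sizes and weights below are illustrative.

```python
import math

def dense(x, W, b):
    """One fully connected layer with a sigmoid nonlinearity."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bk)))
            for row, bk in zip(W, b)]

def dnn(x, layers):
    """f: R^N -> R^M, applying the layers in sequence."""
    for W, b in layers:
        x = dense(x, W, b)
    return x

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # layer 1: R^2 -> R^2
    ([[1.0, 1.0]], [0.0]),                    # layer L: R^2 -> R^1
]
y = dnn([1.0, 2.0], layers)
```

Adding more `(W, b)` pairs to `layers` is exactly what "multiple hidden layers" means on the slide.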
46. RNN for SLU
• Joint multi-domain intent prediction and slot filling
– The two tasks can mutually enhance each other
• The RNN reads the utterance word by word and tags each word with a slot label, then predicts the intent at the EOS position:
– taiwanese → B-type, food → O, please → O, EOS → FIND_REST (intent)
• The whole output is a semantic frame sequence: slot tagging + intent prediction
Hakkani-Tur et al., "Multi-Domain Joint Semantic Frame Parsing using Bi-directional RNN-LSTM," in Interspeech, 2016.
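Turning a predicted BIO tag sequence plus intent into a semantic frame is mechanical; a decoder like the sketch below (the function name and frame layout are assumptions) shows what the tags in the "taiwanese food please" example mean.

```python
# Decode a BIO slot-tag sequence into a semantic frame:
# B-xxx opens slot xxx, I-xxx extends it, O closes any open slot.
def bio_to_frame(tokens, tags, intent):
    slots, current, value = {}, None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                slots[current] = " ".join(value)
            current, value = tag[2:], [token]
        elif tag.startswith("I-") and current == tag[2:]:
            value.append(token)
        else:                          # "O" closes any open slot
            if current:
                slots[current] = " ".join(value)
            current, value = None, []
    if current:
        slots[current] = " ".join(value)
    return {"intent": intent, "slots": slots}

frame = bio_to_frame(["taiwanese", "food", "please"],
                     ["B-type", "O", "O"],
                     "FIND_REST")
```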
47. Contextual SLU (Chen et al., 2016)
• Domain identification, intent prediction and slot filling over a multi-turn dialogue
– D: communication; I: send_email
– "just sent email to bob about fishing this weekend" → O O O O B-contact_name O B-subject I-subject I-subject → send_email(contact_name="bob", subject="fishing this weekend")
– U1: "are we going to fish this weekend" → B-message I-message I-message I-message I-message I-message I-message → send_email(message="are we going to fish this weekend")
– U2: "send email to bob" → O O O B-contact_name → send_email(contact_name="bob")
48. Contextual SLU (Chen et al., 2016)
• Idea: additionally incorporate contextual knowledge during slot tagging → track dialogue states in a latent way
1. Sentence encoding: RNNmem encodes the history utterances {xi} into memory representations {mi}; a contextual sentence encoder (RNNin) encodes the current utterance c into u
2. Knowledge attention: the inner product between u and each mi gives the knowledge attention distribution pi
3. Knowledge encoding: the weighted sum of the memories yields h, which is combined (via Wkg) with the current-utterance encoding into the knowledge encoding representation o; the RNN tagger (with weights U, V, W, M) then produces the slot tagging sequence y over words wt
Chen et al., "End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding," in Interspeech, 2016.
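The knowledge-attention step (2-3 above) is just inner products, a softmax, and a weighted sum; the toy 3-dim encodings below stand in for the RNN-produced vectors u and {mi}.

```python
import math

def attend(u, memories):
    """Inner-product attention over memory vectors, then weighted sum."""
    scores = [sum(ui * mi for ui, mi in zip(u, m)) for m in memories]
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    p = [e / total for e in exp]                    # attention distribution p_i
    h = [sum(p_i * m[d] for p_i, m in zip(p, memories))
         for d in range(len(u))]                    # weighted sum h
    return p, h

u = [1.0, 0.0, 0.0]                 # current-utterance encoding
memories = [[1.0, 0.0, 0.0],        # history utterance encodings m_i
            [0.0, 1.0, 0.0]]
p, h = attend(u, memories)
```

The history utterance more similar to the current one gets the larger attention weight, which is how the model carries knowledge across turns.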
50. InfoBot: End-to-End Dialogue System with Supervised & Reinforcement Learning
User: Find me the Bill Murray's movie.
KB-InfoBot: When was it released?
User: I think it came out in 1993.
KB-InfoBot: Groundhog Day is a Bill Murray movie which came out in 1993.
Tracked goal: Movie=?; Actor=Bill Murray; Release Year=1993
Knowledge Base (head, relation, tail):
(Groundhog Day, actor, Bill Murray)
(Groundhog Day, release year, 1993)
(Australia, actor, Nicole Kidman)
(Mad Max: Fury Road, release year, 2015)
Dhingra et al., "End-to-End Reinforcement Learning of Dialogue Agents for Information Access," arXiv:1609.00777.
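The KB lookup behind this exchange can be sketched with the triples from the slide. Note this is a deterministic sketch; the actual KB-InfoBot performs this lookup softly, maintaining a posterior over entities so the whole system stays differentiable.

```python
# Query a (head, relation, tail) knowledge base: find entities
# consistent with all (relation, tail) constraints gathered so far.
KB = [
    ("Groundhog Day", "actor", "Bill Murray"),
    ("Groundhog Day", "release year", "1993"),
    ("Australia", "actor", "Nicole Kidman"),
    ("Mad Max: Fury Road", "release year", "2015"),
]

def query(constraints):
    """Return heads whose triples match every (relation, tail) pair."""
    heads = {h for h, _, _ in KB}
    for relation, tail in constraints.items():
        heads &= {h for h, r, t in KB if r == relation and t == tail}
    return heads

movies = query({"actor": "Bill Murray", "release year": "1993"})
```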
51. E2E RL Task-Completion Bot
• End-to-end neural dialogue system
– Text input: "Are there any action movies to see this weekend?"
– Language Understanding (LU): an RNN tags each word wi with a slot label (B-type, O, ..., <slot>) and predicts the <intent> at EOS, over time steps t-2, t-1, t
– Semantic frame: request_movie, genre=action, date=this weekend
– Dialogue Management (DM): decides the system action/policy, e.g. request_location
– User dialogue action: Inform(location=Bellevue)
– Natural Language Generation (NLG): generates the response word by word (w0 w1 w2 ... EOS)
• User simulator (Li et al., 2016): a user goal plus user agenda modeling drives the simulated user
52. E2E RL Task-Completion Bot
• Simulated User
– Generates interactions based on a predefined user goal
– The agent automatically learns its strategy by training on the simulated data
User goal: Two tickets for “the witch” tomorrow 9:30 PM at regal meridian 16, Seattle.
RULE BASED SYSTEM
User: Can I get 2 tickets for the witch?
Agent: Which theater do you want?
User: Regal meridian 16
Agent: What time would you like to see it?
User: 9:30 pm
Agent: What date would you like to see it?
User: Tomorrow!
Agent: How many people are you?
User: Just two adults.
Agent: What is your location?
User: Seattle!
Agent: What kind of video format do you like?
User: Oh, any kind is okay.
Agent: Do you have any price constraint?
User: No, I don’t care.
Agent: Great - I was able to purchase 2 tickets for you to
see the witch tomorrow at regal meridian 16 theater in
Seattle at 9:30 pm.
User: Thank you.
Agent: Thank you!
REINFORCEMENT LEARNING SYSTEM
User: Can I get 2 tickets for the witch?
Agent: What date would you like to see it?
User: Tomorrow
Agent: Which theater do you prefer?
User: Regal meridian 16
Agent: What is your location?
User: Seattle
Agent: What time would you like to see it?
User: 9:30 pm.
Agent: Great - I was able to purchase 2 tickets for you
to see the witch tomorrow at regal meridian 16
theater in Seattle at 9:30 pm.
User: Thanks.
Agent: Thanks!
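The rule-based side of this comparison can be sketched as a request-the-next-missing-slot loop against a goal-driven simulated user. The slot names come from the example dialogue; the loop, slot order, and function names are illustrative assumptions (a real agenda-based simulator is richer than this).

```python
# Rule-based agent + goal-driven simulated user: the agent requests
# whichever required slot is still missing; the user answers from a
# predefined user goal.
USER_GOAL = {"moviename": "the witch", "date": "tomorrow",
             "starttime": "9:30 pm", "theater": "regal meridian 16",
             "city": "Seattle", "numberofpeople": "2"}

REQUIRED = ["moviename", "theater", "starttime", "date",
            "numberofpeople", "city"]

def run_dialogue():
    state, turns = {}, 0
    while any(s not in state for s in REQUIRED):
        slot = next(s for s in REQUIRED if s not in state)  # agent: request(slot)
        state[slot] = USER_GOAL[slot]                       # user: inform(slot=value)
        turns += 1
    return state, turns

state, turns = run_dialogue()
```

A fixed question order like `REQUIRED` is exactly what makes the rule-based dialogue longer than necessary; the RL agent learns which requests to make and in what order.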
53. Outline
Introduction
Intelligent Assistants 智慧助理
Mobile Service 行動客服
Dialogue System/Bot 對話系統/機器人
Framework
Language Understanding 語意理解
Dialogue Management 對話管理
Output Generation 輸出生成
Recent Trend
Industrial Trend and Challenge
Deep Learning Basics
Deep Learning for Dialogues
Conclusion
54. Conclusion
• Conversational bots can help users manage information access and finish tasks via spoken interactions
– More natural
– More convenient
– More efficient
– User-centered
• Future Vision
– Not only single-turn requests but also multi-turn conversations
– Not only simple transactions but also complicated ones
• Dialogues can span multiple domains (e.g. check the remaining data quota and then apply for more data)
• NN-Based Dialogue System
– Pipeline outputs are represented as vectors → distributional
– Execution is constrained by backend services → symbolic
55. Q & A
Thanks for your attention!