6. QUOTES
• Probably the last PBMT paper ever
• People working on digital humanities don't really know what digital
humanities are…
• Kids learn language having heard only a very small amount of input – to
further advance AI we need to focus on low-resource conditions instead
of big data
• Home Made Restaurant Warmly
• to make by hand taste
9. KEYNOTE: MARK SELIGMAN, SPOKEN
TRANSLATION, INC.
PERCEPTUALLY GROUNDED DEEP SEMANTICS
IN FUTURE HYBRID MACHINE TRANSLATION
Nine Issues in Speech Translation
– Discourse
– Speech acts
– Topic tracking
– Domain
– Prosody
– Pauses
– Pitch, stress
– Translation mismatches
– System architecture, data
structures
Improve Statistical MT
• User feedback + machine learning
• More, better data
• Parsing > hybrid MT
10. KEYNOTE: MARK SELIGMAN, SPOKEN
TRANSLATION, INC.
PERCEPTUALLY GROUNDED DEEP SEMANTICS
IN FUTURE HYBRID MACHINE TRANSLATION
[Figure: the Japanese phrase 車を運転する人 "a person who drives a car"
(車 = car, を = object marker, 運転 = driving, する = do, 人 = person),
contrasting its syntactic structure (NP, VP, PP, V, N nodes) with its
semantic structure (a drive predicate linking person and car via agt,
obj, and mod relations)]
The Return of Semantics:
Interlingua/Ontologies
Grounded Semantics
12. JOAKIM NIVRE
UPPSALA UNIVERSITY, SWEDEN
Universal Dependencies - Dubious Linguistics and Crappy Parsing?
• Maximize parallelism – but don’t overdo it
• Don’t annotate the same thing in different ways
• Don’t make different things look the same
• Don’t annotate things that are not there
• Universal taxonomy with language-specific elaboration
• Languages select from a universal pool of categories
• Allow language-specific extensions
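
These principles show up concretely in UD's CoNLL-U annotation format. A toy example of my own (not from the talk); the columns are ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC, tab-separated in real files:

```
# text = The dog barks
1   The     the    DET    _   Definite=Def|PronType=Art   2   det     _   _
2   dog     dog    NOUN   _   Number=Sing                 3   nsubj   _   _
3   barks   bark   VERB   _   Number=Sing|Person=3        0   root    _   _
```

The dependency relations (det, nsubj, root) come from the universal taxonomy, while language-specific morphological detail lives in the FEATS column.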
13. JOAKIM NIVRE
UPPSALA UNIVERSITY, SWEDEN
Manning's law
1. UD needs to be satisfactory on linguistic analysis grounds for individual languages.
2. UD needs to be good for linguistic typology, i.e., providing a suitable basis for bringing
out cross-linguistic parallelism across languages and language families.
3. UD must be suitable for rapid, consistent annotation by a human annotator.
4. UD must be suitable for computer parsing with high accuracy.
5. UD must be easily comprehended and used by a non-linguist, whether a language
learner or an engineer with prosaic needs for language processing.
6. UD must support well downstream language understanding tasks (relation extraction,
reading comprehension, machine translation, …).
14. JOAKIM NIVRE
UPPSALA UNIVERSITY, SWEDEN
Dubious linguistics?
• Lexical dependencies and functional relations encoded in a
single tree
• Grounded in linguistic typology and dependency grammar
traditions
Crappy parsing?
• Not so bad with existing parsers, especially for cross-lingual
parsing
• Learn richer parsing models grounded in linguistic typology
15. REIKO MAZUKA
RIKEN BRAIN SCIENCE INSTITUTE, JAPAN
• 12-month-old babies are called 'old babies'
• Medical stuff has lots of data, lots of problems
• … let alone …
DINA DEMNER-FUSHMAN
U.S. NATIONAL LIBRARY OF MEDICINE, U.S.A.
SIMONE TEUFEL
UNIVERSITY OF CAMBRIDGE, U.K.
17. CHARNER: CHARACTER-LEVEL
NAMED ENTITY RECOGNITION
Onur Kuru, Ozan Arkan Can, Deniz Yuret
• Stacked bidirectional LSTMs
• Input: characters
• Output: tag probabilities for each character
• Character-level probabilities are then converted to word-level named
entity tags using a Viterbi decoder
• Close to state-of-the-art NER performance in seven languages with the
same basic model, using only labeled NER data and no hand-engineered
features or other external resources such as syntactic taggers or gazetteers
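
The character-to-word decoding step can be sketched in pure Python. The tag set, transition rule, and word-level scoring below are my own simplified illustration, not the paper's exact model:

```python
import math

# Hypothetical tag set and transition rule (illustrative, not the paper's)
TAGS = ["O", "B-PER", "I-PER"]

def allowed(prev, cur):
    # I-PER may only continue an entity started by B-PER or I-PER
    if cur == "I-PER":
        return prev in ("B-PER", "I-PER")
    return True

def word_level_viterbi(char_logprobs, word_lengths):
    """char_logprobs: one dict tag -> log prob per character.
    word_lengths: number of characters in each word.
    Returns one tag per word."""
    # Emission score of a (word, tag) pair = sum of its chars' log probs
    emissions, i = [], 0
    for n in word_lengths:
        emissions.append({t: sum(char_logprobs[i + k][t] for k in range(n))
                          for t in TAGS})
        i += n
    # Standard Viterbi over word positions
    best = [{t: (emissions[0][t], [t]) for t in TAGS}]
    for e in emissions[1:]:
        cur = {}
        for t in TAGS:
            cands = [(s + e[t], path + [t])
                     for p, (s, path) in best[-1].items() if allowed(p, t)]
            if cands:
                cur[t] = max(cands)
        best.append(cur)
    return max(best[-1].values())[1]
```

Scoring a whole word by the sum of its characters' log probabilities is one simple way to lift character-level predictions to the word level while the transition rule keeps the tag sequence well-formed.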
18. WHAT TOPIC DO YOU WANT TO HEAR ABOUT?
A BILINGUAL TALKING ROBOT
USING ENGLISH AND JAPANESE WIKIPEDIAS
Graham Wilcock, Kristiina Jokinen
21. INTERACTIVE ATTENTION
FOR NEURAL MACHINE TRANSLATION
Fandong Meng, Zhengdong Lu, Hang Li, Qun Liu
• Models the interaction between the decoder and the representation of
the source sentence during translation, via both reading and writing
operations
• Keeps track of the interaction history and thereby improves
translation performance
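
A minimal sketch of the read/write idea in plain Python; the proportional overwrite gate is my own simplification (the paper's gating is learned):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attentive_read(memory, query):
    """Read: attention-weighted sum of the source memory vectors."""
    weights = softmax([dot(m, query) for m in memory])
    ctx = [sum(w * m[d] for w, m in zip(weights, memory))
           for d in range(len(memory[0]))]
    return ctx, weights

def attentive_write(memory, weights, update):
    """Write: move each memory slot toward the update vector,
    in proportion to its attention weight (a simplified gate)."""
    return [[m[d] + w * (update[d] - m[d]) for d in range(len(m))]
            for m, w in zip(memory, weights)]
```

Because the write step modifies the source memory after each decoding step, subsequent reads see the interaction history rather than a static source representation.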
22. SUB-WORD SIMILARITY BASED SEARCH FOR
EMBEDDINGS: INDUCING RARE-WORD
EMBEDDINGS FOR WORD SIMILARITY TASKS AND
LANGUAGE MODELLING
Mittul Singh, Clayton Greenberg, Youssef Oualil, Dietrich Klakow
• Training good word embeddings requires large amounts of data, yet
out-of-vocabulary words will still be encountered
• Existing methods use computationally intensive morphological
analysis to generate embeddings
• The proposed system applies a computationally simpler sub-word
search over words that already have embeddings
• Up to 50% reduction in rare-word perplexity in comparison to other,
more complex language models
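
The sub-word search can be sketched as a character n-gram overlap lookup. The n-gram size, overlap scoring, and plain averaging below are illustrative assumptions, not the authors' exact method:

```python
def char_ngrams(word, n=3):
    # Boundary markers let n-grams capture prefixes and suffixes
    w = f"<{word}>"
    return {w[i:i + n] for i in range(len(w) - n + 1)}

def induce_embedding(oov, embeddings, n=3, top_k=3):
    """Average the embeddings of the in-vocabulary words whose
    character n-gram sets overlap most with the OOV word."""
    target = char_ngrams(oov, n)
    scored = sorted(embeddings,
                    key=lambda w: len(target & char_ngrams(w, n)),
                    reverse=True)[:top_k]
    dim = len(next(iter(embeddings.values())))
    return [sum(embeddings[w][d] for w in scored) / len(scored)
            for d in range(dim)]
```

Set-intersection over pre-computed n-gram sets is cheap compared to running a morphological analyzer, which is the computational point the paper makes.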
23. MULTI-ENGINE AND MULTI-ALIGNMENT BASED
AUTOMATIC POST-EDITING
AND ITS IMPACT ON TRANSLATION
PRODUCTIVITY
Santanu Pal, Sudip Kumar Naskar, Josef van Genabith
• Parallel system combination in the APE stage of a sequential MT-APE
combination
• Substantial translation improvements
• automatic evaluation (+5.9%)
• productivity in post-editing (21.76%)
• System combination on the level of APE alignments yields further
improvements
33. PREDICTING HUMAN SIMILARITY JUDGMENTS
WITH DISTRIBUTIONAL MODELS:
THE VALUE OF WORD ASSOCIATIONS
Simon De Deyne, Amy Perfors, Daniel J Navarro
• Internal language models that are more closely aligned with the
mental representations of words
• Count based model for text corpora
• Predicting structure from text corpora using word embeddings
• Count based model for word associations
• A spreading activation approach to semantic structure
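
The spreading-activation idea can be sketched on a toy word-association graph; the fixed decay and per-node normalization here are my own simplifications, not the authors' model:

```python
def spread_activation(graph, source, steps=2, decay=0.5):
    """graph: node -> {neighbor: association strength}.
    Returns the activation each node accumulates from `source`
    after a few steps of spreading with multiplicative decay."""
    activation = {source: 1.0}
    for _ in range(steps):
        nxt = dict(activation)
        for node, act in activation.items():
            weights = graph.get(node, {})
            total = sum(weights.values())
            for nb, w in weights.items():
                # Each node passes on a decayed, strength-weighted
                # share of its activation to its associates
                nxt[nb] = nxt.get(nb, 0.0) + decay * act * w / total
        activation = nxt
    return activation
```

Running more than one step lets activation reach indirect associates, which is how such models capture similarity between words that are never directly associated.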
34. EXTENDING THE USE OF ADAPTOR GRAMMARS
FOR UNSUPERVISED MORPHOLOGICAL
SEGMENTATION OF UNSEEN LANGUAGES
Ramy Eskander, Owen Rambow, Tianchun Yang
• Segmentation of words in a language into a sequence of
morphs
• Without rewriting or normalizing morphs
• Without identifying the stem
• Without identifying morphological features