One Shot Learning
SK Telecom Video Tech. Lab.
김지성 Manager
Presented at: TensorFlow KR 2nd meetup (lightning talk)
  1. One Shot Learning (SK Telecom Video Tech. Lab., 김지성 Manager)
  2. Contents
     Why do we need one-shot learning?
     What is one-shot learning?
     How to do "one-shot learning"
     Recap
  3. Lab Introduction using Backpropagation: 종합기술원 → 미래기술원 → Video Tech. Lab
  4. One Shot Learning: learning a class from a single labelled example
  5. Giraffe Classification: a child vs. a DNN. A child can learn the concept of the class "giraffe" from a single photo; a DNN needs hundreds or thousands of images to learn one class.
  6. Korean Food Classifier
  7. CNN Architecture
     1. Trains in mini-batches (e.g., 256 images at a time): 1.2M images, 90 epochs.
     2. Optimization is gradient-based (extensive, incremental training on a large DB).
     3. New images are reflected in the learned weights only slowly.
     4. Therefore it is not suited to one-shot learning! (This training regime is sketched below.)
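A minimal sketch of the mini-batch, gradient-based regime the slide describes. A linear softmax model and random arrays stand in for the real CNN and image DB; all sizes here are illustrative, not the authors' setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 64))          # stand-in for the 1.2M-image DB
    y = rng.integers(0, 10, size=10_000)       # stand-in class labels
    W = np.zeros((64, 10))                     # "weights" of our toy model

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    batch, lr, epochs = 256, 0.1, 3            # slide: 256-image batches, 90 epochs
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for i in range(0, len(X), batch):
            idx = order[i:i + batch]
            p = softmax(X[idx] @ W)
            p[np.arange(len(idx)), y[idx]] -= 1.0   # d(cross-entropy)/d(logits)
            W -= lr * (X[idx].T @ p) / len(idx)     # one gradient step per batch

The point of the sketch is the slide's complaint: knowledge enters only through many small gradient steps over a large dataset, so a single new example barely moves the weights.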
  8. Machine Learning Principle: test and train conditions must match.
  9. Matching Networks Architecture (DeepMind, '16)
     1. Is it really good to show hundreds of images per class at training time when the test shows only one?
     2. Training should also show only one image per class (or 5)!
     3. g and f are neural networks (VGG, Inception) that embed the inputs.
     4. Compute similarities between the embedded feature vectors.
     5. The final test label is a weighted sum over those similarities (see the sketch below).
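A sketch of that attention read-out, assuming random vectors in place of the CNN embeddings f and g (in the paper these are VGG/Inception features, optionally refined by LSTMs): cosine similarity between the embedded test input and each embedded support example, softmax over the similarities, then a weighted sum of the support labels.

    import numpy as np

    rng = np.random.default_rng(0)
    support = rng.normal(size=(5, 4096))       # g(x_i): 5 candidates, fc7-sized
    labels = np.eye(5)                         # one-hot y_i, one class each
    query = rng.normal(size=4096)              # f(x_hat): embedded test input

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    sims = np.array([cosine(query, s) for s in support])
    att = np.exp(sims) / np.exp(sims).sum()    # softmax attention a(x_hat, x_i)
    y_hat = att @ labels                       # weighted sum -> label distribution
    print(y_hat.argmax())

Everything here is differentiable, which is why the next slides can call the whole thing an end-to-end trainable nearest-neighbour classifier.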
  10. Matching Networks Architecture: Test Example
      [Figure: support candidates (Chihuahua, Retriever, Siberian Husky, Shepherd) from the training set and a Shepherd test input, each embedded by a CNN (fc7, 4096-dim); attention by softmax over cosine distance selects the test label.]
  11. Matching Networks Architecture: Test Example
      It's differentiable! An end-to-end nearest-neighbour classifier.
      [Figure: same setup as slide 10, with the CNN embeddings further conditioned by LSTM / Bi-LSTM context over the support set.]
  12. Matching Networks Architecture
      [Figure: four support CNNs (fc7, 4096-dim each) feeding a Bi-LSTM over the support set.]
  13. Matching Networks Result
  14. Recap
      Why do we need one-shot learning? For when there is little data for training/testing.
      What is one-shot learning? Learning a class from a single labelled example.
      How to do "one-shot learning"? Start with an Omniglot example: import tensorflow as tf (a warm-up sketch follows below).
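As a warm-up for the "start with Omniglot" pointer, here is the simplest possible one-shot classifier: 1-nearest-neighbour on raw pixels. It uses NumPy rather than the slide's TensorFlow import, and the arrays are random placeholders for Omniglot characters; a real loader (e.g., the 105x105 images used later) would slot in the same way.

    import numpy as np

    rng = np.random.default_rng(1)
    support = rng.normal(size=(5, 400))        # one 20x20 example per class, flattened
    query = support[3] + 0.1 * rng.normal(size=400)  # noisy copy of class 3

    dists = np.linalg.norm(support - query, axis=1)
    print("predicted class:", dists.argmin())  # nearest support example wins

When the Matching Networks attention collapses to one-hot, the model reduces to exactly this kind of nearest-neighbour read-out, which is the "end-to-end nearest neighbour classifier" of slide 11.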
  15. Face Recognition
  16. Appendix
  17. One-Shot Learning: how can we learn a novel concept from a few examples? (A random guess scores 1/N.)
  18. Problem Setting - Training: Korean
  19. Problem Setting - Training: Greek
  20. Problem Setting - Testing
      Support classes: 1 Angelic, 2 Oriya, 3 Tibetan, 4 Keble, 5 ULOG.
      The correct answer is revealed once (the "one shot"); new characters are then tested as a 5-way choice, so a random guess scores 1/N. (The episode protocol is sketched below.)
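A sketch of that evaluation protocol on stand-in data: pick N unseen character classes, reveal one labelled example per class, then ask for the label of a fresh sample. The class names are the ones on the slide; the data dict and image shapes are hypothetical placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    classes = ["Angelic", "Oriya", "Tibetan", "Keble", "ULOG"]
    data = {c: rng.normal(size=(20, 400)) for c in classes}  # placeholder images

    def sample_episode(n_way=5):
        chosen = rng.choice(classes, size=n_way, replace=False)
        support = {c: data[c][rng.integers(20)] for c in chosen}  # 1 shot per class
        target = rng.choice(chosen)
        query = data[target][rng.integers(20)]                    # a fresh sample
        return support, query, target

    support, query, target = sample_episode()
    # classify by nearest support example; on this random data the
    # result is chance-level, i.e. 1/n_way
    pred = min(support, key=lambda c: np.linalg.norm(support[c] - query))
    print("chance accuracy:", 1 / len(support), "| predicted:", pred)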
  21. Recent Papers about One Shot Learning
      One-shot Learning with Memory-Augmented Neural Networks
      a) Submitted on: 16.05.19
      b) Written by: DeepMind
      c) Link: https://arxiv.org/abs/1605.06065
      Matching Networks for One Shot Learning
      a) Submitted on: 16.06.13
      b) Written by: DeepMind
      c) Link: https://arxiv.org/abs/1606.04080
  22. One Shot Learning with Memory Augmented Neural Network
  23. MANN Architecture
      [Figure: the architecture, with the external memory labelled as the RAM and the controller as the CPU.]
  24. Neural Turing Machine vs Memory Augmented Neural Network
      1. MANN is a variant of the NTM.
      2. The controller is a feed-forward NN or an LSTM.
      3. The original NTM was used to learn algorithms such as copying and sorting; this paper applies it to one-shot learning.
      4. For fast learning, writes go to either the least recently used or the most recently used memory slot (LRUA; sketched below).
      5. Which raises the basic question: why do we need augmented memory at all?
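A sketch of the LRUA write rule from the MANN paper: each write goes either to the most recently read slot or to the least-used slot, blended by a learned gate sigma(alpha). Here the gate value, shapes, and usage initialization are illustrative; in the paper alpha comes from the controller, and the usage weights are also refreshed by the read weights at every step.

    import numpy as np

    rng = np.random.default_rng(0)
    n_slots, width = 128, 40
    M = np.zeros((n_slots, width))             # external memory
    usage = rng.random(n_slots)                # w_u: decayed usage weights
    read_prev = np.zeros(n_slots)
    read_prev[7] = 1.0                         # last step's read weights

    def lrua_write(M, usage, read_prev, key, alpha=0.0, gamma=0.95):
        lu = np.zeros_like(usage)
        lu[usage.argmin()] = 1.0               # least-used slot (one read head)
        gate = 1 / (1 + np.exp(-alpha))        # sigma(alpha)
        w_write = gate * read_prev + (1 - gate) * lu
        M = M + np.outer(w_write, key)         # M_t(i) += w_write(i) * k_t
        usage = gamma * usage + w_write        # paper also adds read weights here
        return M, usage, w_write

    M, usage, w = lrua_write(M, usage, read_prev, key=rng.normal(size=width))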
  25. Why do we need Differentiable Memory?
      As is well known, CNNs are good for spatial structure (images, …) and RNNs are good for temporal structure (audio, …).
  26. Augmenting Neural Nets with a Memory Module
      CNN + RNN is good for spatiotemporal structure (video, …). But does a CNN, an RNN, or their combination work well even on a very simple question-and-answering problem?
  27. Augmenting Neural Nets with a Memory Module
  28-31. Augmenting Neural Nets with a Memory Module (four slides stepping through one example)
      bAbI task Example Story:
      Sam moved to the garden. Mary left the milk. John left the football. Daniel moved to the garden. Sam went to the kitchen. Sandra moved to the hallway. Mary moved to the hallway. Mary left the milk. Sam drops the apple there.
      Q: Where was the apple after the garden?
  32. Augmenting Neural Nets with a Memory Module
      In short: when data access is out-of-order, conventional CNNs and RNNs do not handle it well. Work is still ongoing for data with the following structure:
      1. Out-of-order (non-sequential) access
      2. Long-term memory
      3. Unordered data
  33. Differentiable Memory: MemN2N
      1. Words are turned into one-hot vectors, and the story sentences are stored in memory via the A, B, C embedding matrices.
      2. Attention is determined by the inner product with the embedded question sentence (u).
      3. The output vector o is the attention-weighted sum of the output memory.
      4. The answer is finally read out as Softmax(W(u+o)). (A single-hop forward pass is sketched below.)
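A single-hop MemN2N forward pass matching the steps above. Sentences are embedded by A (input memory) and C (output memory) and the question by B, here as plain bag-of-words sums without the paper's position encoding; all matrices are random stand-ins, not trained weights.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab, dim, n_sent, n_ans = 30, 16, 9, 30
    A = rng.normal(size=(vocab, dim))          # input-memory embedding
    B = rng.normal(size=(vocab, dim))          # question embedding
    C = rng.normal(size=(vocab, dim))          # output-memory embedding
    W = rng.normal(size=(dim, n_ans))          # final answer projection

    story = rng.integers(0, vocab, size=(n_sent, 6))   # word ids per sentence
    question = rng.integers(0, vocab, size=6)

    m = A[story].sum(axis=1)                   # memory vectors m_i (bag of words)
    c = C[story].sum(axis=1)                   # output vectors c_i
    u = B[question].sum(axis=0)                # embedded question u

    p = np.exp(m @ u)
    p /= p.sum()                               # attention p_i = softmax(u . m_i)
    o = p @ c                                  # weighted sum of output memory
    logits = (u + o) @ W                       # answer = Softmax(W(u + o))
    print(logits.argmax())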
  34. MANN - Task Setup
      1. (Figure) Training set.
      2. Input images are flattened: 105 x 105 → 20 x 20 → 400 x 1 (sketched below).
      3. Time-offset labels (x_t, y_t-1) are used so that the current answer feeds into the next input.
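The preprocessing in step 2, as a small sketch. Block averaging stands in for whatever resampling the authors actually used, and the crop from 105 to 100 pixels is a simplification to make the blocks divide evenly.

    import numpy as np

    img = np.random.default_rng(0).random((105, 105))  # stand-in Omniglot image
    crop = img[:100, :100]                     # 100 = 20 * 5, so clean 5x5 blocks
    small = crop.reshape(20, 5, 20, 5).mean(axis=(1, 3))   # downsample to 20x20
    x = small.reshape(400)                     # flatten: the 400 x 1 network input
    print(x.shape)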
  35. MANN - Network Strategy
      1. Episode = Task: the sequence (x_t, y_t-1) used for training (construction sketched below).
      2. One episode is 50 steps long for 5 classes, 100 steps for 10 classes.
      3. Training runs 100,000 such episodes.
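A sketch of episode construction: for 5 classes an episode is 50 (image, label) steps, and the network is fed time-offset pairs (x_t, y_t-1) so the correct label for an image only arrives at the next step. The data, the shuffling, and the null start label are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    n_classes, per_class = 5, 10               # 5 classes -> 50-step episode
    images = rng.normal(size=(n_classes, per_class, 400))

    steps = [(images[c, i], c)
             for c in range(n_classes) for i in range(per_class)]
    steps = [steps[i] for i in rng.permutation(len(steps))]  # random class order

    episode, prev_label = [], -1               # -1: null label at the first step
    for x_t, y_t in steps:
        episode.append((x_t, prev_label))      # network input is (x_t, y_{t-1})
        prev_label = y_t                       # y_t becomes the next step's label
    print(len(episode))                        # 50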
  36. MANN - Network Strategy
      1. Time-offset labels (x_t, y_t-1) are used so that the current answer feeds into the next input.
      2. The external memory stores (image, label) pairs.
      3. If the external memory holds an image highly similar to a new input, it is read to produce the answer (see the read sketch below).
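The read path in step 3, sketched: compare the new input against keys already written this episode and read out the best match by cosine similarity. The keys and labels are illustrative random stand-ins, and the exact similarity and read-out details vary between implementations.

    import numpy as np

    rng = np.random.default_rng(0)
    keys = rng.normal(size=(8, 40))            # keys written earlier this episode
    stored_labels = rng.integers(0, 5, size=8) # labels paired with each key
    query = keys[5] + 0.05 * rng.normal(size=40)   # new input, close to slot 5

    sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query))
    w_read = np.exp(sims) / np.exp(sims).sum() # softmax read weights
    print("predicted label:", stored_labels[w_read.argmax()])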
  37. Omniglot classification - LSTM (5 way)
  38. Omniglot classification - MANN (5 way)
  39. Omniglot classification - LSTM (15 way)
  40. Omniglot classification - MANN (15 way)
  41. Human vs MANN
      1. When humans are given the same task (shown one image, asked to pick a number from 1 to 5, with the correct answer revealed after every instance), human accuracy on the 1-shot case is 57.3%.
      2. MANN, however, reaches 82.8% accuracy.
      3. The Matching Net described earlier reaches 98.1% accuracy.
  42. Summary
      1. Internal memory (CNN, RNN weights) is slow to absorb new information.
      2. External memory (MemN2N, NTM) allows new information to be stored and retrieved quickly (MANN).
      3. Matching Networks combine metric learning on deep neural features with the external-memory idea.
      4. These methods have a big advantage when training images are scarce, but in the opposite case they trail the state of the art.
