
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty


  1. Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty (arXiv, 2019)
  2. Concerns for robustness and uncertainty • Robustness to Common Corruptions • Robustness to Adversarial Perturbations • Robustness to Label Corruptions • Out-of-Distribution Detection • Conclusion
  3. Self-supervision for learning without labelled data • Predict the relative position of image patches • Use the resulting representation to improve object detection
  4. Related work
  5. Create surrogate classes • Train by transforming seed image patches • E.g., predict image rotations
  6. Using colorization as a proxy (pretext) task
  7. Maximizing mutual information • Features extracted from multiple views of a shared context
  8. Robustness • Resilience across a variety of imperfect training and testing conditions: • Fog • Blur • JPEG compression • Adversarial attacks • Corrupted labels
  9. Examples: Robustness
  10. Out-of-distribution detection • Detecting data that is anomalous or significantly different from the training distribution
  11. Robustness to common corruptions • Corruption categories: Noise | Blur | Weather | Digital
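As an illustration of the Noise category above, a corruption can be simulated as additive Gaussian noise at graded severities. This is a sketch only: the severity constants and the `gaussian_noise` helper are illustrative assumptions, not the benchmark's exact parameters.

```python
import numpy as np

def gaussian_noise(img, severity=1):
    """Apply additive Gaussian noise (one 'Noise'-category corruption)
    at a given severity to an image with values in [0, 1].
    The sigma schedule below is a hypothetical stand-in."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```

A robustness benchmark would evaluate a trained classifier on such corrupted copies of the test set, averaging error over corruption types and severities.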
  12. Proposed method: robustness via rotation prediction • Auxiliary self-supervision in the form of predicting rotations
  13. Self-supervision with rotation prediction • Supervised classification alone is texture-biased • Auxiliary rotation prediction provides strong regularization that corrects this bias • Encourages the network to concentrate on global structure
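The auxiliary task above can be sketched as follows: each training image is rotated by 0/90/180/270 degrees, a rotation head predicts which rotation was applied, and its loss is added to the supervised loss. The helper names and the weight `lam` are illustrative assumptions, not the paper's exact code.

```python
import numpy as np

def make_rotation_batch(images):
    """Build the 4-way rotation-prediction task from a batch of images.

    images: (N, H, W, C) array. Returns the four rotated copies of each
    image and rotation labels in {0, 1, 2, 3} for 0/90/180/270 degrees.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k=k, axes=(0, 1)))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

def joint_loss(supervised_loss, rotation_loss, lam=0.5):
    """Total objective: classification loss plus a weighted auxiliary
    rotation-prediction loss (lam is a hypothetical weight)."""
    return supervised_loss + lam * rotation_loss
```

Because predicting a rotation requires knowing where the object's top and bottom are, the auxiliary head pushes the network toward global shape rather than local texture.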
  14. Results: 19 corruption categories
  15. Robustness to Adversarial Perturbations
  16. Robustness to label corruptions • Labels obtained from automatic or non-expert labeling
  17. Robustness to label corruption • The Gold Loss Correction (GLC) is a semi-verified method for label-noise robustness in deep learning classifiers
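The GLC idea can be sketched in two steps: estimate the label-corruption matrix from a small trusted ("gold") subset, then train a new model whose clean-label posterior is mapped through that matrix before being compared to the noisy labels. The function names below are illustrative, not the reference implementation.

```python
import numpy as np

def estimate_corruption_matrix(trusted_probs, trusted_labels, num_classes):
    """GLC step 1 (sketch): estimate C[i, j] ~ p(noisy label j | true label i)
    by averaging a noisy-trained classifier's softmax outputs over trusted
    examples, grouped by their verified true label.

    trusted_probs: (N, K) softmax outputs; trusted_labels: (N,) true labels.
    """
    C = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        C[i] = trusted_probs[trusted_labels == i].mean(axis=0)
    return C

def corrected_probs(clean_probs, C):
    """GLC step 2 (sketch): map the new model's clean-label posterior through
    C, so the training loss is taken against the noisy labels consistently."""
    return clean_probs @ C
```

If the trusted set is representative, the corrected model can fit the noisy labels without absorbing their bias into its clean-label predictions.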
  18. Out-of-distribution detection • Experiments with anomalies: Gaussian, Rademacher, Blobs, Textures, SVHN, Places365, LSUN, and CIFAR-100 images
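Rotation prediction also yields an anomaly score: rotate a test image four ways and measure how poorly the rotation head recovers the applied rotations. In-distribution images give confident, correct predictions (low score); anomalies give high scores. The scoring function below is a sketch under that assumption, not the paper's exact detector.

```python
import numpy as np

def rotation_ood_score(rot_probs):
    """OOD score (sketch): mean cross-entropy of the rotation head's
    predictions against the rotations actually applied; higher = more
    anomalous.

    rot_probs: (4, 4) array, row k = softmax output for the copy rotated
    by k * 90 degrees, so the correct class for row k is k.
    """
    eps = 1e-12  # avoid log(0)
    return -np.mean(np.log(np.diag(rot_probs) + eps))
```

A uniform (maximally unsure) rotation head scores ln 4 ≈ 1.386; a confident, correct head scores near 0.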
  19. Ablation study with ImageNet • Self-attention is useful in one-class OOD detection, enabling the network to more easily learn shape and compare regions across the whole image
  20. Self-attention with the Convolutional Block Attention Module (CBAM)
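CBAM refines a feature map in two stages: a channel-attention gate computed from global average- and max-pooled descriptors, then a spatial-attention gate computed from channel-wise pooled maps. The sketch below keeps that structure but replaces the module's learned layers with plain matrix weights (hypothetical stand-ins for the shared MLP and the 7×7 convolution in the real module).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_sketch(feat, w_channel, w_spatial):
    """Minimal CBAM-style refinement of one feature map.

    feat: (H, W, C) feature map; w_channel: (C, C) stand-in for the shared
    MLP; w_spatial: scalar stand-in for the 7x7 conv over the pooled maps.
    """
    # Channel attention: squeeze spatial dims by average- and max-pooling,
    # pass both descriptors through the shared weights, add, and gate.
    avg_c = feat.mean(axis=(0, 1))
    max_c = feat.max(axis=(0, 1))
    channel_gate = sigmoid(avg_c @ w_channel + max_c @ w_channel)  # (C,)
    feat = feat * channel_gate
    # Spatial attention: pool across channels, combine the two maps
    # (a sum stands in for the conv), and gate each spatial location.
    avg_s = feat.mean(axis=2)
    max_s = feat.max(axis=2)
    spatial_gate = sigmoid(w_spatial * (avg_s + max_s))            # (H, W)
    return feat * spatial_gate[..., None]
```

The spatial gate is what lets the network weigh regions across the whole image, which matches the slide's claim that self-attention helps capture global structure.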
  21. Conclusion • Rotation prediction can improve classifier robustness to common corruptions, adversarial perturbations, and label corruptions • Helps OOD detection • OOD detection works at large image sizes (224×224) • Self-attention is of great value in learning global structure
