DEEP LEARNING JP
[DL Papers]
Domain Generalization by Learning and Removing Domain-specific Features
Yuting Lin, Kokusai Kogyo Co., Ltd.(国際航業)
http://deeplearning.jp/
1
Bibliographic information
• Title
– Domain Generalization by Learning and Removing Domain-specific Features
• Authors
– Yu Ding¹, Lei Wang¹, Bin Liang², Shuming Liang², Yang Wang², Fang Chen²
– ¹University of Wollongong ²University of Technology Sydney
• Accepted at NeurIPS 2022
• Paper
– https://openreview.net/forum?id=37Rf7BTAtAM
• Code
– https://github.com/yulearningg/LRDG
2
Introduction
• Background
– Degraded generalization under domain shift remains an open problem in deep learning
– CNNs tend to rely on domain-dependent local features for recognition
• This paper focuses on domain generalization: learning models that remain applicable to unseen domains
– Issue with existing methods: domain-specific features are not actually removed
• They learn only domain-invariant information from the source domains, without explicitly modeling domain-specific information
• Some methods penalize local features such as texture so that they are not learned, but there is no guarantee that other local features are removed as well
– The paper proposes a method that explicitly removes domain-specific features (prevents them from being learned)
3
Overview
• Outline of the proposed method
– It consists of domain-specific classifiers, an encoder-decoder network, and a domain-invariant classifier
– One domain-specific classifier is prepared per source domain to extract the information specific to that domain
• trained so that it cannot handle the other source domains
– Images are mapped into a domain-invariant space
– The domain-invariant classifier performs the final recognition
• Difference from data-augmentation-style methods
– No additional training data is generated; instead, a domain-invariant intermediate representation is constructed
4
Proposed method
• Inspired by image-to-image translation methods
• Overall architecture of the proposed method
5
Source: https://openreview.net/forum?id=37Rf7BTAtAM
Details of the proposed method
• Learning domain-specific features
– For the N source domains $\mathcal{D}_s = \{D_s^1, D_s^2, \cdots, D_s^N\}$, prepare classifiers $\mathcal{F}_s = \{F_1, F_2, \cdots, F_N\}$
– $D_s^i$ should be handled only by $F_i$:
• Only $F_i$ can recognize it: training is guided by minimizing the classification loss (cross-entropy loss)
$\arg\min_{\theta_i} \; \mathbb{E}_{D_s^i \sim \mathcal{D}_s}\, \mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i} \big[ L_C\big(F_i(x_j^i; \theta_i), y_j^i\big) \big]$
• The other classifiers ($F_1, F_2, \cdots, F_{i-1}, F_{i+1}, \cdots, F_N$) must not be able to recognize it: an uncertainty loss (entropy loss) prevents such learning
$\arg\max_{\theta_i} \; \mathbb{E}_{D_s^k \sim \mathcal{D}_s,\, k \neq i}\, \mathbb{E}_{(x_j^k, y_j^k) \sim D_s^k} \big[ L_U\big(F_i(x_j^k; \theta_i)\big) \big]$
– Once pre-trained, the parameters of the classifiers $\mathcal{F}_s$ are frozen (a minimal training sketch follows below)
6
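Below is a minimal PyTorch-style sketch of this first training stage: each domain-specific classifier minimizes cross-entropy on its own source domain while its prediction entropy on the other source domains is maximized. The helper names (`entropy_loss`, `train_domain_specific_step`, `classifiers`, `batches`) and the single shared weight `lambda1` are illustrative assumptions, not the authors' implementation (see https://github.com/yulearningg/LRDG for the official code).

```python
import torch
import torch.nn.functional as F_nn

def entropy_loss(logits):
    """Uncertainty loss L_U: mean entropy of the predicted class distribution."""
    p = torch.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

def train_domain_specific_step(classifiers, optimizers, batches, lambda1=1.0):
    """One update per domain-specific classifier F_i.

    classifiers: list of N nn.Module instances, one per source domain
    optimizers:  list of N optimizers, one per classifier
    batches:     list of N (images, labels) tuples, batches[i] drawn from D_s^i
    """
    n = len(classifiers)
    for i, (clf, opt) in enumerate(zip(classifiers, optimizers)):
        x_i, y_i = batches[i]
        # L_C: cross-entropy on the classifier's own domain (minimized)
        loss = F_nn.cross_entropy(clf(x_i), y_i)
        # L_U: entropy on every other source domain (maximized, hence the minus sign)
        for k in range(n):
            if k != i:
                x_k, _ = batches[k]
                loss = loss - lambda1 * entropy_loss(clf(x_k))
        opt.zero_grad()
        loss.backward()
        opt.step()

# After this stage the classifiers are frozen, e.g.:
#   for clf in classifiers:
#       clf.requires_grad_(False)
```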
Details of the proposed method
• Removing domain-specific features
– An encoder-decoder maps the input image into a new image space
– That image space maximizes the uncertainty loss of every domain-specific classifier → the images are mapped into a domain-invariant image space
$\arg\max_{\theta_M} \; \mathbb{E}_{D_s^i \sim \mathcal{D}_s}\, \mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i} \big[ L_U\big(F_i(M(x_j^i; \theta_M); \theta_i)\big) \big]$
– A reconstruction loss (pixel-wise L2 loss) keeps the mapping from producing degenerate images
$\arg\min_{\theta_M} \; \mathbb{E}_{D_s^i \sim \mathcal{D}_s}\, \mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i} \big[ L_R\big(M(x_j^i; \theta_M), x_j^i\big) \big]$
– The domain-invariant classifier $F$ performs the recognition task (see the training sketch below)
$\arg\min_{\theta_M, \theta_F} \; \mathbb{E}_{D_s^i \sim \mathcal{D}_s}\, \mathbb{E}_{(x_j^i, y_j^i) \sim D_s^i} \big[ L_C\big(F(M(x_j^i; \theta_M); \theta_F), y_j^i\big) \big]$
7
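The second stage can be sketched in the same style: the encoder-decoder M and the domain-invariant classifier are updated jointly, while the pre-trained domain-specific classifiers stay frozen and only supply the uncertainty signal. Again, the function and variable names (`mapper`, `inv_clf`, `frozen_clfs`, `train_invariant_step`) and the loss weights are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F_nn

def train_invariant_step(mapper, inv_clf, frozen_clfs, optimizer, batches,
                         lambda2=1.0, lambda3=1.0):
    """One joint update of the encoder-decoder M (`mapper`) and the
    domain-invariant classifier F (`inv_clf`) on one batch per source domain."""
    total = 0.0
    for i, (x_i, y_i) in enumerate(batches):          # batches[i] ~ D_s^i
        x_hat = mapper(x_i)                           # M(x): image mapped to the new space
        # L_C: classification loss of the domain-invariant classifier (minimized)
        total = total + F_nn.cross_entropy(inv_clf(x_hat), y_i)
        # L_U: uncertainty of the frozen F_i on M(x); its parameters do not change,
        # but gradients still flow back into M through x_hat (maximized -> minus sign)
        p = torch.softmax(frozen_clfs[i](x_hat), dim=1)
        entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
        total = total - lambda2 * entropy
        # L_R: pixel-wise L2 reconstruction loss keeps M(x) close to the input
        total = total + lambda3 * F_nn.mse_loss(x_hat, x_i)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()

# `optimizer` is assumed to hold the parameters of both `mapper` and `inv_clf`, e.g.
#   torch.optim.SGD(list(mapper.parameters()) + list(inv_clf.parameters()), lr=1e-3)
```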
Details of the proposed method
• Losses of the proposed method
– Stage 1: training the domain-specific classifiers
$L_1 = L_C^{\mathcal{F}_s} + \lambda_1 L_U^{\mathcal{F}_s}$
– Stage 2: training the domain-invariant classifier
$L_2 = L_C^{F \circ M} + \lambda_2 L_U^{M} + \lambda_3 L_R^{M}$
8
Theoretical analysis of the proposed method
• Theoretical analysis
– Perfect labeling function: $f : \mathcal{X} \rightarrow \mathcal{Y}$
– Actual function (hypothesis): $h : \mathcal{X} \rightarrow \mathcal{Y}$
– Risk of $h$ on domain $\mathcal{D}$: $\mathcal{R}[h] = \mathbb{E}_{x \sim \mathcal{D}}\big[ \mathcal{L}\big(h(x) - f(x)\big) \big]$
– With multiple source domains, they are represented by a mixture distribution
• $\Lambda_S = \big\{ \bar{D} : \bar{D}(\cdot) = \sum_{i=1}^{N} \pi_i D_S^i(\cdot),\; 0 \le \pi_i \le 1,\; \sum_{i=1}^{N} \pi_i = 1 \big\}$
– From prior work [1], a generalization risk bound between $\Lambda_S$ and $\mathcal{D}_t$ is obtained
– The upper bound on the risk for $\mathcal{D}_t$ depends on $\gamma$ and $\epsilon$
– The proposed method removes domain-specific features, mapping $D_s^1, D_s^2, \cdots, D_s^N, \mathcal{D}_t$ to $\hat{D}_s^1, \hat{D}_s^2, \cdots, \hat{D}_s^N, \hat{\mathcal{D}}_t$
• $d_{\mathcal{H}}\big(\hat{D}_s^i, \hat{D}_s^j\big) \le d_{\mathcal{H}}\big(D_s^i, D_s^j\big)$ → the source-domain distributions move closer together and $\epsilon$ becomes smaller
• $\hat{\mathcal{D}}_t$ also moves closer to the source-domain distributions, so $\gamma$ becomes smaller
9
[1] Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and Ioannis Mitliagkas. Generalizing to unseen domains via distribution matching. arXiv preprint arXiv:1911.00804, 2020.
Experimental setup
• Datasets
– PACS: Photo (P), Art Painting (A), Cartoon (C) and Sketch (S)
– VLCS: PASCAL VOC 2007 (V), LabelMe (L), Caltech (C) and Sun (S)
– Office-Home: Art (A), Clipart (C), Product (P), and Real-World (R)
– In each dataset, three domains are used as source domains and the remaining one as the target domain
• Network:
– Encoder-decoder: U-Net
– Classifier (ImageNet pretrained)
• AlexNet: PACS, VLCS
• ResNet18: PACS and Office-Home
• ResNet50: PACS
• Compared methods:
– domain-invariant based, data-augmentation based, and meta-learning based methods
– Baseline: empirical risk minimization (ERM)
10
Results: quantitative evaluation
• Results on PACS
– Domains: Art painting, Cartoon, Photo, Sketch
– Evaluated overall, the proposed method achieves the highest accuracy
– Comparison with methods that use both domain-specific and domain-invariant information
• Art painting and Photo share domain-specific features that have a positive effect on recognizing each other's domain, whereas for Cartoon and Sketch the effect tends to be negative
11
Source: https://deepai.org/publication/domain-generalization-via-gradient-surgery
Results: quantitative evaluation
• Results on VLCS: second-best accuracy
• Results on Office-Home: best accuracy
• The effectiveness of the proposed method is confirmed
12
Source: https://deepai.org/publication/domain-generalization-via-gradient-surgery
Results: domain divergence
• Source domain divergence
– Evaluated with the H-divergence:
• measured by how well a classifier (linear SVM) can distinguish the source domains
• Proxy A-distance: $2(1 - 2\varepsilon)$, where $\varepsilon$ is the test error
• Baseline: features from the last layer of AlexNet are fed to the SVM
• Proposed method: features from the last layer of the domain-invariant classifier are fed to the SVM
– The proposed method (mapped source domains) brings the domain distributions closer together (a small computation sketch follows below)
13
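As a concrete reading of this evaluation, the snippet below computes the proxy A-distance between two domains from pre-extracted feature vectors with a linear SVM. The 50/50 split, the SVM hyperparameters, and the function name are assumptions; feature extraction itself (AlexNet baseline or the domain-invariant classifier) is omitted.

```python
import numpy as np
from sklearn.svm import LinearSVC

def proxy_a_distance(feats_a, feats_b, test_ratio=0.5, seed=0):
    """Proxy A-distance 2(1 - 2*eps) between two domains.

    feats_a, feats_b: (n, d) arrays of features from domain A and domain B.
    """
    X = np.concatenate([feats_a, feats_b])
    y = np.concatenate([np.zeros(len(feats_a)), np.ones(len(feats_b))])
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = int(len(X) * (1.0 - test_ratio))
    train, test = idx[:split], idx[split:]
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X[train], y[train])               # train a domain classifier
    eps = 1.0 - clf.score(X[test], y[test])   # its test error
    return 2.0 * (1.0 - 2.0 * eps)
```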
Results: domain divergence
• Source-target domain divergence
– Evaluated as the divergence between $\Lambda_S$ and $\mathcal{D}_t$
• $\Lambda_S = \big\{ \bar{D} : \bar{D}(\cdot) = \sum_{i=1}^{N} \pi_i D_S^i(\cdot),\; 0 \le \pi_i \le 1,\; \sum_{i=1}^{N} \pi_i = 1 \big\}$
• $\pi_i$ is randomly sampled from {0, 0.1, 0.2, · · · , 0.9, 1} for the evaluation (see the sketch after this slide)
– The proposed method reduces the divergence between the source domains and the target domain
• By reducing both the divergence among the source-domain distributions ($\epsilon$) and the divergence between the source and target distributions ($\gamma$), the proposed method lowers the generalization risk bound
14
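A sketch of this source-target comparison under the same assumptions: mixture weights π_i are drawn from {0, 0.1, ..., 1}, a mixed source feature set is sampled accordingly, and its divergence from the target features can then be measured with the `proxy_a_distance` sketch above. The sampling grid, normalization, and sample sizes are illustrative, not the paper's exact protocol.

```python
import numpy as np

def mixed_source_features(source_feats, n_samples, seed=0):
    """Draw a feature set from the mixture sum_i pi_i * D_S^i.

    source_feats: list of (n_i, d) arrays, one per source domain.
    Returns the mixed feature array and the (normalized) mixture weights pi.
    """
    rng = np.random.default_rng(seed)
    pi = rng.choice(np.arange(0.0, 1.01, 0.1), size=len(source_feats))
    if pi.sum() == 0:
        pi = np.ones_like(pi)                 # avoid an all-zero draw
    pi = pi / pi.sum()
    parts = []
    for w, feats in zip(pi, source_feats):
        k = int(round(w * n_samples))
        if k > 0:
            parts.append(feats[rng.integers(0, len(feats), size=k)])
    return np.concatenate(parts), pi

# Usage (hypothetical): mixed, pi = mixed_source_features(src_feats, 2000)
#                       pad = proxy_a_distance(mixed, target_feats)
```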
Summary
• Proposed a method that explicitly removes domain-specific information
– The network consists of domain-specific classifiers, an encoder-decoder network, and a domain-invariant classifier
– Trained in a two-step scheme
• Limitations
– A domain-specific classifier must be prepared for each source domain
• a new design of domain-specific classifier would be desirable
– Domain-specific information of the target domain cannot be removed
• performing the removal in the latent space is a possible direction
15