The VoiceMOS Challenge 2022
Wen-Chin Huang¹, Erica Cooper², Yu Tsao³, Hsin-Min Wang³, Tomoki Toda¹, Junichi Yamagishi²
¹Nagoya University, Japan
²National Institute of Informatics, Japan
³Academia Sinica, Taiwan
Invited talk at the 141st Meeting of the Special Interest Group on Spoken Language Processing (SIG-SLP)
Outline
I. Introduction
II. Challenge description
A. Tracks and datasets
B. Rules and timeline
C. Evaluation metrics
D. Baseline systems
III. Challenge results
A. Participants demographics
B. Results, analysis and discussion
1. Comparison of baseline systems
2. Analysis of top systems
3. Sources of difficulty
4. Analysis of metrics
IV. Conclusions
Introduction
It is important to evaluate speech synthesis systems, e.g., text-to-speech (TTS) and voice conversion (VC).
Speech quality assessment: subjective test
Mean opinion score (MOS) test: rate the quality of individual samples.
Drawbacks:
1. Expensive: costs too much time and money.
2. Context-dependent: scores cannot be meaningfully compared across different listening tests.
Speech quality assessment: objective assessment
Data-driven MOS prediction (mostly based on deep learning).
Drawback: poor generalization ability due to limited training data.
Goals of the VoiceMOS challenge
*Accepted as an Interspeech 2022 special session!
● Encourage research in automatic data-driven MOS prediction
● Compare different approaches using shared datasets and evaluation
● Focus on the challenging case of generalizing to a separate listening test
● Promote discussion about the future of this research field
Challenge description
I. Tracks and datasets
II. Rules and timeline
III. Evaluation metrics
IV. Baseline systems
Challenge platform: CodaLab
Open-source web-based platform for reproducible machine learning research.
Tracks and dataset: Main track
The BVCC Dataset
● Samples from 187 different systems all rated together in one listening test
○ Past Blizzard Challenges (text-to-speech synthesis) since 2008
○ Past Voice Conversion Challenges (voice conversion) since 2016
○ ESPnet-TTS (implementations of modern TTS systems), 2020
● Test set contains some unseen systems, unseen listeners, and unseen speakers, and is balanced to match the distribution of scores in the training set
Tracks and dataset: OOD track
Listening test data from the Blizzard Challenge 2019
● “Out-of-domain” (OOD): data from a completely separate listening test
● Chinese-language synthesis from systems submitted to the 2019 Blizzard Challenge
● Test set has some unseen systems and unseen listeners
● Designed to reflect a real-world setting where a small amount of labeled data is available
● Study generalization ability to a different listening test context
● Encourage unsupervised and semi-supervised approaches using unlabeled data
Data split: labeled train set (10%), unlabeled train set (40%), dev set (10%), test set (40%)
Dataset summary
Rules and timeline
Training phase (2021/12/5 – 2022/2/21)
✔ Main track train/dev audio + ratings
✔ OOD track train/dev audio + ratings
Leaderboard available; teams must submit at least once.
Break phase (2022/2/21 – 2022/2/28)
No submissions; organizers conduct result analysis.
Evaluation phase (2022/2/28 – 2022/3/7)
✔ Main track test audio
✔ OOD track test audio
Leaderboard available; max 3 submissions.
Post-evaluation phase (after 2022/3/7)
✔ Main track test ratings
✔ OOD track test ratings
✔ Evaluation results
Submission & leaderboard reopen.
General rules
● External data is permitted.
● All participants must submit a system description.
Evaluation metrics
System-level and Utterance-level
● Mean Squared Error (MSE): difference between predicted and actual MOS
● Linear Correlation Coefficient (LCC): a basic correlation measure
● Spearman Rank Correlation Coefficient (SRCC): non-parametric; measures ranking order
● Kendall Tau Rank Correlation (KTAU): more robust to errors
import numpy as np
import scipy.stats
# `true_mean_scores` and `predict_mean_scores` are both 1-d numpy arrays.
MSE = np.mean((true_mean_scores - predict_mean_scores)**2)
LCC = np.corrcoef(true_mean_scores, predict_mean_scores)[0][1]
SRCC = scipy.stats.spearmanr(true_mean_scores, predict_mean_scores)[0]
KTAU = scipy.stats.kendalltau(true_mean_scores, predict_mean_scores)[0]
Following prior work, we picked system-level SRCC as the main evaluation metric.
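System-level metrics are computed after averaging the utterance-level predictions within each system, then applying the same formulas as above. A minimal sketch of that aggregation step (the system IDs and scores below are made up for illustration):

```python
import numpy as np
import scipy.stats

# Hypothetical utterance-level data: each utterance belongs to one system.
systems = np.array(["A", "A", "B", "B", "C", "C"])
true_utt = np.array([3.0, 3.4, 4.1, 4.3, 2.0, 2.2])
pred_utt = np.array([3.2, 3.1, 4.4, 4.0, 2.5, 2.1])

# Average per system, then compute the system-level SRCC.
sys_ids = np.unique(systems)
true_sys = np.array([true_utt[systems == s].mean() for s in sys_ids])
pred_sys = np.array([pred_utt[systems == s].mean() for s in sys_ids])
srcc = scipy.stats.spearmanr(true_sys, pred_sys)[0]
print(srcc)  # 1.0 here: the predicted system ranking matches the true one
```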
Baseline system: SSL-MOS
Fine-tune a self-supervised learning based (SSL) speech model for the MOS prediction task
● Pretrained wav2vec2
● Simple mean pooling and a linear fine-tuning layer
● Wav2vec2 model parameters are updated during fine-tuning
E. Cooper, W.-C. Huang, T. Toda, and J. Yamagishi, “Generalization ability of MOS prediction networks,” in Proc. ICASSP, 2022
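The head on top of the SSL model is deliberately simple: mean-pool the frame-level embeddings over time and apply one linear layer. A numpy sketch of just that head (the wav2vec2 encoder itself is omitted; the shapes and random weights here are purely illustrative, not the trained SSL-MOS model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend frame-level embeddings from an SSL encoder: (num_frames, hidden_dim).
frames = rng.standard_normal((120, 768))

# Linear fine-tuning layer mapping hidden_dim -> 1 MOS value.
W = rng.standard_normal((768, 1)) * 0.01
b = np.array([3.0])  # bias near the middle of the 1-5 MOS scale

pooled = frames.mean(axis=0)   # simple mean pooling over time
mos = float(pooled @ W + b)    # scalar MOS prediction
print(round(mos, 2))
```

During fine-tuning, both this head and the wav2vec2 parameters are updated.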
Baseline system: MOSANet
● Originally developed for noisy speech assessment
● Cross-domain input features:
○ Spectral information
○ Complex features
○ Raw waveform
○ Features extracted from SSL models
R. E. Zezario, S.-W. Fu, F. Chen, C.-S. Fuh, H.-M. Wang, and Y. Tsao, “Deep Learning-based Non-Intrusive Multi-Objective Speech Assessment Model with Cross-Domain Features,” arXiv preprint arXiv:2111.02363, 2021.
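The idea behind cross-domain inputs is to hand the model several complementary views of the same waveform. A toy sketch of assembling such views with numpy/scipy (the feature choices and sizes are illustrative, not the actual MOSANet recipe):

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
wave = rng.standard_normal(16000)  # 1 second of fake audio at 16 kHz

# View 1: spectral magnitude. View 2: complex spectrum (real/imag stacked).
f, t, Z = stft(wave, fs=16000, nperseg=512)
magnitude = np.abs(Z)                       # (freq_bins, frames)
complex_feats = np.stack([Z.real, Z.imag])  # (2, freq_bins, frames)

# View 3: the raw waveform itself, cut into fixed-length frames.
frame_len = 512
n_frames = len(wave) // frame_len
raw_frames = wave[: n_frames * frame_len].reshape(n_frames, frame_len)

print(magnitude.shape, complex_feats.shape, raw_frames.shape)
```

In MOSANet these views (plus SSL features) feed separate branches that are merged before prediction.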
Baseline system: LDNet
Listener-dependent modeling
● Specialized model structure and inference method allow making use of multiple ratings per audio sample.
● No external data is used!
W.-C. Huang, E. Cooper, J. Yamagishi, and T. Toda, “LDNet: Unified Listener Dependent Modeling in MOS Prediction for Synthetic Speech,” in Proc. ICASSP, 2022.
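LDNet's inference idea can be caricatured as: condition the prediction on a listener ID, then average the per-listener predictions to get a listener-independent MOS. A toy sketch (the "model" here is a made-up base score plus per-listener bias, not LDNet itself):

```python
import numpy as np

# Hypothetical per-listener biases learned during training.
listener_bias = {"L01": +0.3, "L02": -0.1, "L03": -0.2}

def predict_for_listener(utt_feat: np.ndarray, listener: str) -> float:
    # Stand-in for a listener-conditioned network: base score + listener bias.
    base = float(np.clip(utt_feat.mean() + 3.0, 1.0, 5.0))
    return base + listener_bias[listener]

utt = np.zeros(16)  # placeholder utterance features -> base score 3.0

# Listener-independent MOS: average over all known listener IDs.
mos = float(np.mean([predict_for_listener(utt, l) for l in listener_bias]))
print(round(mos, 2))  # biases cancel here: 3.0 + (0.3 - 0.1 - 0.2)/3 = 3.0
```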
Challenge results
I. Participants demographics
II. Results, analysis and discussion
A. Comparison of baseline systems
B. Analysis of top systems
C. Sources of difficulty
D. Analysis of metrics
Participants demographics
Number of teams: 22 teams + 3 baselines
14 teams from academia, 5 from industry, 3 individual participants
Main track: 21 teams + 3 baselines
OOD track: 15 teams + 3 baselines
Baseline systems:
● B01: SSL-MOS
● B02: MOSANet
● B03: LDNet
Overall evaluation results: main track, OOD track
Comparison of baseline systems: main track
In terms of system-level SRCC, 11 teams outperformed the best baseline, B01!
However, the gap between the best baseline and the top system is not large…
Comparison of baseline systems: OOD track
In terms of system-level SRCC, only 2 teams outperformed or were on par with B01.
The gap is even smaller…
Participant feedback: “The baseline was too strong! Hard to get improvement!”
Analysis of approaches used
● Main track: Finetuning SSL > using SSL features > not using SSL
● OOD track: finetuned SSL models were both the best and worst systems
● Popular approaches:
○ Ensembling (top team in main track; top 2 teams in OOD track)
○ Multi-task learning
○ Use of speech recognizers (top team in OOD track)
Analysis of approaches used
● 7 teams used per-listener ratings
● No teams used listener demographics
○ One team used “listener group”
● OOD track: only 3 teams used the unlabeled data:
○ Conducted their own listening test (top team)
○ Task-adaptive pretraining
○ “Pseudo-label” the unlabeled data using a trained model
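The pseudo-labeling idea from the last bullet is a two-stage loop: train on the labeled split, label the unlabeled split with the trained model, then retrain on the union. A minimal numpy sketch with a least-squares line as the stand-in "MOS predictor" (data and model are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: MOS is a noisy linear function of a single feature.
x_labeled = rng.uniform(0, 1, 50)
y_labeled = 1.0 + 4.0 * x_labeled + rng.normal(0, 0.1, 50)
x_unlabeled = rng.uniform(0, 1, 200)

def fit_linear(x, y):
    # Least-squares fit y ~ a*x + b, the simplest possible predictor.
    a, b = np.polyfit(x, y, deg=1)
    return a, b

# Stage 1: train on labeled data, then pseudo-label the unlabeled pool.
a, b = fit_linear(x_labeled, y_labeled)
y_pseudo = a * x_unlabeled + b

# Stage 2: retrain on labeled + pseudo-labeled data combined.
a2, b2 = fit_linear(np.concatenate([x_labeled, x_unlabeled]),
                    np.concatenate([y_labeled, y_pseudo]))
print(a2, b2)  # close to the true slope 4.0 and intercept 1.0
```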
Sources of difficulty
Are unseen categories more difficult?

Category         | Main track     | OOD track
Unseen systems   | yes (17 teams) | yes (7 teams)
Unseen speakers  | no             | N/A
Unseen listeners | yes (6 teams)  | no
Sources of difficulty
Systems with large differences between their training and test set distributions are harder to predict.
Sources of difficulty
Low-quality systems are easy to predict. Middle- and high-quality systems are harder to predict.
Analysis of metrics
Why did we use system-level SRCC as the main metric? Why are there 4 metrics?
There are two main uses of MOS prediction models:
● Comparing many systems (e.g., evaluation in scientific challenges)
→ Ranking-related metrics are preferred (LCC, SRCC, KTAU)
● Evaluating the absolute goodness of a system (e.g., use as an objective function in training)
→ Numerical metrics are preferred (MSE)
Analysis of metrics
We calculated the linear correlation coefficients between all metrics using the main track results.
Correlation coefficients between ranking-based metrics are close to 1.
MSE behaves differently from the other three metrics.
⇒ Future researchers can consider reporting just 2 metrics: MSE + one of {LCC, SRCC, KTAU}.
⇒ It remains important to develop a general metric that reflects both aspects.
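This kind of metric-correlation analysis is just a correlation matrix over per-team metric values. A toy sketch with made-up numbers for four teams (the values are illustrative, not the challenge results):

```python
import numpy as np

# Hypothetical per-team results: rows = teams; columns = MSE, LCC, SRCC, KTAU.
results = np.array([
    [0.30, 0.90, 0.91, 0.78],
    [0.25, 0.92, 0.93, 0.80],
    [0.50, 0.85, 0.84, 0.70],
    [0.40, 0.88, 0.87, 0.74],
])

# 4x4 correlation matrix between the metrics themselves.
corr = np.corrcoef(results.T)
# Ranking-based metrics (LCC/SRCC/KTAU) correlate highly with one another,
# while MSE correlates negatively with them: lower error, higher correlation.
print(np.round(corr, 2))
```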
Conclusions
Revisiting the goals of the VoiceMOS challenge:
⇒ Attracted more than 20 participant teams.
⇒ SSL is very powerful in this task.
⇒ Generalizing to a different listening test is still very hard.
⇒ There will be a 2nd, 3rd, 4th, ... edition!!
Useful materials
The CodaLab challenge page (“VoiceMOS Challenge”) is still open! Datasets are free to download.
The baseline systems are open-sourced!
● https://github.com/nii-yamagishilab/mos-finetune-ssl
● https://github.com/dhimasryan/MOSA-Net-Cross-Domain
● https://github.com/unilight/LDNet
The arXiv paper is available!
● https://arxiv.org/abs/2203.11389

young call girls in Green Park🔝 9953056974 🔝 escort Service
 
welding defects observed during the welding
welding defects observed during the weldingwelding defects observed during the welding
welding defects observed during the welding
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating System
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 

VoiceMOS Challenge 2022 Results and Analysis

  • 1. The VoiceMOS Challenge 2022 Wen-Chin Huang¹, Erica Cooper², Yu Tsao³, Hsin-Min Wang³, Tomoki Toda¹, Junichi Yamagishi² ¹Nagoya University, Japan ²National Institute of Informatics, Japan ³Academia Sinica, Taiwan (Invited talk at the 141st SIG Spoken Language Processing (SIG-SLP) research meeting)
  • 2. Outline I. Introduction II. Challenge description A. Tracks and datasets B. Rules and timeline C. Evaluation metrics D. Baseline systems III. Challenge results A. Participants demographics B. Results, analysis and discussion 1. Comparison of baseline systems 2. Analysis of top systems 3. Sources of difficulty 4. Analysis of metrics IV. Conclusions
  • 4. Speech quality assessment (subjective test): It is important to evaluate speech synthesis systems, e.g., text-to-speech (TTS) and voice conversion (VC). Mean opinion score (MOS) test: rate the quality of individual samples. Drawbacks: 1. Expensive: requires substantial time and money. 2. Context-dependent: scores cannot be meaningfully compared across different listening tests.
  • 5. Speech quality assessment: besides the subjective MOS test, an objective alternative is data-driven MOS prediction (mostly based on deep learning). Drawback: poor generalization ability due to limited training data.
  • 6. Goals of the VoiceMOS challenge *Accepted as an Interspeech 2022 special session! Encourage research in automatic data-driven MOS prediction Compare different approaches using shared datasets and evaluation Focus on the challenging case of generalizing to a separate listening test Promote discussion about the future of this research field
  • 7. Challenge description I. Tracks and datasets II. Rules and timeline III. Evaluation metrics IV. Baseline systems
  • 8. Challenge platform: CodaLab Open-source web-based platform for reproducible machine learning research.
  • 9. Tracks and dataset: Main track The BVCC Dataset ● Samples from 187 different systems all rated together in one listening test ○ Past Blizzard Challenges (text-to-speech synthesis) since 2008 ○ Past Voice Conversion Challenges (voice conversion) since 2016 ○ ESPnet-TTS (implementations of modern TTS systems), 2020 ● Test set contains some unseen systems, unseen listeners, and unseen speakers and is balanced to match the distribution of scores in the training set
  • 10. Tracks and dataset: OOD track. Listening test data from the Blizzard Challenge 2019 ● “Out-of-domain” (OOD): Data from a completely separate listening test ● Chinese-language synthesis from systems submitted to the 2019 Blizzard Challenge ● Test set has some unseen systems and unseen listeners ● Designed to reflect a real-world setting where a small amount of labeled data is available ● Study generalization ability to a different listening test context ● Encourage unsupervised and semi-supervised approaches using unlabeled data. Data split: labeled train set 10%, unlabeled train set 40%, dev set 10%, test set 40%.
  • 12. Rules and timeline (2021/12/5, 2022/2/21, 2022/2/28, 2022/3/7): Training phase ✔Main track train/dev audio+rating ✔OOD track train/dev audio+rating Leaderboard available Need to submit at least once. Evaluation phase ✔Main track test audio ✔OOD track test audio Leaderboard available Max 3 submissions. Break phase: submissions closed; organizers conduct result analysis. Post-evaluation phase ✔Main track test rating ✔OOD track test rating ✔Evaluation results Submission & leaderboard reopen. General rules: external data is permitted; all participants must submit a system description.
  • 13. Evaluation metrics System-level and Utterance-level ● Mean Squared Error (MSE): difference between predicted and actual MOS ● Linear Correlation Coefficient (LCC): a basic correlation measure ● Spearman Rank Correlation Coefficient (SRCC): non-parametric; measures ranking order ● Kendall Tau Rank Correlation (KTAU): more robust to errors import numpy as np import scipy.stats # `true_mean_scores` and `predict_mean_scores` are both 1-d numpy arrays. MSE = np.mean((true_mean_scores - predict_mean_scores)**2) LCC = np.corrcoef(true_mean_scores, predict_mean_scores)[0][1] SRCC = scipy.stats.spearmanr(true_mean_scores, predict_mean_scores)[0] KTAU = scipy.stats.kendalltau(true_mean_scores, predict_mean_scores)[0] Following prior work, we picked system-level SRCC as the main evaluation metric.
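The metrics above are computed at both levels; system-level scores are obtained by first averaging predictions over all utterances belonging to the same system, then applying the same formulas. A minimal sketch of that aggregation step (the system IDs and scores below are made up for illustration):

```python
import numpy as np
import scipy.stats

# Hypothetical per-utterance data: each utterance belongs to a system.
systems = np.array(["A", "A", "B", "B", "C", "C"])
true_utt = np.array([3.2, 3.4, 2.1, 2.3, 4.0, 4.2])
pred_utt = np.array([3.0, 3.5, 2.5, 2.2, 3.8, 4.1])

def system_level(scores, systems):
    # Average utterance-level scores within each system.
    return np.array([scores[systems == s].mean() for s in np.unique(systems)])

true_sys = system_level(true_utt, systems)  # [3.3, 2.2, 4.1]
pred_sys = system_level(pred_utt, systems)  # [3.25, 2.35, 3.95]

# System-level SRCC, the challenge's main metric.
srcc = scipy.stats.spearmanr(true_sys, pred_sys)[0]
print(srcc)  # rankings agree exactly here -> 1.0
```

System-level metrics are typically much higher than utterance-level ones, since averaging over utterances smooths out per-sample prediction noise.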
  • 14. Baseline system: SSL-MOS Fine-tune a self-supervised learning based (SSL) speech model for the MOS prediction task ● Pretrained wav2vec2 ● Simple mean pooling and a linear fine-tuning layer ● Wav2vec2 model parameters are updated during fine-tuning E. Cooper, W.-C. Huang, T. Toda, and J. Yamagishi, “Generalization ability of MOS prediction networks,” in Proc. ICASSP, 2022
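The prediction head on top of the SSL encoder is deliberately simple. A rough numpy sketch of the pooling-and-projection step, with the wav2vec2 encoder itself omitted and random values standing in for its frame-level features (the dimensions and initialization here are illustrative assumptions, not the baseline's actual values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frame-level wav2vec2 features: (num_frames, feature_dim).
frames = rng.normal(size=(120, 768))

# Mean pooling over time collapses the variable-length input to one vector.
pooled = frames.mean(axis=0)  # shape: (768,)

# A single linear layer maps the pooled vector to a scalar MOS estimate.
w = rng.normal(scale=0.01, size=768)
b = 3.0  # bias placed near the middle of the 1-5 MOS scale
mos_hat = pooled @ w + b
print(float(mos_hat))
```

In the actual baseline, the wav2vec2 encoder parameters are updated jointly with this head during fine-tuning, which is what gives SSL-MOS its strong generalization.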
  • 15. Baseline system: MOSA-Net ● Originally developed for noisy speech assessment ● Cross-domain input features: ○ Spectral information ○ Complex features ○ Raw waveform ○ Features extracted from SSL models R. E. Zezario, S.-W. Fu, F. Chen, C.-S. Fuh, H.-M. Wang, and Y. Tsao, “Deep Learning-based Non-Intrusive Multi-Objective Speech Assessment Model with Cross-Domain Features,” arXiv preprint arXiv:2111.02363, 2021.
  • 16. Baseline system: LDNet Listener-dependent modeling ● Specialized model structure and inference method allows making use of multiple ratings per audio sample. ● No external data is used! W.-C. Huang, E. Cooper, J. Yamagishi, and T. Toda, “LDNet: Unified Listener Dependent Modeling in MOS Prediction for Synthetic Speech,” in Proc. ICASSP, 2022
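The listener-dependent inference idea can be sketched as: produce one prediction per known listener identity, then average them to estimate the mean opinion score. The listener-conditioned scores below are made up for illustration:

```python
import numpy as np

# Hypothetical listener-conditioned predictions for one utterance:
# one score per training-set listener identity.
per_listener_scores = np.array([3.1, 3.6, 2.9, 3.4])

# "All listeners" inference: average the listener-dependent predictions
# to approximate the expected mean opinion score.
mos_hat = per_listener_scores.mean()
print(mos_hat)  # 3.25
```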
  • 17. Challenge results I. Participants demographics II. Results, analysis and discussion A. Comparison of baseline systems B. Analysis of top systems C. Sources of difficulty D. Analysis of metrics
  • 18. Participants demographics Number of teams: 22 teams + 3 baselines 14 teams are from academia, 5 are from industry, and 3 are individual participants Main track: 21 teams + 3 baselines OOD track: 15 teams + 3 baselines Baseline systems: ● B01: SSL-MOS ● B02: MOSA-Net ● B03: LDNet
  • 19. Overall evaluation results: main track, OOD track
  • 20. Comparison of baseline systems: main track In terms of system-level SRCC, 11 teams outperformed the best baseline, B01! However, the gap between the best baseline and the top system is not large…
  • 21. Comparison of baseline systems: OOD track In terms of system-level SRCC, only 2 teams outperformed or were on par with B01. The gap is even smaller… Participant feedback: “The baseline was too strong! Hard to get improvement!”
  • 22. Analysis of approaches used ● Main track: finetuning SSL > using SSL features > not using SSL ● OOD track: finetuned SSL models were both the best and worst systems ● Popular approaches: ○ Ensembling (top team in main track; top 2 teams in OOD track) ○ Multi-task learning ○ Use of speech recognizers (top team in OOD track)
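Ensembling, used by the top teams, can be as simple as averaging per-utterance predictions from several independently trained predictors. A minimal sketch (the model outputs below are invented for illustration):

```python
import numpy as np

# Hypothetical MOS predictions from three independently trained models
# for the same four test utterances.
preds = np.array([
    [3.2, 2.8, 4.1, 3.5],  # model 1
    [3.4, 2.6, 4.0, 3.7],  # model 2
    [3.0, 2.9, 4.3, 3.6],  # model 3
])

# Simple mean ensemble; weighted averaging or rank fusion are common variants.
ensemble = preds.mean(axis=0)
print(ensemble)
```

Averaging reduces the variance of any single predictor, which matters most near the middle of the score range where individual models disagree the most.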
  • 23. Analysis of approaches used ● 7 teams used per-listener ratings ● No teams used listener demographics ○ One team used “listener group” ● OOD track: only 3 teams used the unlabeled data: ○ Conducting their own listening test (top team) ○ Task-adaptive pretraining ○ “Pseudo-labeling” the unlabeled data using a trained model
  • 24. Sources of difficulty Are unseen categories more difficult?
  Category | Main track | OOD track
  Unseen systems | yes (17 teams) | yes (6 teams)
  Unseen speakers | no | N/A
  Unseen listeners | yes (7 teams) | no
  • 25. Sources of difficulty Systems with large differences between their training and test set distributions are harder to predict.
  • 26. Sources of difficulty Low-quality systems are easy to predict. Middle- and high-quality systems are harder to predict.
  • 27. Analysis of metrics Why did we use system-level SRCC as the main metric? Why are there 4 metrics? There are two main uses of MOS prediction models: Compare many systems, e.g., evaluation in scientific challenges. → Ranking-related metrics are preferred (LCC, SRCC, KTAU) Evaluate the absolute goodness of a system, e.g., use as an objective function in training. → Numerical metrics are preferred (MSE)
  • 28. Analysis of metrics We calculated the linear correlation coefficients between all metrics using the main track results. Correlation coefficients between ranking-based metrics are close to 1. MSE behaves differently from the other three metrics. ⇒ Future researchers can consider reporting just 2 metrics: MSE + one of {LCC, SRCC, KTAU}. ⇒ Developing a general metric that reflects both aspects remains an important open problem.
  • 30. Conclusions The goals of the VoiceMOS challenge were largely met: ⇒ Attracted more than 20 participating teams. ⇒ SSL is very powerful for this task. ⇒ Generalizing to a different listening test is still very hard. ⇒ There will be a 2nd, 3rd, 4th, ... version!!
  • 31. Useful materials The CodaLab challenge page is still open! Datasets are free to download. VoiceMOS Challenge The baseline systems are open-sourced! ● https://github.com/nii-yamagishilab/mos-finetune-ssl ● https://github.com/dhimasryan/MOSA-Net-Cross-Domain ● https://github.com/unilight/LDNet The arXiv paper is available! ● https://arxiv.org/abs/2203.11389