- Aug 1, 2022: Releasing the system descriptions.
- Jun 13, 2022: Releasing the FFSVC 2022 baseline system!
- Apr 15, 2022: Releasing the FFSVC 2022 evaluation plan and starting the registration.
Welcome to FFSVC 2022! The success of FFSVC2020 shows that more and more researchers are paying attention to the far-field speaker verification task. This year, the challenge again focuses on far-field speaker verification and provides a new far-field development and test set collected from real speakers in complex environments under multiple conditions, e.g., text-dependent, text-independent, and cross-channel enrollment/test. In addition, in real scenarios the in-domain training speech data may be unlabeled, which makes it difficult to fine-tune a pre-trained model. Therefore, a new focus this year is cross-language self-supervised / semi-supervised learning: participants are allowed to use the unlabeled training, development, and supplementary sets of the FFSVC2020 dataset (in Mandarin, in-domain) and the labeled VoxCeleb 1&2 datasets (mostly in English, out-of-domain) to build their models.
This year we focus on the far-field single-channel scenario. The challenge comprises two tasks; both require determining whether two speech samples are from the same speaker:
- Task 1. Fully supervised far-field speaker verification.
- Task 2. Semi-supervised far-field speaker verification.
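For illustration, a verification trial of this kind is commonly scored by comparing fixed-dimensional speaker embeddings. The minimal sketch below (not the challenge baseline; the `verify` helper and the threshold value are our own assumptions) thresholds the cosine similarity between two embeddings:

```python
import numpy as np

def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Decide whether two speaker embeddings belong to the same speaker
    by thresholding their cosine similarity (hypothetical threshold)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    score = float(np.dot(a, b))  # cosine similarity in [-1, 1]
    return score >= threshold
```

In practice the threshold is tuned on a development set, and the raw scores (not the hard decisions) are what metrics such as EER and minDCF are computed from.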
Tasks 1 and 2 are both defined under a fixed training condition: participants may only use the fixed training set to build their speaker verification systems. The fixed training set consists of the following two databases:
Note: Please refer to this website to download the VoxCeleb 1&2 datasets and this website to download the FFSVC 2020 dataset if you do not already have them. In addition, for this challenge we release a supplementary set of FFSVC2020, recorded with the same devices as FFSVC2022.
In Task 1, participants can use both the VoxCeleb 1&2 and FFSVC2020 datasets, with speaker labels, to train a far-field speaker verification system.
In Task 2, in contrast to Task 1, participants cannot use the speaker labels of the FFSVC2020 dataset or the FFSVC2020 supplementary set. We encourage participants to adopt self-supervised or semi-supervised methods to exploit the in-domain unlabeled data.
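As one common semi-supervised recipe (an illustration only, not an official baseline; the `pseudo_label` helper and the choice of k-means are our own assumptions), unlabeled in-domain embeddings can be clustered and the cluster ids used as pseudo speaker labels for fine-tuning:

```python
import numpy as np

def pseudo_label(embeddings: np.ndarray, n_clusters: int, n_iter: int = 20) -> np.ndarray:
    """Assign pseudo speaker labels to unlabeled embeddings with a
    bare-bones k-means; the cluster ids can then serve as training
    targets for fine-tuning (hypothetical helper, not a challenge API)."""
    # Farthest-point initialization keeps this sketch deterministic.
    centers = [embeddings[0]]
    for _ in range(n_clusters - 1):
        dists = np.min([np.linalg.norm(embeddings - c, axis=1) for c in centers], axis=0)
        centers.append(embeddings[dists.argmax()])
    centers = np.stack(centers).astype(float)
    for _ in range(n_iter):
        # Assign each embedding to its nearest center.
        dists = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each non-empty cluster's center as the mean of its members.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = embeddings[labels == k].mean(axis=0)
    return labels
```

Real systems typically replace plain k-means with more robust clustering and iterate between labeling and model retraining, but the idea is the same: turn unlabeled in-domain audio into supervision.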
Using any other speech dataset to train the system is forbidden, but participants are allowed to use public open-source non-speech datasets for data augmentation. Self-supervised pre-trained models such as Wav2Vec and WavLM cannot be used in this challenge.
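For example, additive-noise augmentation with a non-speech recording might look like the following sketch (the `add_noise` helper and its SNR parameterization are our own assumptions, not part of the challenge rules):

```python
import numpy as np

def add_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise recording into a speech waveform at a target SNR in dB."""
    # Tile or trim the noise to match the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

During training, the SNR is usually drawn at random per utterance (e.g., from a range such as 0-20 dB) so the model sees varied noise conditions.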
- April 15th, 2022: Releasing the FFSVC 2022 evaluation plan and starting the registration.
- April 20th, 2022: Opening the submission system and releasing the supplementary/dev/eval sets.
- July 3rd, 2022: Deadline for registration.
- July 10th, 2022: Deadline for results submission.
- July 15th, 2022: Deadline for system description submission.
- July 24th, 2022: Deadline for workshop paper submission.
- Aug 20th, 2022: Workshop paper acceptance notification.
- Sep 17th, 2022: Interspeech 2022 Satellite Workshop.
Prizes will be awarded to the top three teams of each task.
Ming Li, Duke Kunshan University, China
Haizhou Li, National University of Singapore, Singapore
Shrikanth Narayanan, University of Southern California, USA
Hui Bu, AI Shell Foundation, China
Xiaoyi Qin, Duke Kunshan University, China
Yao Shi, Duke Kunshan University, China
- SPEAKIN team: The SpeakIn System for Far-Field Speaker Verification Challenge 2022
- HiMia team: NPU-HC Speaker Verification System for Far-field Speaker Verification Challenge 2022
- Nan7U team: The Nan7U Speaker Verification System for the FFSVC 2022 Challenge
- ZXIC team: ZXIC Speaker Verification System for FFSVC 2022 Challenge
- Evaluation Plan: Far-field Speaker Verification Challenge (FFSVC) 2022: Challenge Evaluation Plan