Yusuf Brima, Ulf Krumnack, Simone Pika and Gunther Heidemann
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data. By designing pretext tasks that exploit statistical regularities, SSL models can capture useful representations that are ...