JunyiPeng00 / SLT22_MultiHead-Factorized-Attentive-Pooling

An attention-based back-end that enables efficient fine-tuning of transformer models for speaker verification
24 stars · Sep 22, 2024 · Updated last year
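Only the one-line description survives on this page, so the following is a minimal sketch of what a multi-head attentive pooling back-end over transformer features can look like in PyTorch. The class name, head count, and layer sizes are illustrative assumptions, not code taken from this repository.

```python
import torch
import torch.nn as nn

class MultiHeadAttentivePooling(nn.Module):
    """Pool frame-level features into one utterance-level embedding.

    Each head learns its own attention distribution over time; the
    per-head weighted means are concatenated. Hyperparameters here
    (head count, hidden size) are illustrative, not taken from the repo.
    """

    def __init__(self, feat_dim: int, num_heads: int = 4, hidden_dim: int = 128):
        super().__init__()
        self.num_heads = num_heads
        # One scoring MLP applied at every frame, emitting one score per head.
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, num_heads),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) frame-level transformer outputs
        w = torch.softmax(self.score(x), dim=1)   # (B, T, H): attention over time per head
        # Weighted mean per head: (B, H, T) @ (B, T, D) -> (B, H, D)
        pooled = torch.bmm(w.transpose(1, 2), x)
        return pooled.flatten(1)                  # (B, H * D) utterance embedding

# Example: pool 200 frames of 768-dim transformer features.
feats = torch.randn(8, 200, 768)
pool = MultiHeadAttentivePooling(feat_dim=768)
print(pool(feats).shape)  # torch.Size([8, 3072])
```

Because only the small pooling head is trained (or trained with a higher learning rate than the transformer), this kind of back-end keeps fine-tuning cheap relative to updating the full model.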

Alternatives and similar repositories for SLT22_MultiHead-Factorized-Attentive-Pooling

Users interested in SLT22_MultiHead-Factorized-Attentive-Pooling are comparing it to the libraries listed below.

