JunyiPeng00 / SLT22_MultiHead-Factorized-Attentive-Pooling

An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification
★ 18 · Updated 3 months ago
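
For orientation, below is a minimal sketch of the multi-head factorized attentive pooling idea the repository name refers to: layer-wise transformer features are combined with two separately learned sets of layer weights (one for keys, one for values), attention weights over time are computed per head from the keys, and the per-head weighted sums of the values are concatenated into a speaker embedding. This is not the repository's actual code; all class names, shapes, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch of multi-head factorized attentive pooling
# over stacked transformer layer outputs. Not the repo's implementation.
import torch
import torch.nn as nn


class MHFAPooling(nn.Module):
    """Pools a stack of transformer layer outputs into one speaker embedding."""

    def __init__(self, n_layers: int, feat_dim: int, head_dim: int = 128,
                 n_heads: int = 8, emb_dim: int = 192):
        super().__init__()
        # Factorization: separate learned layer weights for keys and values.
        self.key_layer_weights = nn.Parameter(torch.zeros(n_layers))
        self.val_layer_weights = nn.Parameter(torch.zeros(n_layers))
        self.key_proj = nn.Linear(feat_dim, head_dim)
        self.val_proj = nn.Linear(feat_dim, head_dim)
        self.heads = nn.Linear(head_dim, n_heads)  # per-head attention logits
        self.out = nn.Linear(head_dim * n_heads, emb_dim)

    def forward(self, layer_feats: torch.Tensor) -> torch.Tensor:
        # layer_feats: (batch, n_layers, time, feat_dim)
        w_k = torch.softmax(self.key_layer_weights, dim=0)
        w_v = torch.softmax(self.val_layer_weights, dim=0)
        k = torch.einsum("l,bltf->btf", w_k, layer_feats)  # (B, T, F)
        v = torch.einsum("l,bltf->btf", w_v, layer_feats)
        k, v = self.key_proj(k), self.val_proj(v)          # (B, T, D)
        att = torch.softmax(self.heads(k), dim=1)          # (B, T, H), over time
        # Per-head weighted sum of values: (B, H, D), then flatten and project.
        pooled = torch.einsum("bth,btd->bhd", att, v)
        return self.out(pooled.flatten(1))


# Example: 13 layers (wav2vec2-base style), 150 frames, 768-dim features.
feats = torch.randn(2, 13, 150, 768)
emb = MHFAPooling(n_layers=13, feat_dim=768)(feats)
print(emb.shape)  # torch.Size([2, 192])
```

Because only this lightweight backend (layer weights, two projections, and the head/output layers) needs task-specific training, the frozen or lightly tuned transformer can be reused across tasks, which is what makes the fine-tuning efficient.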

Alternatives and similar repositories for SLT22_MultiHead-Factorized-Attentive-Pooling:

Users interested in SLT22_MultiHead-Factorized-Attentive-Pooling are comparing it to the libraries listed below.