JunyiPeng00 / SLT22_MultiHead-Factorized-Attentive-Pooling

An attention-based backend that enables efficient fine-tuning of transformer models for speaker verification.
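The repository's own code is not shown here, so as a rough illustration of the general idea behind multi-head attentive pooling over transformer frame-level features, here is a minimal NumPy sketch. The function name, parameter shapes, and the use of weighted mean-plus-std statistics are assumptions for illustration, not the repository's actual implementation:

```python
import numpy as np

def multihead_attentive_pooling(feats, w, num_heads):
    """Hypothetical sketch of multi-head attentive statistics pooling.

    feats: (T, D) frame-level features from a transformer encoder.
    w:     (D, num_heads) learned attention parameters (assumed linear scoring).
    Returns a fixed-size utterance embedding of shape (2*D,).
    """
    T, D = feats.shape
    scores = feats @ w                                  # (T, num_heads)
    # Softmax over the time axis, independently per head
    alpha = np.exp(scores - scores.max(axis=0, keepdims=True))
    alpha /= alpha.sum(axis=0, keepdims=True)
    head_dim = D // num_heads
    heads = feats.reshape(T, num_heads, head_dim)       # split channels across heads
    # Attention-weighted mean and standard deviation per head
    mean = np.einsum('th,thd->hd', alpha, heads)
    var = np.einsum('th,thd->hd', alpha, heads ** 2) - mean ** 2
    std = np.sqrt(np.maximum(var, 1e-8))
    return np.concatenate([mean.ravel(), std.ravel()])  # (2*D,)
```

With all-zero attention parameters the weights are uniform, so the pooled mean reduces to a plain temporal average; training the parameters lets each head emphasize different, speaker-discriminative frames.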
