JunyiPeng00 / SLT22_MultiHead-Factorized-Attentive-Pooling

An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification
19 stars · Updated 9 months ago
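The repository implements multi-head factorized attentive pooling (MHFA), which aggregates frame-level features from a pre-trained transformer into a single speaker embedding. Below is a minimal, hypothetical sketch of that idea for illustration only: class and parameter names (`MultiHeadAttentivePooling`, `feat_dim`, `head_dim`, `emb_dim`) are assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentivePooling(nn.Module):
    """Sketch of multi-head factorized attentive pooling.

    Illustrative re-implementation under assumed hyperparameters;
    not the repository's code.
    """
    def __init__(self, feat_dim=768, num_heads=8, head_dim=32, emb_dim=192):
        super().__init__()
        # Factorized projections: per-head attention logits (keys)
        # and per-head value subspaces, computed separately.
        self.key_proj = nn.Linear(feat_dim, num_heads)
        self.value_proj = nn.Linear(feat_dim, num_heads * head_dim)
        self.out_proj = nn.Linear(num_heads * head_dim, emb_dim)
        self.num_heads = num_heads
        self.head_dim = head_dim

    def forward(self, x):  # x: (batch, time, feat_dim) frame features
        B, T, _ = x.shape
        # Softmax over time gives each head its own pooling weights.
        attn = F.softmax(self.key_proj(x), dim=1)              # (B, T, H)
        v = self.value_proj(x).view(B, T, self.num_heads, self.head_dim)
        # Weighted sum over time, independently per head.
        pooled = torch.einsum('bth,bthd->bhd', attn, v)        # (B, H, D)
        return self.out_proj(pooled.flatten(1))                # (B, emb_dim)

# Example usage with dummy transformer frame features:
frames = torch.randn(4, 200, 768)           # e.g. WavLM-style outputs
embedding = MultiHeadAttentivePooling()(frames)  # -> (4, 192)
```

Because only the lightweight pooling backend is trained, the transformer encoder can stay frozen or be fine-tuned cheaply, which is the efficiency the description refers to.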

Alternatives and similar repositories for SLT22_MultiHead-Factorized-Attentive-Pooling

Users interested in SLT22_MultiHead-Factorized-Attentive-Pooling are comparing it to the libraries listed below.
