joaoantoniocn / AM-SincNet
The Additive Margin SincNet (AM-SincNet) is a new approach to speaker recognition that combines the SincNet neural network architecture with the additive margin softmax (AM-Softmax) loss function. It keeps the SincNet architecture but replaces the final softmax classification layer with an AM-Softmax layer.
☆45 · Updated last year
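The AM-Softmax layer described above computes cosine similarities between L2-normalised speaker embeddings and per-class weight vectors, subtracts an additive margin from the target-class cosine, and scales the result before applying cross-entropy. Below is a minimal PyTorch sketch of such a layer, for illustration only; it is not the repository's code, and the embedding size, class count, scale `s`, and margin `m` values are assumptions (s=30, m=0.35 are common defaults).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive margin softmax (AM-Softmax) classification head and loss (illustrative sketch)."""
    def __init__(self, embedding_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.s = s  # scale applied to the cosine logits
        self.m = m  # additive margin subtracted from the target-class cosine
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_normal_(self.weight)

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class weight vectors.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Subtract the margin m only from the target-class cosine, then scale by s.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).to(cosine.dtype)
        logits = self.s * (cosine - self.m * one_hot)
        return F.cross_entropy(logits, labels)

# Hypothetical usage on top of SincNet-style speaker embeddings (dimensions made up):
criterion = AMSoftmaxLoss(embedding_dim=2048, num_classes=462)
loss = criterion(torch.randn(8, 2048), torch.randint(0, 462, (8,)))
```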
Alternatives and similar repositories for AM-SincNet
Users interested in AM-SincNet are comparing it to the repositories listed below
- Companion repository for the paper "A Comparison of Metric Learning Loss Functions for End-to-End Speaker Verification" published at SLSP… ☆60 · Updated 4 years ago
- VoxSRC Challenge ☆31 · Updated 6 years ago
- The Additive Margin MobileNet1D is a new lightweight deep learning model for speaker recognition which is based on the MobileNetV2 archi… ☆30 · Updated last year
- The codebase for data-driven general-purpose voice activity detection. ☆94 · Updated last year
- Development Toolkit for the VoxCeleb Speaker Recognition Challenge 2020 ☆42 · Updated 4 years ago
- Audio-visual voice activity detection using deep learning ☆49 · Updated 6 years ago
- Implementation of "Efficient Keyword Spotting Using Dilated Convolutions and Gating" ☆36 · Updated 5 years ago
- PyTorch implementation of a self-attentive speaker embedding ☆17 · Updated 5 years ago
- PyTorch implementation of "Generalized End-to-End Loss for Speaker Verification" ☆102 · Updated 6 years ago
- Transformer-based online speech recognition system with TensorFlow 2 ☆26 · Updated 4 years ago
- DCASE2020 Challenge Task 1 baseline system ☆25 · Updated 5 years ago
- Fast SpecAugment code with numpy and scipy ☆31 · Updated 5 years ago
- PyTorch implementation of Meta-Learning for Short Utterance Speaker Recognition with Imbalance Length Pairs (Interspeech, 2020) ☆74 · Updated 4 years ago
- Deep multi-metric learning for text-independent speaker verification ☆24 · Updated 5 years ago
- A better, faster, stronger version of the unbounded interleaved-state recurrent neural network (UIS-RNN) ☆62 · Updated 5 years ago
- Code and instructions for replicating the experiments in the paper "Unified Hypersphere Embedding for Speaker Recognition" ☆31 · Updated 5 years ago
- Augmentation adversarial training for self-supervised speaker recognition ☆78 · Updated 3 years ago
- Discriminative Neural Clustering for Speaker Diarisation ☆78 · Updated 3 years ago
- Tensor2tensor experiment with SpecAugment ☆46 · Updated 6 years ago
- Audio activity detector based on per-channel energy normalization (PCEN) ☆29 · Updated 6 years ago
- GPU-accelerated implementation of i-vector extractor training using PyTorch. Requires Kaldi for feature extraction and UBM training. An e… ☆64 · Updated 5 years ago
- ☆60 · Updated 4 years ago
- This Python code performs efficient speech reverberation starting from a dataset of close-talking speech signals and a collection of a… ☆95 · Updated 5 years ago
- ☆26 · Updated 3 years ago
- Jupyter notebook for DCASE 2020 challenge Task 1 ☆20 · Updated 5 years ago
- Audio data augmentation examples ☆34 · Updated 7 years ago
- Author's repository for reproducing DcaseNet, an integrated pre-trained DNN that performs acoustic scene classification, audio tagging, a… ☆41 · Updated 3 years ago
- VoxCeleb1 i-vector based speaker recognition system ☆43 · Updated 7 years ago
- DropClass and DropAdapt - repository for the paper accepted to Speaker Odyssey 2020 ☆22 · Updated 4 years ago
- Implementation of the paper "Keyword Transformer: A Self-Attention Model for Keyword Spotting" ☆23 · Updated 4 years ago