TaoRuijie / TalkNet-ASD
ACM MM 2021: 'Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection'
☆359 · Updated last year
Alternatives and similar repositories for TalkNet-ASD:
Users interested in TalkNet-ASD are comparing it to the libraries listed below.
- The repository for IEEE CVPR 2023 (A Light Weight Model for Active Speaker Detection) ☆122 · Updated last week
- Audio-Visual Speech Separation with Cross-Modal Consistency ☆228 · Updated last year
- Out of time: automated lip sync in the wild ☆740 · Updated last year
- Audio-Visual Active Speaker Detection with PyTorch on AVA-ActiveSpeaker dataset ☆60 · Updated 3 years ago
- Official Implementation of Visual Transformer Pooling for Lip reading ☆40 · Updated 2 years ago
- Disentangled Speech Embeddings using Cross-Modal Self-Supervision ☆159 · Updated 4 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper ☆224 · Updated last year
- A collection of datasets for the purpose of emotion recognition/detection in speech ☆320 · Updated 6 months ago
- Code for the Active Speakers in Context paper (CVPR 2020) ☆54 · Updated 3 years ago
- Visual Speech Recognition for Multiple Languages ☆394 · Updated last year
- ☆160 · Updated 8 months ago
- [ACL 2024] Official PyTorch code for extracting features and training downstream models with emotion2vec: Self-Supervised Pre-Training fo… ☆777 · Updated 3 months ago
- A self-supervised learning framework for audio-visual speech ☆887 · Updated last year
- VGGSound: A Large-scale Audio-Visual Dataset ☆309 · Updated 3 years ago
- Official implementation for the paper Exploring Wav2vec 2.0 fine-tuning for improved speech emotion recognition ☆148 · Updated 3 years ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ☆105 · Updated last year
- The GitHub page for publicly available emotional speech data ☆345 · Updated 3 years ago
- Auto-AVSR: Lip-Reading Sentences Project ☆323 · Updated 2 months ago
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D) ☆411 · Updated last week
- A pipeline to read lips and generate speech for the read content, i.e. Lip to Speech Synthesis ☆83 · Updated 3 years ago
- The PyTorch code and model in "Learn an Effective Lip Reading Model without Pains" (https://arxiv.org/abs/2011.07557), which reaches the… ☆159 · Updated last year
- A curated list of awesome voice conversion projects and communities ☆227 · Updated 2 months ago
- Deep-Learning-Based Audio-Visual Speech Enhancement and Separation ☆205 · Updated last year
- Implementation for the ECCV 2020 paper "Self-Supervised Learning of audio-visual objects from video" ☆113 · Updated 4 years ago
- [INTERSPEECH 2022] This dataset is designed for multi-modal speaker diarization and lip-speech synchronization in the wild ☆50 · Updated last year
- Official repository for the paper VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices ☆64 · Updated 11 months ago
- Official implementation of the INTERSPEECH 2021 paper "Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings" ☆129 · Updated 2 months ago
- ICASSP'22 Training Strategies for Improved Lip-Reading; ICASSP'21 Towards Practical Lipreading with Distilled and Efficient Models; ICASS… ☆409 · Updated last year
- [ICASSP 2023] Official TensorFlow implementation of "Temporal Modeling Matters: A Novel Temporal Emotional Modeling Approach for Speech E… ☆168 · Updated 10 months ago
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV 2020] ☆257 · Updated 8 months ago