krantiparida / awesome-audio-visual
A curated list of papers and datasets across various areas of audio-visual processing.
☆715 · Updated last year
Alternatives and similar repositories for awesome-audio-visual:
Users interested in awesome-audio-visual are comparing it to the libraries listed below.
- VGGSound: A Large-scale Audio-Visual Dataset ☆314 · Updated 3 years ago
- A curated list of audio-visual learning methods and datasets. ☆255 · Updated 4 months ago
- Audio-Visual Speech Separation with Cross-Modal Consistency ☆230 · Updated last year
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder". ☆256 · Updated last year
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆581 · Updated last year
- Implementation for the ECCV 2020 paper "Self-Supervised Learning of Audio-Visual Objects from Video". ☆113 · Updated 4 years ago
- Audio-Visual Event Localization in Unconstrained Videos, ECCV 2018 ☆181 · Updated 4 years ago
- Deep-Learning-Based Audio-Visual Speech Enhancement and Separation ☆207 · Updated 2 years ago
- PyTorch port of Google Research's VGGish model used for extracting audio features. ☆386 · Updated 3 years ago
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection". ☆405 · Updated 8 months ago
- Efficient Training of Audio Transformers with Patchout ☆332 · Updated last year
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer". ☆381 · Updated 2 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆227 · Updated last year
- Learning audio concepts from natural language supervision ☆547 · Updated 7 months ago
- Co-Separating Sounds of Visual Objects (ICCV 2019) ☆94 · Updated last year
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer". ☆1,270 · Updated last year
- Listen to Look: Action Recognition by Previewing Audio (CVPR 2020) ☆129 · Updated 3 years ago
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D) ☆422 · Updated last month
- INTERSPEECH 2023-2024 Papers: A complete collection of influential and exciting research papers from the INTERSPEECH 2023-24 conference. … ☆666 · Updated 4 months ago
- Source code for "Taming Visually Guided Sound Generation" (oral at BMVC 2021). ☆360 · Updated 9 months ago
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand". ☆429 · Updated last year
- A self-supervised learning framework for audio-visual speech ☆897 · Updated last year
- ICASSP 2023-2024 Papers: A complete collection of influential and exciting research papers from the ICASSP 2023-24 conferences. Explore t… ☆457 · Updated 3 months ago
- Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP". ☆341 · Updated 3 years ago
- Codebase and dataset for the paper "Learning to Localize Sound Source in Visual Scenes". ☆90 · Updated 4 months ago
- BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation ☆211 · Updated last year
- Script for converting the pretrained VGGish model provided with AudioSet from TensorFlow to PyTorch, along with a basic smoke test. ☆87 · Updated 5 years ago
- The AVA dataset densely annotates 80 atomic visual actions in 351k movie clips with actions localized in space and time, resulting in 1.6… ☆329 · Updated 3 years ago
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043). ☆811 · Updated 3 years ago
- SoundNet and sound source localization ☆11 · Updated 4 years ago
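Several of the transformer entries above (AST, SSAST, PaSST, HTS-AT) share one core idea: treat a log-mel spectrogram as an image and split it into fixed-size patches that become transformer tokens. A minimal NumPy sketch of that patchification step follows; the function name `patchify` and the non-overlapping 16×16 tiling are illustrative simplifications (AST itself uses overlapping patches with a smaller stride):

```python
import numpy as np

def patchify(spec: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split a (freq, time) spectrogram into non-overlapping patch x patch
    tiles and flatten each tile into a token vector."""
    f, t = spec.shape
    # crop so both axes divide evenly into patches
    spec = spec[: f - f % patch, : t - t % patch]
    nf, nt = spec.shape[0] // patch, spec.shape[1] // patch
    tiles = spec.reshape(nf, patch, nt, patch)
    tiles = tiles.transpose(0, 2, 1, 3)       # -> (nf, nt, patch, patch)
    return tiles.reshape(-1, patch * patch)   # -> (num_tokens, patch*patch)

# Example: a 128-mel x 100-frame log-mel spectrogram -> 8 x 6 = 48 tokens
spec = np.random.randn(128, 100)
tokens = patchify(spec)
print(tokens.shape)  # (48, 256)
```

Each token would then get a linear projection plus a positional embedding before entering the transformer, exactly as in the vision-transformer recipe these audio models adapt.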
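Likewise, Wav2CLIP, AudioCLIP, CAV-MAE, and the natural-language-supervision entry all rely on a CLIP-style contrastive objective that pulls matched audio/visual (or audio/text) embeddings together in a shared space. Below is a hedged NumPy sketch of the symmetric InfoNCE loss those models build on; the function name and the temperature value are illustrative, not taken from any of the repos:

```python
import numpy as np

def info_nce(audio_emb: np.ndarray, visual_emb: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric InfoNCE over a batch of paired embeddings: row i of each
    matrix is a positive pair; every other row in the batch is a negative."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (B, B) scaled cosine similarities

    def xent(l: np.ndarray) -> float:
        # cross-entropy with the matched pair (diagonal) as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return float(-np.mean(np.diag(logp)))

    # average the audio->visual and visual->audio directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))        # batch of 4 pairs, 8-dim embeddings
aligned = info_nce(emb, emb)         # perfectly matched pairs -> small loss
shuffled = info_nce(emb, emb[::-1])  # mismatched pairs -> larger loss
```

The same structure generalizes to any two modalities; the models above differ mainly in which encoders produce `audio_emb` and `visual_emb` and in whether the contrastive term is combined with a reconstruction objective (as in CAV-MAE).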