krantiparida / awesome-audio-visual
A curated list of different papers and datasets in various areas of audio-visual processing
☆745Updated last year
Alternatives and similar repositories for awesome-audio-visual
Users interested in awesome-audio-visual often compare it to the repositories listed below:
- VGGSound: A Large-scale Audio-Visual Dataset☆334Updated 4 years ago
- A curated list of audio-visual learning methods and datasets.☆273Updated 10 months ago
- Pytorch port of Google Research's VGGish model used for extracting audio features.☆401Updated 3 years ago
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder".☆271Updated last year
- Audio-Visual Event Localization in Unconstrained Videos, ECCV 2018☆193Updated 4 years ago
- Audio-Visual Speech Separation with Cross-Modal Consistency☆236Updated 2 years ago
- This repo hosts the code and models of "Masked Autoencoders that Listen".☆622Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043)☆848Updated 4 years ago
- Implementation for ECCV20 paper "Self-Supervised Learning of audio-visual objects from video"☆113Updated 4 years ago
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer".☆1,360Updated 2 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper.☆238Updated last year
- ACM MM 2021: 'Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection'☆422Updated 2 years ago
- Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and T…☆622Updated 8 months ago
- Listen to Look: Action Recognition by Previewing Audio (CVPR 2020)☆129Updated 4 years ago
- Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing, ECCV, 2020. (Spotlight)☆90Updated last year
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer".☆402Updated 3 years ago
- Codebase and Dataset for the paper: Learning to Localize Sound Source in Visual Scenes☆93Updated 10 months ago
- Deep-Learning-Based Audio-Visual Speech Enhancement and Separation☆214Updated 2 years ago
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection"☆439Updated last month
- Localizing Visual Sounds the Hard Way☆82Updated 3 years ago
- Codebase for ECCV18 "The Sound of Pixels"☆386Updated 3 years ago
- Co-Separating Sounds of Visual Objects (ICCV 2019)☆97Updated 2 years ago
- Efficient Training of Audio Transformers with Patchout☆353Updated last year
- A self-supervised learning framework for audio-visual speech☆941Updated last year
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D)☆473Updated 7 months ago
- Learning audio concepts from natural language supervision☆604Updated last year
- Script for converting the pretrained VGGish model provided with AudioSet from TensorFlow to PyTorch, along with a basic smoke test.☆87Updated 6 years ago
- Code for Discriminative Sounding Objects Localization (NeurIPS 2020)☆59Updated 3 years ago
- Download the VGGSound dataset☆22Updated 3 years ago
- Code Release for the paper "TriBERT: Full-body Human-centric Audio-visual Representation Learning for Visual Sound Separation" in NeurIPS…☆14Updated 3 years ago
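Many of the repositories above (VGGish, AST, SSAST, HTS-AT, PaSST) operate on log-mel spectrogram features rather than raw waveforms. A minimal NumPy-only sketch of that front end, assuming a 16 kHz mono signal; the frame, hop, and mel-bin parameters are illustrative defaults, not the exact settings of any one repo:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(signal, sr=16000, n_fft=400, hop=160, n_mels=64):
    """Frame the signal, take a magnitude STFT, apply a triangular
    mel filterbank, then log-compress. Returns (n_frames, n_mels)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    spec = np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, n_fft//2 + 1)

    # Mel filterbank: n_mels triangles with peaks evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    return np.log(spec @ fbank.T + 1e-6)  # small offset avoids log(0)

# One second of noise -> a (frames x mel-bins) "image" that transformer
# models such as AST treat like a 2-D input.
x = np.random.default_rng(0).standard_normal(16000)
feat = log_mel_spectrogram(x)
```

The resulting 2-D array is what spectrogram-transformer codebases patchify and feed to attention layers, analogous to image patches in a ViT.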