krantiparida / awesome-audio-visual
A curated list of different papers and datasets in various areas of audio-visual processing
☆698 · Updated last year
Alternatives and similar repositories for awesome-audio-visual:
Users interested in awesome-audio-visual are comparing it to the repositories listed below.
- A curated list of audio-visual learning methods and datasets. ☆252 · Updated 3 months ago
- VGGSound: A Large-scale Audio-Visual Dataset ☆309 · Updated 3 years ago
- Audio-Visual Speech Separation with Cross-Modal Consistency ☆228 · Updated last year
- Deep-Learning-Based Audio-Visual Speech Enhancement and Separation ☆205 · Updated last year
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆574 · Updated 11 months ago
- PyTorch port of Google Research's VGGish model used for extracting audio features (a minimal usage sketch appears after this list). ☆384 · Updated 3 years ago
- Implementation for ECCV20 paper "Self-Supervised Learning of audio-visual objects from video" ☆113 · Updated 4 years ago
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder". ☆248 · Updated last year
- Audio-Visual Event Localization in Unconstrained Videos, ECCV 2018 ☆179 · Updated 3 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆223 · Updated last year
- Listen to Look: Action Recognition by Previewing Audio (CVPR 2020) ☆129 · Updated 3 years ago
- Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and T… ☆583 · Updated last month
- Co-Separating Sounds of Visual Objects (ICCV 2019) ☆94 · Updated last year
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer". ☆1,246 · Updated last year
- Codebase and Dataset for the paper: Learning to Localize Sound Source in Visual Scenes ☆88 · Updated 3 months ago
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer". ☆378 · Updated 2 years ago
- A self-supervised learning framework for audio-visual speech ☆887 · Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) ☆803 · Updated 3 years ago
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI. ☆661 · Updated 7 months ago
- A curated list of deep learning resources for video-text retrieval. ☆613 · Updated last year
- SoundNet and sound source localization ☆11 · Updated 4 years ago
- INTERSPEECH 2023-2024 Papers: A complete collection of influential and exciting research papers from the INTERSPEECH 2023-24 conference. … ☆661 · Updated 3 months ago
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection" ☆393 · Updated 7 months ago
- Script for converting the pretrained VGGish model provided with AudioSet from TensorFlow to PyTorch, along with a basic smoke test. ☆86 · Updated 5 years ago
- Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing, ECCV, 2020. (Spotlight) ☆85 · Updated 8 months ago
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D) ☆409 · Updated this week
- Audio Visual Instance Discrimination with Cross-Modal Agreement ☆128 · Updated 3 years ago
- An Audio Language model for Audio Tasks ☆301 · Updated 11 months ago
- Official implementation of the paper WAV2CLIP: LEARNING ROBUST AUDIO REPRESENTATIONS FROM CLIP ☆338 · Updated 3 years ago
- Efficient Training of Audio Transformers with Patchout ☆326 · Updated last year
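
As a quick orientation for the audio-feature-extraction entries above (the PyTorch VGGish port in particular), here is a minimal sketch of pulling frame-level VGGish embeddings via torch.hub. The hub name `harritaylor/torchvggish`, the local file `example.wav`, and the exact output shape are assumptions for illustration, not details taken from this listing.

```python
import torch

# Minimal sketch (assumed API): load a community PyTorch port of VGGish
# through torch.hub. The repo name "harritaylor/torchvggish" is an
# assumption and may differ from the port referenced in the list above.
model = torch.hub.load("harritaylor/torchvggish", "vggish")
model.eval()

# VGGish operates on log-mel patches cut from ~0.96 s audio frames; this
# port accepts a WAV path and returns one 128-D embedding per frame.
embeddings = model.forward("example.wav")  # hypothetical local file
print(embeddings.shape)  # expected: (num_frames, 128)
```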