YuanGongND / cav-mae
Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder".
☆260Updated last year
Alternatives and similar repositories for cav-mae
Users interested in cav-mae are comparing it to the repositories listed below.
- VGGSound: A Large-scale Audio-Visual Dataset☆321Updated 3 years ago
- This repo hosts the code and models of "Masked Autoencoders that Listen".☆593Updated last year
- This repo hosts the code and model of MAViL.☆43Updated last year
- A curated list of audio-visual learning methods and datasets.☆263Updated 6 months ago
- Scripts for downloading AudioSet☆79Updated 7 years ago
- Official codebase for "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling".☆32Updated 10 months ago
- [IJCAI 2024] EAT: Self-Supervised Pre-Training with Efficient Audio Transformer☆161Updated last month
- av-SALMONN: Speech-Enhanced Audio-Visual Large Language Models☆13Updated last year
- Official implementation of RAVEn (ICLR 2023) and BRAVEn (ICASSP 2024)☆66Updated 4 months ago
- This repository contains metadata of the WavCaps dataset and code for downstream tasks.☆234Updated 11 months ago
- Vision Transformers are Parameter-Efficient Audio-Visual Learners☆99Updated last year
- Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations From CLIP" (https://arxiv.org/abs/2106.13043)☆346Updated 3 years ago
- Code for the IEEE Signal Processing Letters 2022 paper "UAVM: Towards Unifying Audio and Visual Models".☆55Updated 2 years ago
- [AAAI 2023 (Oral)] CrissCross: Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity☆25Updated last year
- Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing, ECCV, 2020. (Spotlight)☆88Updated 11 months ago
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication"☆48Updated 4 months ago
- Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021)☆363Updated 11 months ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral)☆125Updated 2 years ago
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer".☆387Updated 2 years ago
- ☆65Updated 2 years ago
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps☆175Updated 4 months ago
- Official Implementation of the work "Audio Mamba: Bidirectional State Space Model for Audio Representation Learning"☆144Updated 7 months ago
- Emotion Recognition ToolKit (ERTK): tools for emotion recognition, including dataset processing, feature extraction, and experiments.☆56Updated 7 months ago
- Official Codebase of "Localizing Visual Sounds the Easy Way" (ECCV 2022)☆34Updated 2 years ago
- Download the VGGSound dataset☆22Updated 3 years ago
- [Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition☆111Updated 8 months ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition☆45Updated last year
- [TPAMI2024] Codes and Models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset☆292Updated 6 months ago
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043)☆820Updated 3 years ago
- MUSIC-AVQA, CVPR2022 (ORAL)☆85Updated 2 years ago