AndreyGuzhov / AudioCLIP
Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043)
☆857 · Updated 4 years ago
Alternatives and similar repositories for AudioCLIP
Users interested in AudioCLIP are comparing it to the libraries listed below.
- Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP" ☆356 · Updated 3 years ago
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆635 · Updated last year
- Source code for "Taming Visually Guided Sound Generation" (Oral at BMVC 2021) ☆371 · Updated last year
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder". ☆282 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆719 · Updated 2 years ago
- Learning audio concepts from natural language supervision ☆623 · Updated last year
- Contrastive Language-Audio Pretraining ☆1,945 · Updated 7 months ago
- Audio dataset for training CLAP and other models ☆723 · Updated last year
- VGGSound: A Large-scale Audio-Visual Dataset ☆346 · Updated 4 years ago
- A curated list of papers and datasets across various areas of audio-visual processing ☆758 · Updated last year
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer". ☆1,394 · Updated 2 years ago
- [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation ☆450 · Updated last year
- PyTorch port of Google Research's VGGish model, used for extracting audio features. ☆405 · Updated 4 years ago
- A self-supervised learning framework for audio-visual speech ☆963 · Updated 2 years ago
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer". ☆410 · Updated 3 years ago
- Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch ☆549 · Updated 2 years ago
- BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation ☆225 · Updated 2 years ago
- Efficient Training of Audio Transformers with Patchout ☆363 · Updated last year
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆305 · Updated last year
- 🔊 Repository for our NAACL-HLT 2019 paper: AudioCaps ☆200 · Updated 2 months ago
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆370 · Updated last year
- ☆1,064 · Updated last year
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆322 · Updated 6 months ago
- A curated list of audio-visual learning methods and datasets. ☆279 · Updated last year
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection" ☆452 · Updated 3 months ago
- Official PyTorch implementation of Contrastive Learning of Musical Representations ☆335 · Updated last year
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand". ☆464 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆717 · Updated 3 years ago
- An audio language model for audio tasks ☆318 · Updated last year
- ACM MM 2021: "Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection" ☆434 · Updated 2 years ago
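The common thread across AudioCLIP, Wav2CLIP, and CLAP above is the symmetric CLIP-style contrastive (InfoNCE) objective over paired embeddings. Below is a minimal NumPy sketch of that loss, not code from any of these repositories: the function name `clip_contrastive_loss` and the random embeddings are illustrative stand-ins for real encoder outputs.

```python
import numpy as np

def clip_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Hypothetical sketch: symmetric cross-entropy over the
    audio-text cosine-similarity matrix, as in CLIP-style training."""
    # L2-normalize so dot products become cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))  # matching pairs lie on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the audio->text and text->audio directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
audio = rng.normal(size=(4, 16))  # stand-in audio embeddings
text = rng.normal(size=(4, 16))   # stand-in text embeddings
loss = clip_contrastive_loss(audio, text)
```

Perfectly aligned pairs (identical audio and text embeddings) drive the loss toward zero, while uncorrelated embeddings keep it near log(batch size); the repositories differ mainly in which encoders produce the two embedding matrices.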