AndreyGuzhov / AudioCLIP
Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043)
☆820 · Updated 3 years ago
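For orientation, here is a minimal, hypothetical PyTorch sketch of the idea the paper title describes: embedding audio, images, and text into one shared space and scoring matches with scaled cosine similarities. This is not the repository's actual API; the function name, shapes, and loss wiring are illustrative assumptions only.

```python
# Hypothetical sketch of a tri-modal contrastive objective (AudioCLIP-style).
# Encoder outputs are stand-ins; nothing here mirrors the repo's real code.
import torch
import torch.nn.functional as F

def contrastive_logits(audio_emb, image_emb, text_emb, logit_scale=100.0):
    # L2-normalize each modality so dot products become cosine similarities.
    a = F.normalize(audio_emb, dim=-1)
    i = F.normalize(image_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrices between modalities (batch x batch).
    return {
        "audio_text": logit_scale * a @ t.T,
        "audio_image": logit_scale * a @ i.T,
        "image_text": logit_scale * i @ t.T,
    }

# Toy usage: random embeddings standing in for encoder outputs.
batch, dim = 4, 512
logits = contrastive_logits(torch.randn(batch, dim),
                            torch.randn(batch, dim),
                            torch.randn(batch, dim))
# A symmetric cross-entropy pulls matching pairs (the diagonal) together
# and pushes mismatched pairs apart, for every pair of modalities.
labels = torch.arange(batch)
loss = sum(F.cross_entropy(l, labels) + F.cross_entropy(l.T, labels)
           for l in logits.values()) / (2 * len(logits))
```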
Alternatives and similar repositories for AudioCLIP
Users interested in AudioCLIP are comparing it to the repositories listed below
- Official implementation of the paper WAV2CLIP: LEARNING ROBUST AUDIO REPRESENTATIONS FROM CLIP ☆346 · Updated 3 years ago
- This repo hosts the code and models of "Masked Autoencoders that Listen". ☆593 · Updated last year
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder". ☆260 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆713 · Updated last year
- Contrastive Language-Audio Pretraining ☆1,703 · Updated last month
- Audio Dataset for training CLAP and other models ☆688 · Updated last year
- Source code for "Taming Visually Guided Sound Generation" (Oral at the BMVC 2021) ☆363 · Updated 11 months ago
- Learning audio concepts from natural language supervision ☆567 · Updated 9 months ago
- VGGSound: A Large-scale Audio-Visual Dataset ☆321 · Updated 3 years ago
- [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation ☆433 · Updated last year
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer". ☆1,303 · Updated 2 years ago
- Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in Pytorch ☆549 · Updated 2 years ago
- Official implementation of VQ-Diffusion ☆945 · Updated last year
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆769 · Updated 2 years ago
- This toolbox aims to unify audio generation model evaluation for easier comparison. ☆347 · Updated 8 months ago
- A curated list of different papers and datasets in various areas of audio-visual processing ☆734 · Updated last year
- Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch ☆533 · Updated last year
- The source code of our paper "Diffsound: discrete diffusion model for text-to-sound generation" ☆358 · Updated last year
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer". ☆387 · Updated 2 years ago
- This repository contains metadata of the WavCaps dataset and code for downstream tasks. ☆234 · Updated 11 months ago
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection" ☆415 · Updated 10 months ago
- ☆392 · Updated 5 months ago
- DiffWave is a fast, high-quality neural vocoder and waveform synthesizer. ☆844 · Updated last year
- [ACM MM 2021 Best Paper Award] Video Background Music Generation with Controllable Music Transformer ☆313 · Updated 2 weeks ago
- Official PyTorch implementation of Contrastive Learning of Musical Representations ☆326 · Updated 11 months ago
- Efficient Training of Audio Transformers with Patchout ☆339 · Updated last year
- A curated list of audio-visual learning methods and datasets. ☆263 · Updated 6 months ago
- Pytorch port of Google Research's VGGish model used for extracting audio features. ☆392 · Updated 3 years ago
- An Audio Language model for Audio Tasks ☆309 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆657 · Updated 2 years ago