AndreyGuzhov / AudioCLIP
Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043)
☆803, updated 3 years ago
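AudioCLIP aligns audio, image, and text in a single CLIP-style embedding space, so any pair of modalities can be compared by cosine similarity. A minimal sketch of the zero-shot retrieval step that makes this useful, using random vectors in place of real encoder outputs (all names here are illustrative, not the repository's API):

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs: 4 text-label embeddings and 1 audio-clip embedding,
# all in the same 512-dim shared space (real models produce these via separate towers).
text_emb = normalize(rng.normal(size=(4, 512)))
audio_emb = normalize(rng.normal(size=(1, 512)))

# Zero-shot classification: pick the label whose embedding is closest to the audio clip.
logits = 100.0 * audio_emb @ text_emb.T          # temperature-scaled cosine similarities
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(probs.argmax())
print(pred, probs.shape)
```

The same dot-product compares audio against images or images against text, which is what distinguishes a tri-modal model like AudioCLIP from audio-text-only models such as CLAP.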
Alternatives and similar repositories for AudioCLIP:
Users interested in AudioCLIP are comparing it to the repositories listed below.
- This repo hosts the code and models of "Masked Autoencoders that Listen". (☆574, updated 11 months ago)
- Official implementation of the paper "Wav2CLIP: Learning Robust Audio Representations from CLIP". (☆338, updated 3 years ago)
- Contrastive Language-Audio Pretraining. (☆1,570, updated 4 months ago)
- Audio dataset for training CLAP and other models. (☆671, updated last year)
- Source code for "Taming Visually Guided Sound Generation" (Oral at BMVC 2021). (☆358, updated 8 months ago)
- Code and pretrained models for the ICLR 2023 paper "Contrastive Audio-Visual Masked Autoencoder". (☆248, updated last year)
- Learning audio concepts from natural language supervision. (☆537, updated 6 months ago)
- A concise but complete implementation of CLIP with various experimental improvements from recent papers. (☆708, updated last year)
- VGGSound: A Large-Scale Audio-Visual Dataset. (☆306, updated 3 years ago)
- Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer". (☆378, updated 2 years ago)
- [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation. (☆417, updated 9 months ago)
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer". (☆1,246, updated last year)
- Efficient Training of Audio Transformers with Patchout. (☆326, updated last year)
- The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection". (☆393, updated 7 months ago)
- A curated list of papers and datasets across various areas of audio-visual processing. (☆698, updated last year)
- PyTorch port of Google Research's VGGish model, used for extracting audio features. (☆384, updated 3 years ago)
- Code release for "SLIP: Self-supervision Meets Language-Image Pre-training". (☆762, updated 2 years ago)
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand". (☆419, updated 11 months ago)
- Source code for the paper "Diffsound: Discrete Diffusion Model for Text-to-Sound Generation". (☆357, updated last year)
- Metadata for the WavCaps dataset and code for downstream tasks. (☆218, updated 7 months ago)
- [TPAMI 2024] Code and models for "VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset". (☆283, updated 3 months ago)
- Large-scale text-video dataset of 10 million captioned short videos. (☆627, updated 7 months ago)
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch. (☆1,235, updated 2 years ago)
- Easily create large video datasets from video URLs. (☆586, updated 7 months ago)
- An audio language model for audio tasks. (☆301, updated 11 months ago)
- Official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval". (☆926, updated 11 months ago)
- Implementation of zero-shot image-to-text generation for visual-semantic arithmetic. (☆273, updated 2 years ago)
- Official PyTorch implementation of "Contrastive Learning of Musical Representations". (☆318, updated 8 months ago)
- Code release for "Learning Video Representations from Large Language Models"
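Several of the repositories above (AudioCLIP itself, Wav2CLIP, CLAP, and the musical-representation work) train with a symmetric contrastive (InfoNCE) objective over batches of paired examples. A rough numpy sketch of that loss, assuming pre-computed unit-norm embeddings; function names and the temperature value are illustrative, not any repository's actual API:

```python
import numpy as np

def info_nce_symmetric(audio, text, temperature=0.07):
    # Pairwise cosine similarities between every audio/text pair in the batch;
    # the matched (positive) pairs sit on the diagonal.
    logits = (audio @ text.T) / temperature

    def xent_diag(l):
        # Numerically stable log-softmax per row, then negative log-likelihood
        # of the diagonal (correct) entries.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(log_probs).mean()

    # Cross-entropy in both directions (audio->text and text->audio), averaged.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

def unit(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
a = unit(rng.normal(size=(8, 128)))   # stand-in audio embeddings
t = unit(rng.normal(size=(8, 128)))   # stand-in text embeddings
loss = info_nce_symmetric(a, t)
print(loss)
```

The loss is minimized when each clip's embedding is closer to its own caption than to every other caption in the batch, which is what produces the shared space that the zero-shot retrieval repos above exploit.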