LAION-AI / CLAP
Contrastive Language-Audio Pretraining
☆1,945 · Updated 7 months ago
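CLAP learns a joint embedding space for audio and text with a symmetric contrastive objective, so matched audio-text pairs score higher than mismatched ones. As a rough illustration of that objective only (not the repository's actual training code; the encoder outputs below are placeholder random tensors and the temperature value is assumed), here is a minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

batch, dim = 8, 512
# Placeholders standing in for the audio and text tower outputs of a CLAP-style model.
audio_embed = F.normalize(torch.randn(batch, dim), dim=-1)
text_embed = F.normalize(torch.randn(batch, dim), dim=-1)

logit_scale = torch.tensor(1 / 0.07)                 # temperature; learnable in real training
logits = logit_scale * audio_embed @ text_embed.t()  # pairwise cosine similarities
labels = torch.arange(batch)                         # matched audio-text pairs lie on the diagonal

# Symmetric InfoNCE: audio-to-text and text-to-audio cross-entropy, averaged.
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
print(f"contrastive loss: {loss.item():.4f}")
```

At inference time the same embedding spaces are typically used for zero-shot retrieval or classification by ranking cosine similarities between an audio clip and a set of candidate text prompts.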
Alternatives and similar repositories for CLAP
Users interested in CLAP are comparing it to the libraries listed below.
- Audio Dataset for training CLAP and other models · ☆723 · Updated last year
- Learning audio concepts from natural language supervision · ☆621 · Updated last year
- Audio generation using diffusion models, in PyTorch · ☆2,089 · Updated 2 years ago
- Official PyTorch implementation of BigVGAN (ICLR 2023) · ☆1,158 · Updated last year
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in PyTorch · ☆2,612 · Updated 11 months ago
- This repo hosts the code and models of "Masked Autoencoders that Listen" · ☆635 · Updated last year
- Source code for models described in the paper "AudioCLIP: Extending CLIP to Image, Text and Audio" (https://arxiv.org/abs/2106.13043) · ☆857 · Updated 4 years ago
- State-of-the-art audio codec with 90x compression factor. Supports 44.1kHz, 24kHz, and 16kHz mono/stereo audio · ☆1,662 · Updated 3 weeks ago
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training" · ☆421 · Updated 7 months ago
- Apply diffusion models using the Hugging Face diffusers package to synthesize music instead of images · ☆783 · Updated last year
- AI Audio Datasets (AI-ADS) 🎵, including Speech, Music, and Sound Effects, which can provide training data for Generative AI, AIGC, AI mo… · ☆875 · Updated 5 months ago
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI · ☆706 · Updated 2 months ago
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis · ☆1,028 · Updated last year
- Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer" · ☆1,394 · Updated 2 years ago
- SALMONN family: A suite of advanced multi-modal LLMs · ☆1,373 · Updated 2 months ago
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand" · ☆464 · Updated last year
- Implementation of Voicebox, a new SOTA text-to-speech network from Meta AI, in PyTorch · ☆668 · Updated last year
- Official implementation of "Separate Anything You Describe" · ☆1,853 · Updated last year
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications · ☆552 · Updated 2 years ago
- Code for the paper "LLark: A Multimodal Instruction-Following Language Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, an… · ☆371 · Updated last year
- This toolbox aims to unify audio generation model evaluation for easier comparison · ☆367 · Updated last year
- The open-source code of UniAudio · ☆593 · Updated last year
- Unified automatic quality assessment for speech, music, and sound · ☆650 · Updated 6 months ago
- PyTorch implementation of Audio Flamingo: a series of advanced audio understanding language models · ☆919 · Updated last week
- AudioLDM: Generate speech, sound effects, music, and beyond, with text · ☆2,790 · Updated 6 months ago
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] · ☆343 · Updated last year
- A timeline of the latest AI models for audio generation, starting in 2023 · ☆1,910 · Updated last year
- Fast audio data augmentation in PyTorch. Inspired by audiomentations. Useful for deep learning · ☆1,118 · Updated last month
- A family of diffusion models for text-to-audio generation · ☆1,221 · Updated 4 months ago
- Keep track of big models in the audio domain, including speech, singing, music, etc. · ☆503 · Updated last year