Yuan-ManX / ai-audio-datasets
AI Audio Datasets (AI-ADS) 🎵: a collection of speech, music, and sound-effect datasets that provide training data for generative AI, AIGC, AI model training, intelligent audio tool development, and audio applications.
☆885 · Updated 6 months ago
Alternatives and similar repositories for ai-audio-datasets
Users interested in ai-audio-datasets are comparing it to the libraries listed below.
- Audio Dataset for training CLAP and other models · ☆725 · Updated last week
- Official PyTorch implementation of BigVGAN (ICLR 2023) · ☆1,171 · Updated last year
- Learning audio concepts from natural language supervision · ☆628 · Updated last year
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis · ☆1,039 · Updated last year
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training" · ☆422 · Updated 7 months ago
- A paper and project list about cutting-edge Speech Synthesis, Text-to-Speech (TTS), Singing Voice Synthesis (SVS), Voice Conversion (…) · ☆465 · Updated 3 years ago
- AudioLDM training, finetuning, evaluation, and inference · ☆290 · Updated last year
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI · ☆718 · Updated 3 months ago
- Keeps track of big models in the audio domain, including speech, singing, music, etc. · ☆506 · Updated last year
- Unified automatic quality assessment for speech, music, and sound · ☆657 · Updated 7 months ago
- A toolbox that unifies audio generation model evaluation for easier comparison · ☆370 · Updated last year
- Daily tracking of awesome audio papers, including music generation, zero-shot TTS, ASR, and audio generation · ☆409 · Updated 2 months ago
- A list of demo websites for automatic music generation research · ☆767 · Updated last week
- PyTorch implementation of the CREPE pitch tracker (a usage sketch follows this list) · ☆497 · Updated 8 months ago
- PyTorch implementation of Audio Flamingo: Series of Advanced Audio Understanding Language Models · ☆960 · Updated last month
- Implementation of Band Split RoFormer, a SOTA attention network for music source separation from ByteDance AI Labs · ☆711 · Updated last week
- The open-source code of UniAudio · ☆595 · Updated last year
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand" · ☆465 · Updated last year
- Collection of audio-focused loss functions in PyTorch (a usage sketch follows this list) · ☆839 · Updated last year
- An open-source streaming high-fidelity neural audio codec · ☆498 · Updated 10 months ago
- VISinger 2: High-Fidelity End-to-End Singing Voice Synthesis Enhanced by Digital Signal Processing Synthesizer · ☆350 · Updated last year
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] · ☆343 · Updated last year
- Contrastive Language-Audio Pretraining (an embedding sketch follows this list) · ☆1,984 · Updated 8 months ago
- All-In-One Music Structure Analyzer · ☆700 · Updated last year
- Mustango: Toward Controllable Text-to-Music Generation · ☆386 · Updated 7 months ago
- DeepAFx-ST: style transfer of audio effects with differentiable signal processing (see https://csteinmetz1.github.io/DeepAFx-ST/) · ☆401 · Updated 2 years ago
- A lightweight library for Frechet Audio Distance calculation · ☆305 · Updated last week
- Metadata, scripts, and baselines for the MTG-Jamendo dataset · ☆354 · Updated this week
- MU-LLaMA: Music Understanding Large Language Model · ☆299 · Updated 5 months ago
- State-of-the-art audio codec with 90x compression factor; supports 44.1 kHz, 24 kHz, and 16 kHz mono/stereo audio · ☆1,695 · Updated this week
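For the CREPE pitch-tracker entry above, a minimal usage sketch, assuming the repository in question is the `torchcrepe` package; the input file name and parameter values are illustrative, not prescribed by the list:

```python
# Minimal sketch: frame-level F0 estimation with a CREPE-style pitch tracker.
# Assumes the PyTorch CREPE implementation above is the `torchcrepe` package
# (pip install torchcrepe); the file path and parameter values are illustrative.
import torchcrepe

audio, sr = torchcrepe.load.audio("example.wav")  # hypothetical input file

pitch = torchcrepe.predict(
    audio,
    sr,
    hop_length=int(sr / 100.0),  # ~10 ms analysis frames
    fmin=50.0,                   # lowest expected F0 in Hz
    fmax=550.0,                  # highest expected F0 in Hz
    model="full",                # full-capacity model; "tiny" runs faster
    batch_size=2048,
    device="cpu",
)
print(pitch.shape)  # (1, number_of_frames)
```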
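For the audio-focused loss-function entry, a minimal sketch of a multi-resolution STFT loss, assuming that collection is the `auraloss` package; the batch shapes and random tensors are placeholders for real generated and reference audio:

```python
# Minimal sketch: multi-resolution STFT loss between generated and reference audio.
# Assumes the audio loss-function collection above is the `auraloss` package
# (pip install auraloss); tensors here are random placeholders.
import torch
import auraloss

loss_fn = auraloss.freq.MultiResolutionSTFTLoss()  # compares spectra at several FFT sizes

pred = torch.rand(8, 1, 44100)    # (batch, channels, samples) of generated audio
target = torch.rand(8, 1, 44100)  # reference audio of the same shape

loss = loss_fn(pred, target)
print(loss.item())
```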
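For the Contrastive Language-Audio Pretraining entry, a minimal sketch of computing audio and text embeddings, assuming it refers to LAION's `laion_clap` package; the checkpoint choice, file names, and prompts are hypothetical:

```python
# Minimal sketch: audio and text embeddings with a CLAP-style model.
# Assumes the contrastive language-audio entry above is LAION's `laion_clap`
# package (pip install laion_clap); file paths and prompts are hypothetical.
import laion_clap

model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()  # downloads and loads a default pretrained checkpoint

audio_embed = model.get_audio_embedding_from_filelist(
    x=["dog_bark.wav", "rain.wav"],  # hypothetical audio files
    use_tensor=False,
)
text_embed = model.get_text_embedding(["a dog barking", "rain falling on a roof"])
print(audio_embed.shape, text_embed.shape)  # e.g. (2, 512) each
```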