Yuan-ManX / ai-audio-datasets
AI Audio Datasets (AI-ADS) 🎵: a collection of speech, music, and sound-effect datasets that provide training data for generative AI, AIGC, AI model training, intelligent audio tool development, and audio applications.
☆810 · Updated last month
Alternatives and similar repositories for ai-audio-datasets
Users interested in ai-audio-datasets are comparing it to the repositories listed below.
- Audio Dataset for training CLAP and other models ☆701 · Updated last year
- Official PyTorch implementation of BigVGAN (ICLR 2023) ☆1,089 · Updated 11 months ago
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training" ☆396 · Updated 3 months ago
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis ☆975 · Updated last year
- A list of demo websites for automatic music generation research ☆718 · Updated this week
- AudioLDM training, finetuning, evaluation, and inference ☆272 · Updated 8 months ago
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI ☆689 · Updated last year
- PyTorch implementation of the CREPE pitch tracker ☆470 · Updated 3 months ago
- Tracks big models in the audio domain, including speech, singing, and music ☆492 · Updated 11 months ago
- A paper and project list on cutting-edge Speech Synthesis, Text-to-Speech (TTS), Singing Voice Synthesis (SVS), Voice Conversion (… ☆444 · Updated 2 years ago
- Daily tracking of awesome audio papers, including music generation, zero-shot TTS, ASR, and audio generation ☆400 · Updated last week
- Learning audio concepts from natural language supervision ☆586 · Updated 11 months ago
- A toolbox that unifies audio generation model evaluation for easier comparison ☆355 · Updated 10 months ago
- Unified automatic quality assessment for speech, music, and sound ☆566 · Updated 2 months ago
- All-In-One Music Structure Analyzer ☆615 · Updated last year
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand" ☆450 · Updated last year
- The open-source code of UniAudio ☆574 · Updated last year
- PyTorch implementation of Audio Flamingo, a series of advanced audio understanding language models ☆724 · Updated last week
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications ☆549 · Updated 2 years ago
- Mustango: Toward Controllable Text-to-Music Generation ☆373 · Updated 2 months ago
- MU-LLaMA: Music Understanding Large Language Model ☆285 · Updated last week
- VISinger 2: High-Fidelity End-to-End Singing Voice Synthesis Enhanced by a Digital Signal Processing Synthesizer ☆345 · Updated 9 months ago
- A lightweight library for Fréchet Audio Distance calculation ☆289 · Updated 2 weeks ago
- An open-source streaming high-fidelity neural audio codec ☆485 · Updated 5 months ago
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] ☆337 · Updated last year
- Metadata, scripts, and baselines for the MTG-Jamendo dataset ☆325 · Updated last month
- Collection of audio-focused loss functions in PyTorch ☆803 · Updated last year
- State-of-the-art audio codec with a 90x compression factor; supports 44.1 kHz, 24 kHz, and 16 kHz mono/stereo audio ☆1,542 · Updated this week
- DeepAFx-ST: style transfer of audio effects with differentiable signal processing; see https://csteinmetz1.github.io/DeepAFx-ST/ ☆390 · Updated 2 years ago
- Implementation of Band-Split RoFormer, a SOTA attention network for music source separation from ByteDance AI Labs ☆603 · Updated last week