Yuan-ManX / ai-audio-datasets
AI Audio Datasets (AI-ADS): a collection of speech, music, and sound-effect datasets that provide training data for generative AI, AIGC, AI model training, intelligent audio tool development, and audio applications.
★793 · Updated 3 weeks ago
Alternatives and similar repositories for ai-audio-datasets
Users interested in ai-audio-datasets are comparing it to the repositories listed below.
- Audio Dataset for training CLAP and other models (★693, updated last year)
- Official PyTorch implementation of BigVGAN (ICLR 2023) (★1,079, updated 11 months ago)
- Keep track of big models in the audio domain, including speech, singing, music, etc. (★491, updated 10 months ago)
- Collection of resources on the applications of Large Language Models (LLMs) in Audio AI (★685, updated last year)
- PyTorch implementation of the CREPE pitch tracker (★466, updated 2 months ago)
- A list of demo websites for automatic music generation research (★717, updated this week)
- Official implementation of the paper "Acoustic Music Understanding Model with Large-Scale Self-supervised Training" (★388, updated 2 months ago)
- A paper and project list about cutting-edge Speech Synthesis, Text-to-Speech (TTS), Singing Voice Synthesis (SVS), Voice Conversion (… (★439, updated 2 years ago)
- Unified automatic quality assessment for speech, music, and sound (★553, updated 2 months ago)
- AudioLDM training, finetuning, evaluation, and inference (★268, updated 7 months ago)
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis (★964, updated last year)
- All-In-One Music Structure Analyzer (★601, updated last year)
- Daily tracking of awesome audio papers, including music generation, zero-shot TTS, ASR, and audio generation (★392, updated last week)
- Learning audio concepts from natural language supervision (★578, updated 10 months ago)
- MU-LLaMA: Music Understanding Large Language Model (★283, updated last year)
- Code, dataset, and pretrained models for the audio and speech large language model "Listen, Think, and Understand" (★447, updated last year)
- A toolbox that aims to unify audio generation model evaluation for easier comparison (★351, updated 10 months ago)
- LP-MusicCaps: LLM-Based Pseudo Music Captioning [ISMIR23] (★336, updated last year)
- Implementation of Band Split Roformer, SOTA attention network for music source separation from ByteDance AI Labs (★588, updated this week)
- Mustango: Toward Controllable Text-to-Music Generation (★373, updated 2 months ago)
- The open-source code of UniAudio (★572, updated last year)
- State-of-the-art audio codec with 90x compression factor; supports 44.1 kHz, 24 kHz, and 16 kHz mono/stereo audio (★1,521, updated this week)
- An open-source streaming high-fidelity neural audio codec (★481, updated 5 months ago)
- A lightweight library for Fréchet Audio Distance calculation (★286, updated 11 months ago)
- DeepAFx-ST: style transfer of audio effects with differentiable signal processing; see https://csteinmetz1.github.io/DeepAFx-ST/ (★388, updated 2 years ago)
- Collection of audio-focused loss functions in PyTorch (★798, updated last year)
- VISinger 2: High-Fidelity End-to-End Singing Voice Synthesis Enhanced by Digital Signal Processing Synthesizer (★343, updated 9 months ago)
- Code for the paper "LLark: A Multimodal Instruction-Following Language Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, an… (★356, updated last year)
- An implementation of this article: https://arxiv.org/pdf/2107.03312.pdf (★398, updated 3 years ago)
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications (★549, updated 2 years ago)