Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,349 · Updated last month
Alternatives and similar repositories for stable-audio-tools
Users interested in stable-audio-tools are comparing it to the libraries listed below.
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,691 · Updated 2 weeks ago
- Stable diffusion for real-time music generation ☆3,749 · Updated 11 months ago
- Official implementation of "Separate Anything You Describe" ☆1,752 · Updated 7 months ago
- Text-to-Audio/Music Generation ☆2,460 · Updated 9 months ago
- A single Gradio + React WebUI with extensions for ACE-Step, Kimi Audio, Piper TTS, GPT-SoVITS, CosyVoice, XTTSv2, DIA, Kokoro, OpenVoice,… ☆2,338 · Updated this week
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆619 · Updated 10 months ago
- AI-powered speech denoising and enhancement ☆1,868 · Updated 7 months ago
- ACE-Step: A Step Towards Music Generation Foundation Model ☆2,654 · Updated 2 weeks ago
- Audio generation using diffusion models, in PyTorch. ☆2,058 · Updated 2 years ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,828 · Updated 11 months ago
- InspireMusic: A toolkit designed for music, song, and audio generation ☆1,130 · Updated last month
- Implementation of AudioLM, a SOTA language-modeling approach to audio generation out of Google Research, in PyTorch ☆2,561 · Updated 5 months ago
- A family of diffusion models for text-to-audio generation. ☆1,180 · Updated 6 months ago
- Versatile audio super-resolution (any -> 48 kHz) with AudioSR. ☆1,480 · Updated 2 months ago
- Generate music based on natural-language prompts using LLMs running locally ☆1,073 · Updated 5 months ago
- A WebUI for different audio-related neural networks ☆1,186 · Updated last month
- A simple, high-quality voice conversion tool focused on ease of use and performance. ☆2,468 · Updated this week
- Foundational model for human-like, expressive TTS ☆4,135 · Updated 11 months ago
- Zero-shot voice conversion & singing voice conversion, with real-time support ☆2,710 · Updated 2 months ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,675 · Updated 2 months ago
- Inference and training library for high-quality TTS models. ☆5,336 · Updated 7 months ago
- Di♪♪Rhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion ☆1,777 · Updated last month
- ☆777 · Updated last month
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in PyTorch ☆3,266 · Updated last year
- Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images. ☆767 · Updated 9 months ago
- TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching ☆756 · Updated last month
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,200 · Updated 4 months ago
- The code for the bark-voicecloning model. Training and inference. ☆703 · Updated last year
- Contrastive Language-Audio Pretraining ☆1,729 · Updated last month
- The ultimate training toolkit for finetuning diffusion models ☆5,116 · Updated this week