Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,373 Updated 2 weeks ago
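For context, here is a minimal sketch of what conditional text-to-audio generation looks like with this toolkit. It assumes the publicly documented `get_pretrained_model` / `generate_diffusion_cond` helpers and the `stabilityai/stable-audio-open-1.0` weights; the prompt, step count, and CFG scale are illustrative choices, not part of this listing, so check the repository's README for the current interface.

```python
# Sketch of conditional text-to-audio generation with stable-audio-tools.
# Checkpoint id, prompt, and sampling settings below are illustrative assumptions.
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download the pretrained model and read its native sample rate / window size
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Text prompt plus timing conditioning (start offset and total length, in seconds)
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Sample from the latent diffusion model with classifier-free guidance
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Collapse the batch dimension, peak-normalize, and write a 16-bit WAV
output = rearrange(output, "b d n -> d (b n)")
output = (
    output.to(torch.float32)
    .div(torch.max(torch.abs(output)))
    .clamp(-1, 1)
    .mul(32767)
    .to(torch.int16)
    .cpu()
)
torchaudio.save("output.wav", output, sample_rate)
```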
Alternatives and similar repositories for stable-audio-tools
Users who are interested in stable-audio-tools are comparing it to the libraries listed below
- Text-to-Audio/Music Generation ☆2,481 Updated 10 months ago
- Official implementation of "Separate Anything You Describe" ☆1,763 Updated 8 months ago
- Stable diffusion for real-time music generation ☆3,764 Updated last year
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… (see the MusicGen sketch after this list) ☆620 Updated 11 months ago
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,715 Updated last month
- ACE-Step: A Step Towards Music Generation Foundation Model ☆2,782 Updated last month
- InspireMusic: A toolkit designed for music, song, and audio generation ☆1,156 Updated 2 months ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,742 Updated 2 months ago
- YuE: Open Full-song Music Generation Foundation Model, something similar to Suno.ai but open ☆5,250 Updated last month
- Di♪♪Rhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion ☆1,829 Updated last week
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch ☆2,567 Updated 6 months ago
- Audio generation using diffusion models, in PyTorch. ☆2,062 Updated 2 years ago
- Text-to-Music Generation with Rectified Flow Transformers ☆1,705 Updated 7 months ago
- A single Gradio + React WebUI with extensions for ACE-Step, Kimi Audio, Piper TTS, GPT-SoVITS, CosyVoice, XTTSv2, DIA, Kokoro, OpenVoice,… ☆2,386 Updated 3 weeks ago
- A family of diffusion models for text-to-audio generation. ☆1,182 Updated this week
- AI powered speech denoising and enhancement ☆1,893 Updated 7 months ago
- Foundational model for human-like, expressive TTS ☆4,142 Updated last year
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch ☆3,272 Updated last year
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,494 Updated 2 months ago
- Inference and training library for high-quality TTS models. ☆5,370 Updated 7 months ago
- TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching ☆763 Updated this week
- zero-shot voice conversion & singing voice conversion, with real-time support ☆2,768 Updated 3 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,867 Updated 11 months ago
- A webui for different audio related Neural Networks ☆1,189 Updated 2 months ago
- Implementation of MusicLM, a text to music model published by Google Research, with a few modifications. ☆549 Updated 2 years ago
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,136 Updated 9 months ago
- NotaGen: Advancing Musicality in Symbolic Music Generation with Large Language Model Training Paradigms ☆1,065 Updated 3 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,210 Updated 5 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,575 Updated 4 months ago
- A simple, high-quality voice conversion tool focused on ease of use and performance. ☆2,496 Updated this week
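As referenced in the Audiocraft entry above, here is a short usage sketch based on Audiocraft's documented MusicGen interface; the checkpoint name, prompt, and duration are illustrative assumptions, so consult the Audiocraft README for the current list of models and parameters.

```python
# Sketch of text-to-music generation with Audiocraft's MusicGen.
# Checkpoint id, prompt, and duration below are illustrative assumptions.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # small text-to-music checkpoint
model.set_generation_params(duration=8)  # generate 8 seconds of audio per prompt

descriptions = ["an upbeat lo-fi hip hop beat with warm keys"]
wav = model.generate(descriptions)  # batch of waveforms, shape [B, C, T]

for idx, one_wav in enumerate(wav):
    # Write each sample as a loudness-normalized WAV file
    audio_write(f"musicgen_sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```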