Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,542 · Updated 2 months ago
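For context, below is a minimal text-conditioned generation sketch with stable-audio-tools, adapted from the usage example published for the stabilityai/stable-audio-open-1.0 checkpoint; the checkpoint name, sampler settings, and exact function signatures are assumptions taken from that example and may differ across library versions.

```python
# Minimal conditional audio generation sketch with stable-audio-tools.
# Assumes the stabilityai/stable-audio-open-1.0 checkpoint and the API from
# the project's published example; signatures may vary by version.
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download model weights and config from Hugging Face
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
model = model.to(device)

# Text prompt plus timing conditioning for the desired clip length
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Run conditional diffusion sampling
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=model_config["sample_size"],
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Collapse the batch dimension, peak-normalize to int16, and write a WAV file
output = rearrange(output, "b d n -> d (b n)")
output = (output.to(torch.float32)
          .div(torch.max(torch.abs(output)))
          .clamp(-1, 1)
          .mul(32767)
          .to(torch.int16)
          .cpu())
torchaudio.save("output.wav", output, model_config["sample_rate"])
```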
Alternatives and similar repositories for stable-audio-tools
Users interested in stable-audio-tools are comparing it with the libraries listed below.
- Text-to-Audio/Music Generation ☆2,545 · Updated last year
- Official implementation of "Separate Anything You Describe" ☆1,855 · Updated last year
- Stable diffusion for real-time music generation ☆3,853 · Updated last year
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,792 · Updated 6 months ago
- Text-to-Music Generation with Rectified Flow Transformers ☆1,712 · Updated last year
- ACE-Step: A Step Towards Music Generation Foundation Model ☆3,512 · Updated 6 months ago
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆631 · Updated last year
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆2,021 · Updated 3 weeks ago
- Di♪♪Rhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion ☆2,156 · Updated last month
- AI powered speech denoising and enhancement ☆2,123 · Updated last year
- YuE: Open Full-song Music Generation Foundation Model, something similar to Suno.ai but open ☆5,860 · Updated 6 months ago
- A family of diffusion models for text-to-audio generation. ☆1,222 · Updated 5 months ago
- A fundamental toolkit designed for music, song, and audio generation ☆1,274 · Updated 7 months ago
- A single Gradio + React WebUI with extensions for ACE-Step, Kimi Audio, Piper TTS, GPT-SoVITS, CosyVoice, XTTSv2, DIA, Kokoro, OpenVoice,… ☆2,828 · Updated last month
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch ☆2,613 · Updated 11 months ago
- Foundational model for human-like, expressive TTS ☆4,194 · Updated last year
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,683 · Updated 4 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆6,099 · Updated last year
- Audio generation using diffusion models, in PyTorch. ☆2,093 · Updated 2 years ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,639 · Updated 9 months ago
- MARS5 speech model (TTS) from CAMB.AI ☆2,809 · Updated last year
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch ☆3,287 · Updated 2 years ago
- Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch ☆1,539 · Updated 8 months ago
- Inference and training library for high-quality TTS models. ☆5,500 · Updated last year
- V-Express aims to generate a talking head video under the control of a reference image, an audio, and a sequence of V-Kps images. ☆2,361 · Updated 11 months ago
- TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching ☆809 · Updated 5 months ago
- A webui for different audio related Neural Networks ☆1,219 · Updated 7 months ago
- Implementation of MusicLM, a text to music model published by Google Research, with a few modifications. ☆555 · Updated 2 years ago
- zero-shot voice conversion & singing voice conversion, with real-time support ☆3,475 · Updated 8 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,470 · Updated last year