Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,566 · Updated 2 weeks ago
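For context before the comparison list: generation with stable-audio-tools follows the inference example in the project's README — load a pretrained diffusion model, build text-plus-timing conditioning, and sample with `generate_diffusion_cond`. The sketch below assumes the `stabilityai/stable-audio-open-1.0` checkpoint is available via Hugging Face, and the sampler settings (steps, CFG scale, sigma range) are illustrative defaults rather than tuned values.

```python
import torch
import torchaudio
from einops import rearrange

from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fetch the pretrained model and its config (sample rate, window size).
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Text prompt plus timing conditioning (start offset and total duration in seconds).
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Sample from the conditional diffusion model; sampler settings here are illustrative.
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Collapse the batch dimension, peak-normalize, and write a 16-bit WAV.
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32).div(output.abs().max()).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
```

The `seconds_start`/`seconds_total` keys feed the model's timing conditioning, which is how it targets a specific output length within its training window; the prompt string drives the text conditioning.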
Alternatives and similar repositories for stable-audio-tools
Users interested in stable-audio-tools are comparing it to the libraries listed below.
- Text-to-Audio/Music Generation ☆2,559 · Updated last year
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,804 · Updated 6 months ago
- Official implementation of "Separate Anything You Describe" ☆1,860 · Updated last year
- Stable diffusion for real-time music generation ☆3,862 · Updated last year
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆634 · Updated last year
- Text-to-Music Generation with Rectified Flow Transformers ☆1,714 · Updated last year
- AI-powered speech denoising and enhancement ☆2,152 · Updated last year
- A single Gradio + React WebUI with extensions for ACE-Step, Kimi Audio, Piper TTS, GPT-SoVITS, CosyVoice, XTTSv2, DIA, Kokoro, OpenVoice,… ☆2,902 · Updated this week
- ACE-Step: A Step Towards Music Generation Foundation Model ☆3,630 · Updated 6 months ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆2,054 · Updated last month
- Implementation of AudioLM, a SOTA language-modeling approach to audio generation out of Google Research, in PyTorch ☆2,617 · Updated last year
- A fundamental toolkit designed for music, song, and audio generation ☆1,286 · Updated 7 months ago
- A family of diffusion models for text-to-audio generation. ☆1,223 · Updated 5 months ago
- Audio generation using diffusion models, in PyTorch. ☆2,093 · Updated 2 years ago
- Foundational model for human-like, expressive TTS ☆4,192 · Updated last year
- Implementation of MusicLM, Google's SOTA model for music generation using attention networks, in PyTorch ☆3,291 · Updated 2 years ago
- Di♪♪Rhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion ☆2,194 · Updated last month
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,720 · Updated 4 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,648 · Updated 10 months ago
- TangoFlux: Super Fast and Faithful Text-to-Audio Generation with Flow Matching ☆813 · Updated 5 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,250 · Updated 11 months ago
- Inference and training library for high-quality TTS models. ☆5,504 · Updated last year
- YuE: Open Full-song Music Generation Foundation Model, similar in scope to Suno.ai but open ☆5,928 · Updated 7 months ago
- V-Express aims to generate a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images. ☆2,361 · Updated 11 months ago
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,260 · Updated last year
- A WebUI for different audio-related neural networks ☆1,224 · Updated 7 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,980 · Updated last year
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆6,122 · Updated last year
- Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junio… ☆9,652 · Updated 7 months ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. ☆6,411 · Updated last year