Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,016 · Updated 3 weeks ago
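For orientation, here is a minimal text-to-audio sketch adapted from the repository's documented inference example. It loads a pretrained conditional diffusion model and generates audio from a text prompt; the model ID (`stabilityai/stable-audio-open-1.0`), the prompt, and the sampler settings are illustrative assumptions, not required values.

```python
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download a pretrained model and its config (model ID is an example choice)
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Text and timing conditioning for a single 30-second clip
conditioning = [{
    "prompt": "128 BPM tech house drum loop",  # example prompt
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Run conditional diffusion sampling (step count, CFG scale, and sampler are illustrative)
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Flatten the batch into one stereo sequence: (batch, channels, samples) -> (channels, samples)
output = rearrange(output, "b d n -> d (b n)")

# Peak-normalize, clip to [-1, 1], convert to int16, and write to disk
output = (
    output.to(torch.float32)
    .div(torch.max(torch.abs(output)))
    .clamp(-1, 1)
    .mul(32767)
    .to(torch.int16)
    .cpu()
)
torchaudio.save("output.wav", output, sample_rate)
```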
Alternatives and similar repositories for stable-audio-tools:
Users interested in stable-audio-tools are comparing it to the libraries listed below.
- Text-to-Audio/Music Generation ☆2,405 · Updated 6 months ago
- Stable diffusion for real-time music generation ☆3,639 · Updated 8 months ago
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,617 · Updated 4 months ago
- Official implementation of "Separate Anything You Describe" ☆1,721 · Updated 4 months ago
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch ☆2,523 · Updated 3 months ago
- TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS, Stable Audio, Mars5,…) ☆2,105 · Updated this week
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch ☆3,242 · Updated last year
- AI powered speech denoising and enhancement ☆1,730 · Updated 4 months ago
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆597 · Updated 8 months ago
- A webui for different audio related Neural Networks ☆1,153 · Updated 8 months ago
- Implementation of MusicLM, a text to music model published by Google Research, with a few modifications. ☆541 · Updated last year
- A family of diffusion models for text-to-audio generation. ☆1,159 · Updated 3 months ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt. ☆5,847 · Updated 9 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,834 · Updated 7 months ago
- simple trainer for musicgen/audiocraft ☆21 · Updated 9 months ago
- Audio generation using diffusion models, in PyTorch. ☆2,035 · Updated last year
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,796 · Updated 5 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,181 · Updated 2 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,647 · Updated 8 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,368 · Updated 10 months ago
- 🔊 Text-Prompted Generative Audio Model with Gradio ☆692 · Updated last year
- Foundational model for human-like, expressive TTS ☆4,084 · Updated 8 months ago
- Inference and training library for high-quality TTS models. ☆5,188 · Updated 4 months ago
- Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images. ☆754 · Updated 6 months ago
- InspireMusic: A Unified Framework for Music, Song, Audio Generation. ☆1,056 · Updated this week
- A simple, high-quality voice conversion tool focused on ease of use and performance. ☆2,260 · Updated this week
- ✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL ☆1,099 · Updated last year
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,400 · Updated 2 months ago
- [CVPR 2025] MMAudio: Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,308 · Updated this week
- Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies ☆1,315 · Updated 9 months ago