Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆2,968 · Updated 3 weeks ago
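For orientation, here is a minimal text-to-audio sketch in the style of the stable-audio-tools inference example. It assumes the `get_pretrained_model` and `generate_diffusion_cond` helpers exposed by recent versions of the library; the checkpoint name, conditioning keys, and sampler settings are illustrative and may differ from the current API, so check the repository README before relying on them.

```python
# Illustrative text-to-audio sketch with stable-audio-tools (API may differ by version).
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed checkpoint: the public Stable Audio Open weights on Hugging Face.
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Text prompt plus timing conditioning (keys follow the published example).
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Sample audio from the conditional diffusion model.
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    device=device,
)

# Collapse the batch dimension, peak-normalize, and write a 16-bit WAV file.
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
```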
Alternatives and similar repositories for stable-audio-tools:
Users interested in stable-audio-tools are comparing it to the libraries listed below.
- Official implementation of "Separate Anything You Describe" ☆1,702 · Updated 3 months ago
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,589 · Updated 3 months ago
- Text-to-Audio/Music Generation ☆2,390 · Updated 5 months ago
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in PyTorch ☆2,514 · Updated 2 months ago
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆593 · Updated 7 months ago
- TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS, Stable Audio, Mars5,… ☆2,076 · Updated this week
- Inference and training library for high-quality TTS models. ☆5,161 · Updated 3 months ago
- Stable diffusion for real-time music generation ☆3,601 · Updated 8 months ago
- Foundational model for human-like, expressive TTS ☆4,070 · Updated 7 months ago
- A family of diffusion models for text-to-audio generation. ☆1,152 · Updated 2 months ago
- Audio generation using diffusion models, in PyTorch. ☆2,028 · Updated last year
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,381 · Updated last month
- Zero-Shot Speech Editing and Text-to-Speech in the Wild ☆8,203 · Updated last week
- AI-powered speech denoising and enhancement ☆1,702 · Updated 3 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,560 · Updated 7 months ago
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in PyTorch ☆3,238 · Updated last year
- MARS5 speech model (TTS) from CAMB.AI ☆2,639 · Updated 7 months ago
- InspireMusic: A Unified Framework for Music, Song, Audio Generation. ☆1,000 · Updated last week
- Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junio… ☆8,831 · Updated 3 weeks ago
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis ☆901 · Updated 7 months ago
- [CVPR 2025] Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,237 · Updated last week
- PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis ☆3,013 · Updated 4 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,807 · Updated 6 months ago
- State-of-the-art audio codec with 90x compression factor. Supports 44.1kHz, 24kHz, and 16kHz mono/stereo audio. ☆1,331 · Updated 8 months ago
- Contrastive Language-Audio Pretraining ☆1,570 · Updated 4 months ago
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications. ☆538 · Updated last year
- PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech); reproduced demo: https://lifeiteng.github.io/valle/index.html ☆2,107 · Updated last year
- Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images. ☆749 · Updated 6 months ago
- A simple, high-quality voice conversion tool focused on ease of use and performance. ☆2,211 · Updated this week
- Accepted as a [NeurIPS 2024] spotlight presentation paper ☆6,242 · Updated 6 months ago