Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆3,050 · Updated last week
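For context, below is a minimal text-to-audio sketch using stable-audio-tools. It assumes the `get_pretrained_model` and `generate_diffusion_cond` entry points and the `stabilityai/stable-audio-open-1.0` checkpoint described in the project's README; treat it as illustrative rather than canonical and check the repository for the current API.

```python
# Hedged sketch: assumes the stable-audio-tools inference API and the
# stabilityai/stable-audio-open-1.0 checkpoint; verify against the repo README.
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download the pretrained model and read its native sample rate / sample length
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Text prompt plus timing conditioning (start offset and total length, in seconds)
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Run the conditional diffusion sampler
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Collapse the batch dimension, peak-normalize, and write a 16-bit WAV
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32)
output = (output / output.abs().max()).clamp(-1, 1)
torchaudio.save("output.wav", (output * 32767).to(torch.int16).cpu(), sample_rate)
```

The conditioning dictionary is what makes the generation "conditional": the text prompt and the timing fields steer the diffusion model toward a clip of the requested content and duration.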
Alternatives and similar repositories for stable-audio-tools:
Users interested in stable-audio-tools often compare it to the libraries listed below.
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,637 · Updated 4 months ago
- Text-to-Audio/Music Generation ☆2,418 · Updated 7 months ago
- AI powered speech denoising and enhancement ☆1,770 · Updated 5 months ago
- Foundational model for human-like, expressive TTS ☆4,104 · Updated 9 months ago
- Stable diffusion for real-time music generation ☆3,666 · Updated 9 months ago
- Official implementation of "Separate Anything You Describe" ☆1,727 · Updated 5 months ago
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆601 · Updated 8 months ago
- TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS, Stable Audio, Mars5,… ☆2,136 · Updated last week
- Audio generation using diffusion models, in PyTorch. ☆2,042 · Updated last year
- A family of diffusion models for text-to-audio generation. ☆1,163 · Updated 4 months ago
- Versatile audio super resolution (any -> 48kHz) with AudioSR. ☆1,418 · Updated 2 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,847 · Updated 8 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,699 · Updated 8 months ago
- Easy to use stem (e.g. instrumental/vocals) separation from CLI or as a python package, using a variety of amazing pre-trained models (pr… ☆732 · Updated this week
- A webui for different audio related Neural Networks ☆1,159 · Updated 8 months ago
- Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch ☆3,247 · Updated last year
- Implementation of Natural Speech 2, Zero-shot Speech and Singing Synthesizer, in Pytorch ☆1,319 · Updated last year
- Apply diffusion models using the new Hugging Face diffusers package to synthesize music instead of images. ☆758 · Updated 7 months ago
- Text-to-Music Generation with Rectified Flow Transformers ☆1,690 · Updated 4 months ago
- 🔊 Text-prompted Generative Audio Model - With the ability to clone voices ☆3,290 · Updated 10 months ago
- Inference and training library for high-quality TTS models. ☆5,229 · Updated 4 months ago
- Repository for training models for music source separation. ☆724 · Updated this week
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch ☆2,536 · Updated 3 months ago
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis ☆922 · Updated 9 months ago
- A simple, high-quality voice conversion tool focused on ease of use and performance. ☆2,307 · Updated last week
- Implementation of MusicLM, a text to music model published by Google Research, with a few modifications. ☆543 · Updated last year
- 🔊 Text-Prompted Generative Audio Model with Gradio ☆692 · Updated last year
- Contrastive Language-Audio Pretraining ☆1,631 · Updated this week
- PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech), Reproduced Demo https://lifeiteng.github.io/valle/index.html ☆2,123 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,522 · Updated 2 months ago