Stability-AI / stable-audio-tools
Generative models for conditional audio generation
☆2,896 · Updated last month
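For context on what "conditional audio generation" looks like here, below is a minimal text-to-audio sketch following the usage pattern documented in the stable-audio-tools README: load a pretrained diffusion model, condition it on a text prompt plus timing information, sample a stereo waveform, and write it to disk. The checkpoint name (`stabilityai/stable-audio-open-1.0`), sampler settings, and exact function signatures are assumptions that may differ between releases, and the checkpoint requires accepting its license on the Hugging Face Hub.

```python
# Sketch based on the stable-audio-tools README; install with `pip install stable-audio-tools`.
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download the pretrained model and its config (checkpoint name is an assumption).
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]
model = model.to(device)

# Conditioning: a text prompt plus timing (start offset and total length in seconds).
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30,
}]

# Run the conditional diffusion sampler (step count and CFG scale are illustrative).
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sampler_type="dpmpp-3m-sde",
    device=device,
)

# Collapse the batch dimension, peak-normalize, convert to int16, and save a WAV file.
output = rearrange(output, "b d n -> d (b n)")
output = output.to(torch.float32)
output = (output / output.abs().max()).clamp(-1, 1)
output = (output * 32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
```

The repositories listed below cover adjacent ground (text-to-audio, text-to-music, neural codecs, TTS, and voice conversion), so the same prompt-conditioned generation idea recurs with different model families and APIs.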
Alternatives and similar repositories for stable-audio-tools:
Users interested in stable-audio-tools are comparing it to the libraries listed below.
- Text-to-Audio/Music Generation ☆2,370 · Updated 4 months ago
- AudioLDM: Generate speech, sound effects, music and beyond, with text. ☆2,559 · Updated 2 months ago
- Official implementation of "Separate Anything You Describe" ☆1,686 · Updated 2 months ago
- TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS, Stable Audio, Mars5, …) ☆2,016 · Updated this week
- Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in PyTorch ☆2,502 · Updated last month
- AI-powered speech denoising and enhancement ☆1,641 · Updated 2 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,773 · Updated 5 months ago
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆588 · Updated 6 months ago
- Stable diffusion for real-time music generation ☆3,538 · Updated 6 months ago
- Audio generation using diffusion models, in PyTorch. ☆2,011 · Updated last year
- A family of diffusion models for text-to-audio generation. ☆1,141 · Updated last month
- Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis ☆882 · Updated 6 months ago
- Text-to-Music Generation with Rectified Flow Transformers ☆1,667 · Updated 2 months ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models ☆5,465 · Updated 6 months ago
- Implementation of MusicLM, Google's SOTA model for music generation using attention networks, in PyTorch ☆3,226 · Updated last year
- [arXiv 2024] Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis ☆1,096 · Updated this week
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. ☆5,637 · Updated 7 months ago
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,838 · Updated 7 months ago
- WavJourney: Compositional Audio Creation with LLMs ☆531 · Updated last year
- Inference and training library for high-quality TTS models. ☆5,025 · Updated 2 months ago
- Contrastive Language-Audio Pretraining ☆1,537 · Updated 2 months ago
- State-of-the-art deep-learning-based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio. ☆3,598 · Updated last year
- Foundational model for human-like, expressive TTS ☆4,035 · Updated 6 months ago
- Zero-shot voice conversion & singing voice conversion, with real-time support ☆1,080 · Updated this week
- Amphion (/æmˈfaɪən/) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junio… ☆8,508 · Updated 2 weeks ago
- PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech); reproduced demo: https://lifeiteng.github.io/valle/index.html ☆2,089 · Updated last year
- Implementation of MusicLM, a text-to-music model published by Google Research, with a few modifications. ☆536 · Updated last year
- Controllable and fast Text-to-Speech for over 7000 languages! ☆1,545 · Updated 3 months ago
- Official PyTorch implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing", presenting "TokenFlow" (ICLR …) ☆1,631 · Updated 2 weeks ago
- Official implementations for the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆1,673 · Updated last year