pollinations / dance-diffusion
Tools to train a generative model on arbitrary audio samples
☆62 · Updated 2 years ago
Alternatives and similar repositories for dance-diffusion:
Users interested in dance-diffusion are comparing it to the libraries listed below.
- tools to manipulate audio with riffusion ☆93 · Updated last year
- A collection of pre-trained audio models, in PyTorch. ☆113 · Updated 2 years ago
- A Python library and CLI for generating audio samples using Harmonai Dance Diffusion models. ☆94 · Updated 2 years ago
- Trainer for audio-diffusion-pytorch ☆129 · Updated 2 years ago
- ☆84 · Updated last year
- Wiggle animation keyframe creator ☆40 · Updated 2 years ago
- fine-tuning MusicGen without prompts to generate music with a specific style ☆63 · Updated last year
- GPT3-based Multi-Instrumental MIDI Music AI Implementation ☆48 · Updated 2 years ago
- ☆27 · Updated last year
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆120 · Updated last week (see the generation sketch after this list)
- Doohickey is a stable diffusion tool for technical artists who want to stay up-to-date with the latest developments in the field. ☆39 · Updated 2 years ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆196 · Updated 2 years ago
- OpenAI MuseNet API Colab Notebook ☆32 · Updated 2 years ago
- Google Colab-backed Web UI for creating music with OpenAI Jukebox ☆84 · Updated last year
- GUI toolkit using various audio diffusion repos. ☆75 · Updated last year
- Implementation and pretrained model of a music transformer based on Google's SOTA Perceiver-AR architecture ☆99 · Updated last year
- ☆13 · Updated 2 years ago
- Examples of apps built with Nendo, the AI Audio Tool Suite ☆55 · Updated last year
- ☆83 · Updated 2 years ago
- Deep learning toolkit for image, video, and audio synthesis ☆108 · Updated 2 years ago
- Implementation of DreamBooth with Stable Diffusion (tweaked) ☆85 · Updated 2 years ago
- Audio generation using diffusion models, in PyTorch. ☆47 · Updated last year
- ☆107 · Updated last year
- Audio datasets, easier. ☆84 · Updated last year
- Generate audio using ComfyUI and dance diffusion models. ☆50 · Updated last year
- ☆32 · Updated 2 years ago
- MusicGen conditioned with chord progression. ☆11 · Updated last year
- Jupyter/Colab implementation of stable-diffusion using the k_lms sampler, CPU-draw manual seeding, and a quantize.py fix ☆38 · Updated 2 years ago
- Gradio client for Harmonai sample diffusion. ☆27 · Updated last year
- Text prompt to MIDI file using OpenAI's GPT-4 ☆66 · Updated 11 months ago
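
As a rough illustration of the generation workflow several of these libraries cover, here is a minimal sketch using Audiocraft's MusicGen text-to-music API (referenced from the Audiocraft entry above). The model ID `facebook/musicgen-small`, the prompts, and the output file names are illustrative assumptions, and the exact API surface may differ between Audiocraft releases.

```python
# Minimal sketch of text-conditioned music generation with Audiocraft's MusicGen.
# Assumes `pip install audiocraft`; model ID, prompts, and file names are illustrative.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained checkpoint (the "small" variant keeps memory requirements modest).
model = MusicGen.get_pretrained('facebook/musicgen-small')

# Generate roughly 8 seconds of audio per prompt.
model.set_generation_params(duration=8)

prompts = ['lo-fi hip hop beat with warm bass', 'ambient pad with a slow attack']
wavs = model.generate(prompts)  # tensor of shape [batch, channels, samples]

# Write each clip to disk as WAV with loudness normalization.
for i, wav in enumerate(wavs):
    audio_write(f'sample_{i}', wav.cpu(), model.sample_rate, strategy='loudness')
```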