kangruix / WaveBlender
[SIGGRAPH Asia 2024] WaveBlender: Practical Sound-Source Animation in Blended Domains
☆110 · Updated 5 months ago
Alternatives and similar repositories for WaveBlender
Users who are interested in WaveBlender are comparing it to the libraries listed below.
- Dual Diffusion is a generative diffusion model for music trained on video game soundtracks. ☆89 · Updated this week
- Stable Diffusers for studies ☆130 · Updated 8 months ago
- Resonance: Audio-Image Interconversion for AI Diffusion Models ☆40 · Updated last year
- A sample editing application allowing for hosted, asynchronous, remote processing of audio and midi with machine learning. ☆64 · Updated this week
- [SOTA] [92% acc] 786M-8k-44L-32H multi-instrumental music transformer with true full MIDI instruments range, efficient encoding, octo-vel… ☆88 · Updated last year
- Official repository for the paper "Audio ControlNet for Fine-Grained Audio Generation and Editing". ☆56 · Updated this week
- Flexible LoRA Implementation to use with stable-audio-tools ☆79 · Updated last year
- Text-to-Music Generation with Rectified Flow Transformer ☆64 · Updated 8 months ago
- Generative models for conditional audio generation ☆166 · Updated last week
- ☆107 · Updated 2 years ago
- [SiggraphAsia25] OmnimatteZero: Fast Training-free Omnimatte with Pre-trained Video Diffusion Models ☆91 · Updated last week
- ☆24 · Updated last year
- Symbolic Music Generation, NotaGen node for ComfyUI. ☆57 · Updated 8 months ago
- Windows compatible code for the paper "Jukebox: A Generative Model for Music" ☆13 · Updated 3 years ago
- SIGGRAPH 2024 Conference Paper: Deep Fourier-based Arbitrary-scale Super-resolution for Real-time Rendering ☆149 · Updated last year
- Real-time latent exploration of diffusion models ☆29 · Updated last year
- ☆111 · Updated last year
- Midi event transformer for symbolic music generation ☆346 · Updated last year
- An experiment training a diffusion model on 32x32 pixel art characters ☆38 · Updated 2 years ago
- Awesome music generation model——MG² ☆165 · Updated 10 months ago
- ☆16 · Updated last year
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound ☆139 · Updated 10 months ago
- Blender Add-on to convert texture maps into a mesh. ☆28 · Updated this week
- We present a model that can generate accurate 3D sound fields of human bodies from headset microphones and body pose as inputs. ☆89 · Updated last year
- Inference script for Oasis 500M ☆50 · Updated last year
- Official Implementation of NeuralSound and DeepModal ☆37 · Updated 3 years ago
- Text2midi is the first end-to-end model for generating MIDI files from textual descriptions. By leveraging pretrained large language mode… ☆146 · Updated 11 months ago
- a new family of super small music generation models focusing on experimental music and latent space exploration capabilities ☆36 · Updated last year
- Music production for silent film clips. ☆32 · Updated 9 months ago
- Official Implementation of "Instance Segmentation of Scene Sketches Using Natural Image Priors" (SIGGRAPH 2025) ☆88 · Updated 5 months ago