IFICL / images-that-sound
Official repo for Images that Sound: spectrograms that can be seen as images and played as sound, generated by diffusion models
☆247 · Updated 8 months ago
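To make the premise concrete, here is a minimal, hypothetical sketch of why such a spectrogram is playable at all: treat a grayscale image as STFT magnitudes and invert them to a waveform with Griffin-Lim. This is not the repo's method (which generates the spectrograms with diffusion models); the filename, resolution, and sample rate below are illustrative assumptions.

```python
# Illustrative sketch only: interpret a grayscale image as a magnitude
# spectrogram and invert it to audio with Griffin-Lim. The diffusion-based
# generation in images-that-sound is not reproduced here; "spectrogram.png",
# the STFT size, and the sample rate are all hypothetical choices.
import torchaudio
from PIL import Image
from torchvision.transforms.functional import to_tensor

n_fft = 1024                                      # STFT size -> n_fft // 2 + 1 = 513 frequency bins
img = Image.open("spectrogram.png").convert("L")  # hypothetical input image
img = img.resize((512, n_fft // 2 + 1))           # PIL resize takes (width, height) = (time, freq)
mag = to_tensor(img).squeeze(0)                   # tensor of shape (freq, time), values in [0, 1]

# power=1.0 tells GriffinLim the input holds magnitudes rather than power values
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft, power=1.0)
wave = griffin_lim(mag)                           # 1-D waveform tensor
torchaudio.save("out.wav", wave.unsqueeze(0), sample_rate=16000)
```

Played back, this waveform's spectrogram looks like the input image, which is the dual image/sound property the repo's diffusion-generated spectrograms are built around.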
Alternatives and similar repositories for images-that-sound
Users interested in images-that-sound are comparing it to the libraries listed below.
- ☆181 · Updated 10 months ago
- Official code for the CVPR 2024 paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆84 · Updated last year
- The Song Describer dataset is an evaluation dataset made of ~1.1k captions for 706 permissively licensed music recordings. ☆162 · Updated last year
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆249 · Updated 2 weeks ago
- High-quality Text-to-Audio Generation with Efficient Diffusion Transformer ☆312 · Updated 3 weeks ago
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound ☆133 · Updated 7 months ago
- ☆106 · Updated 2 years ago
- We present a model that can generate accurate 3D sound fields of human bodies from headset microphones and body pose as inputs. ☆88 · Updated last year
- ☆26 · Updated 7 months ago
- PyTorch implementation of MIMO, Controllable Character Video Synthesis with Spatial Decomposed Modeling, from Alibaba Intelligence Group ☆136 · Updated last year
- Fine-tune Stable Audio Open with DiT ControlNet. ☆248 · Updated 5 months ago
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆188 · Updated last year
- Official PyTorch implementation of TokenSet. ☆126 · Updated 7 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆195 · Updated last year
- Mustango: Toward Controllable Text-to-Music Generation ☆377 · Updated 5 months ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆194 · Updated 2 years ago
- ☆58 · Updated last year
- This repo contains the official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati… ☆128 · Updated 8 months ago
- Code for the paper "LLark: A Multimodal Instruction-Following Language Model for Music" by Josh Gardner, Simon Durand, Daniel Stoller, an… ☆364 · Updated last year
- The official GitHub page for the survey paper "Foundation Models for Music: A Survey". ☆216 · Updated last year
- [ISMIR 2025] A curated list of vision-to-music generation: methods, datasets, evaluation and challenges. ☆102 · Updated 2 months ago
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model ☆185 · Updated last year
- The official implementation of the IJCAI 2024 paper "MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models". ☆45 · Updated last year
- We are committing code. ☆44 · Updated 2 years ago
- Code for Toon3D https://toon3d.studio/ ☆216 · Updated 6 months ago
- Optical illusions using stable diffusion ☆218 · Updated 2 years ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆90 · Updated last year
- Implementation of Lumiere, SOTA text-to-video generation from Google DeepMind, in PyTorch ☆280 · Updated last year
- ☆20 · Updated last year
- ☆75 · Updated last year