IFICL / images-that-sound
Official repo for Images that Sound: spectrograms that can be viewed as images and played as sound, generated with diffusion models
☆246 · Updated 9 months ago
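The one-line description above can be made concrete with a small, self-contained sketch: read a grayscale image as a log-magnitude spectrogram and invert it back to audio with Griffin-Lim. This is only an illustration of the image-as-spectrogram idea, not code from this repository; the file names, the 80 dB dynamic range, and the 22.05 kHz sample rate are assumptions.

```python
# Illustrative sketch only (not code from the IFICL repo): treat a grayscale image
# as a log-magnitude spectrogram and invert it to a waveform with Griffin-Lim.
# "spectrogram_image.png" and "image_as_sound.wav" are placeholder file names.
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

# Load the image as grayscale; rows become frequency bins, columns become frames.
img = np.asarray(Image.open("spectrogram_image.png").convert("L"), dtype=np.float32)

# Map pixel intensities [0, 255] onto an assumed 80 dB dynamic range, then to amplitude.
magnitude = librosa.db_to_amplitude(img / 255.0 * 80.0 - 80.0)

# Images put low rows at the top; spectrograms put low frequencies at the bottom.
magnitude = magnitude[::-1, :]

# Recover a phase-consistent waveform; n_fft follows from the number of frequency bins.
n_fft = 2 * (magnitude.shape[0] - 1)
audio = librosa.griffinlim(magnitude, n_iter=64, hop_length=n_fft // 4, n_fft=n_fft)

# Write the result; 22.05 kHz is an arbitrary but common sample rate choice.
sf.write("image_as_sound.wav", audio, 22050)
```

The repository itself goes further: rather than inverting an existing picture, it generates spectrograms with diffusion models so that the same canvas is meaningful both as an image and as a sound.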
Alternatives and similar repositories for images-that-sound
Users interested in images-that-sound are comparing it to the repositories listed below
- ☆182 · Updated this week
- Official code for the CVPR 2024 Paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆84 · Updated last year
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound ☆134 · Updated 7 months ago
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model ☆187 · Updated last year
- The Song Describer dataset is an evaluation dataset made of ~1.1k captions for 706 permissively licensed music recordings. ☆163 · Updated last year
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆189 · Updated last year
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆259 · Updated 3 weeks ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆195 · Updated 2 years ago
- This repo contains the official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati… ☆128 · Updated 9 months ago
- The official GitHub page for the survey paper "Foundation Models for Music: A Survey". ☆220 · Updated last year
- ☆26 · Updated 8 months ago
- ☆107 · Updated 2 years ago
- We present a model that can generate accurate 3D sound fields of human bodies from headset microphones and body pose as inputs. ☆88 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆197 · Updated last year
- [ISMIR 2025] A curated list of vision-to-music generation: methods, datasets, evaluation and challenges. ☆106 · Updated 3 months ago
- Mustango: Toward Controllable Text-to-Music Generation ☆380 · Updated 5 months ago
- Fine-tune Stable Audio Open with DiT ControlNet. ☆249 · Updated 6 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆90 · Updated last year
- The official implementation of the IJCAI 2024 paper "MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models". ☆45 · Updated last year
- ☆58 · Updated last year
- We are committing code. ☆44 · Updated 2 years ago
- ☆75 · Updated last year
- Optical illusions using stable diffusion ☆221 · Updated 2 years ago
- High-quality Text-to-Audio Generation with Efficient Diffusion Transformer ☆318 · Updated last month
- Official PyTorch implementation of TokenSet. ☆127 · Updated 8 months ago
- Code for Toon3D https://toon3d.studio/ ☆216 · Updated 7 months ago
- PyTorch implementation of MIMO, Controllable Character Video Synthesis with Spatial Decomposed Modeling, from Alibaba Intelligence Group ☆136 · Updated last year
- Guide diffusion on ImageBind embedding similarity ☆29 · Updated 2 years ago
- ☆83 · Updated last year
- Music production for silent film clips. ☆29 · Updated 6 months ago