IFICL / images-that-sound
Official repo for "Images that Sound": spectrograms that can be viewed as images and played as sound, generated by diffusion models
☆242 · Updated 7 months ago
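The core idea — an image that doubles as a playable spectrogram — can be sketched with standard signal processing: treat a grayscale image as a magnitude spectrogram and recover a waveform with Griffin-Lim phase estimation. This is a classical reconstruction technique, not the repo's diffusion-based method; the function name and parameters below are illustrative, using only SciPy.

```python
import numpy as np
from scipy.signal import stft, istft

def image_to_audio(image, n_iter=32, nperseg=512):
    """Treat a grayscale image as a magnitude spectrogram and recover
    a time-domain waveform via Griffin-Lim phase estimation.

    image: 2-D array with nperseg // 2 + 1 rows (frequency bins).
    """
    mag = np.asarray(image, dtype=np.float64)
    rng = np.random.default_rng(0)
    # Start from random phase, then iteratively project between the
    # time domain and the fixed target magnitude.
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    for _ in range(n_iter):
        _, audio = istft(mag * phase, nperseg=nperseg)
        _, _, spec = stft(audio, nperseg=nperseg)
        # Align the frame count with the target, keep only the phase.
        pad = max(0, mag.shape[1] - spec.shape[1])
        spec = np.pad(spec, ((0, 0), (0, pad)))[:, :mag.shape[1]]
        phase = np.exp(1j * np.angle(spec))
    _, audio = istft(mag * phase, nperseg=nperseg)
    return audio

# Example: a synthetic 257x64 "image" becomes a 1-D waveform.
img = np.abs(np.random.default_rng(1).normal(size=(257, 64)))
wave = image_to_audio(img, nperseg=512)
```

The paper's contribution is making the magnitude spectrogram itself look like a natural image via diffusion guidance; this sketch only covers the final image-to-audio inversion step.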
Alternatives and similar repositories for images-that-sound
Users interested in images-that-sound are comparing it to the repositories listed below.
- Official code for the CVPR 2024 paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆83 · Updated last year
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound ☆130 · Updated 5 months ago
- ☆181 · Updated 8 months ago
- ☆24 · Updated 5 months ago
- Official codes and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generati… ☆187 · Updated last year
- [ISMIR 2025] A curated list of vision-to-music generation: methods, datasets, evaluation, and challenges. ☆92 · Updated 3 weeks ago
- ☆107 · Updated last year
- We present a model that generates accurate 3D sound fields of human bodies from headset microphones and body pose as inputs. ☆88 · Updated last year
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆239 · Updated 2 months ago
- The official GitHub page for the survey paper "Foundation Models for Music: A Survey". ☆214 · Updated last year
- Music production for silent film clips. ☆27 · Updated 4 months ago
- This repo contains the official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptati… ☆127 · Updated 6 months ago
- PyTorch implementation of MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling, from Alibaba Intelligence Group ☆133 · Updated 11 months ago
- The Song Describer dataset is an evaluation dataset of ~1.1k captions for 706 permissively licensed music recordings. ☆157 · Updated last year
- Official PyTorch implementation of TokenSet. ☆122 · Updated 5 months ago
- ☆56 · Updated 10 months ago
- Fine-tune Stable Audio Open with DiT ControlNet. ☆243 · Updated 3 months ago
- ☆164 · Updated 2 weeks ago
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model ☆185 · Updated last year
- ☆20 · Updated last year
- This is the official repository of the ISMIR 2024 paper "Emotion-driven Piano Music Generation via Two-stage Disentanglement and Functional R… ☆58 · Updated 11 months ago
- High-quality Text-to-Audio Generation with Efficient Diffusion Transformer ☆308 · Updated 2 months ago
- VoiceRestore: Flow-Matching Transformers for Universal Speech Restoration ☆183 · Updated 4 months ago
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos ☆23 · Updated 11 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆88 · Updated last year
- Code for Toon3D: https://toon3d.studio/ ☆218 · Updated 5 months ago
- Optical illusions using Stable Diffusion ☆217 · Updated 2 years ago
- A novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. ☆196 · Updated 2 years ago
- Official code of the paper "Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis". ☆46 · Updated 11 months ago
- Official implementation of weights2weights ☆147 · Updated 5 months ago