IFICL / images-that-sound
Official repo for Images that Sound: spectrograms that can be seen as images and played as sound, generated by diffusion models
☆240 · Updated 3 months ago
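To make the tagline concrete, here is a minimal sketch of the underlying trick (not this repo's method; the paper trains diffusion models to generate such spectrograms): interpret a grayscale image as an STFT magnitude and invert it to audio with Griffin-Lim phase reconstruction. The file names `canvas.png`/`canvas.wav` and all parameter choices below are illustrative assumptions.

```python
# Illustrative sketch only, NOT the repo's method: a grayscale image
# interpreted as an STFT magnitude can be inverted to a waveform.
# Assumes: pip install numpy librosa soundfile pillow
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

n_fft, hop, sr = 1022, 256, 22050  # 1022-point FFT -> 512 frequency bins

img = Image.open("canvas.png").convert("L").resize((512, 512))  # width=time, height=freq
S = np.asarray(img, dtype=np.float32) / 255.0  # rows run top-to-bottom of the image
S = S[::-1]        # flip so row 0 is the lowest frequency bin
S = S ** 2.0       # crude contrast curve; real systems usually map pixels to log-magnitude

# Griffin-Lim iteratively estimates a phase consistent with this magnitude.
y = librosa.griffinlim(S, n_iter=64, n_fft=n_fft, hop_length=hop)
sf.write("canvas.wav", y, sr)  # ~6 seconds of audio at these settings
```

Playing `canvas.wav` while viewing `canvas.png` shows why such a canvas can serve as both image and sound; the paper's contribution is generating spectrograms that look meaningful *and* sound meaningful at the same time.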
Alternatives and similar repositories for images-that-sound:
Users interested in images-that-sound are comparing it to the repositories listed below.
- ☆166 · Updated 4 months ago
- Metrics for evaluating music and audio generative models – with a focus on long-form, full-band, and stereo generations. ☆214 · Updated 2 weeks ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆184 · Updated 11 months ago
- ☆49 · Updated 6 months ago
- Official repository of the paper "MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization". ☆180 · Updated 3 months ago
- This repo contains the official PyTorch implementation of: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation ☆119 · Updated 2 months ago
- PyTorch implementation of Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities. ☆465 · Updated last week
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆48 · Updated 7 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆86 · Updated last year
- Official code for the CVPR 2024 paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆77 · Updated 10 months ago
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model ☆164 · Updated 9 months ago
- Official PyTorch implementation of TokenSet. ☆116 · Updated last month
- A curated list of vision-to-music generation: methods, datasets, evaluation and challenges. ☆58 · Updated 3 weeks ago
- Official code and models of the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆182 · Updated last year
- Official code for SEE-2-SOUND: Zero-Shot Spatial Environment-to-Spatial Sound ☆123 · Updated last month
- The Song Describer dataset is an evaluation dataset made of ~1.1k captions for 706 permissively licensed music recordings. ☆151 · Updated last year
- We present a model that can generate accurate 3D sound fields of human bodies from headset microphones and body pose as inputs. ☆84 · Updated 11 months ago
- Encode and decode audio samples to/from compressed latent representations! ☆203 · Updated 2 months ago
- High-quality Text-to-Audio Generation with Efficient Diffusion Transformer ☆268 · Updated 3 weeks ago
- Mustango: Toward Controllable Text-to-Music Generation ☆361 · Updated last month
- Code for Toon3D https://toon3d.studio/ ☆213 · Updated last month
- VoiceRestore: Flow-Matching Transformers for Universal Speech Restoration ☆163 · Updated 2 weeks ago
- Official code for the CVPR 2024 paper Diff-BGM ☆61 · Updated 6 months ago
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆143 · Updated 10 months ago
- Unified automatic quality assessment for speech, music, and sound. ☆475 · Updated this week
- ☆107 · Updated last year
- ☆63 · Updated last year
- The official GitHub page for the survey paper "Foundation Models for Music: A Survey". ☆201 · Updated 8 months ago
- Fine-tune Stable Audio Open with DiT ControlNet. ☆218 · Updated 2 months ago
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos ☆21 · Updated 7 months ago