kaist-ami / Sound2Scene
☆35 · Updated 2 months ago
Alternatives and similar repositories for Sound2Scene
Users interested in Sound2Scene are comparing it to the repositories listed below.
- ☆33 · Updated 7 months ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation · ☆52 · Updated 9 months ago
- ☆54 · Updated 8 months ago
- This repo contains the official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … · ☆84 · Updated last year
- [NAACL'24] Repository for "SMILE: Multimodal Dataset for Understanding Laughter in Video with Language Models" · ☆13 · Updated last year
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners · ☆148 · Updated 11 months ago
- [NeurIPS 2023] AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis · ☆27 · Updated last year
- This repository is for The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion (ICCV 2023) · ☆23 · Updated last year
- ☆26 · Updated 10 months ago
- The official code for "Dance-to-Music Generation with Encoder-based Textual Inversion" · ☆22 · Updated last week
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation · ☆74 · Updated last year
- [arXiv 2025] Official implementation of "Shot-by-Shot: Film-Grammar-Aware Training-Free Audio Description Generation". Junyu Xie, Tengda … · ☆13 · Updated 2 months ago
- Official PyTorch implementation of ReWaS (AAAI'25): "Read, Watch and Scream! Sound Generation from Text and Video" · ☆42 · Updated 6 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions · ☆31 · Updated 4 months ago
- [CVPR 2025] UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing · ☆27 · Updated 2 months ago
- ☆31 · Updated last year
- ☆16 · Updated 6 months ago
- Codebase for the paper "TIM: A Time Interval Machine for Audio-Visual Action Recognition" · ☆41 · Updated 7 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models · ☆190 · Updated last year
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" · ☆26 · Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) · ☆64 · Updated 4 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" · ☆86 · Updated last year
- Demo page of TAVGBench: Benchmarking Text to Audible-Video Generation · ☆13 · Updated 2 months ago
- Vision Transformers are Parameter-Efficient Audio-Visual Learners · ☆99 · Updated last year
- "SlimFlow: Training Smaller One-Step Diffusion Models with Rectified Flow", Yuanzhi Zhu, Xingchao Liu, Qiang Liu · ☆51 · Updated 7 months ago
- Official codebase for "Unveiling the Power of Audio-Visual Early Fusion Transformers with Dense Interactions through Masked Modeling" · ☆32 · Updated 10 months ago
- A toolkit for computing Fréchet Inception Distance (FID) and Fréchet Video Distance (FVD) metrics · ☆30 · Updated 3 weeks ago
- [AAAI 2023 Summer Symposium, Best Paper Award] Taming Diffusion Models for Music-driven Conducting Motion Generation · ☆26 · Updated last year
- [CVPR 2024] Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with LLMs · ☆13 · Updated last year
- Implementation of the paper "MaskBit: Embedding-free Image Generation from Bit Tokens" · ☆78 · Updated 2 months ago