jaeyeonkim99 / visage
Official implementation of "ViSAGe: Video-to-Spatial Audio Generation" (ICLR 2025)
⭐38 · Updated 3 months ago
Alternatives and similar repositories for visage
Users interested in visage are comparing it to the libraries listed below.
- 🦇 Encoder of BAT (Learning to Reason about Spatial Sounds with Large Language Models) ⭐67 · Updated 9 months ago
- This package aims at simplifying the download of the AudioCaps dataset. ⭐36 · Updated 2 years ago
- ⭐42 · Updated 2 years ago
- The official repo for "Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation" ⭐55 · Updated 5 months ago
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation ⭐43 · Updated last year
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025, Oral) ⭐31 · Updated 11 months ago
- Official PyTorch implementation of ReWaS (AAAI'25) "Read, Watch and Scream! Sound Generation from Text and Video" ⭐44 · Updated 11 months ago
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS'24) ⭐55 · Updated 8 months ago
- ⭐41 · Updated 8 months ago
- Ego4DSounds: A diverse egocentric dataset with high action-audio correspondence ⭐18 · Updated last year
- A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models (ICASSP 2024) ⭐58 · Updated last year
- A 6-million Audio-Caption Paired Dataset Built with an LLM- and ALM-based Automatic Pipeline ⭐191 · Updated 11 months ago
- Visually-Aware Audio Captioning ⭐43 · Updated 2 years ago
- ⭐114 · Updated 6 months ago
- Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark ⭐59 · Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ⭐97 · Updated 2 months ago
- ⭐19 · Updated last year
- [IJCAI 2024] EAT: Self-Supervised Pre-Training with Efficient Audio Transformer ⭐204 · Updated last week
- Audio Captioning datasets for PyTorch. ⭐124 · Updated 4 months ago
- Implementation of the paper, T-FOLEY: A Controllable Waveform-Domain Diffusion Model for Temporal-Event-Guided Foley Sound Synthesis, ac… ⭐34 · Updated last year
- [ACL 2024] This is the PyTorch code for our paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ⭐94 · Updated last year
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 ⭐73 · Updated last year
- Code for "Simple Pooling Front-ends for Efficient Audio Classification", ICASSP 2023 ⭐57 · Updated 2 years ago
- [NeurIPS 2024] Code, Dataset, Samples for the VATT paper: Tell What You Hear From What You See - Video to Audio Generation Through Text… ⭐34 · Updated 4 months ago
- Repository of the WACV'24 paper "Can CLIP Help Sound Source Localization?" ⭐33 · Updated 9 months ago
- Inference codebase for "Cacophony: An Improved Contrastive Audio-Text Model". Preprint: https://arxiv.org/abs/2402.06986 ⭐48 · Updated last year
- PyTorch implementation for "V2C: Visual Voice Cloning" ⭐32 · Updated 2 years ago
- Code and generated sounds for "Conditional Sound Generation Using Neural Discrete Time-Frequency Representation Learning", MLSP 2021 ⭐69 · Updated 4 years ago
- Code for the paper "Learning Audio-Visual Dereverberation" ⭐30 · Updated 3 years ago
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ⭐124 · Updated 2 years ago