v-iashin / Synchformer
Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024)
☆99 · Updated 3 months ago
Alternatives and similar repositories for Synchformer
Users interested in Synchformer are comparing it to the repositories listed below.
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025, Oral) ☆31 · Updated last year
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆92 · Updated 2 years ago
- ☆47 · Updated 8 months ago
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆27 · Updated 2 years ago
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS'24) ☆57 · Updated 8 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆115 · Updated 7 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆200 · Updated last year
- Official PyTorch implementation of ReWaS (AAAI'25), "Read, Watch and Scream! Sound Generation from Text and Video" ☆44 · Updated last year
- A VAE modified from the Descript Audio Codec, with the RVQ replaced by a VAE ☆87 · Updated last year
- Official implementation of EnCLAP (ICASSP 2024) ☆94 · Updated last year
- The official repo for "Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation" ☆56 · Updated 5 months ago
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 ☆73 · Updated last year
- A 6-Million Audio-Caption Paired Dataset Built with an LLM- and ALM-Based Automatic Pipeline ☆192 · Updated last year
- A small audio language model for reasoning ☆83 · Updated 3 weeks ago
- Official PyTorch implementation of "AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation" ☆87 · Updated last year
- LAFMA: A Latent Flow Matching Model for Text-to-Audio Generation (INTERSPEECH 2024) ☆43 · Updated last year
- Official implementation of the pipeline presented in "I Hear Your True Colors: Image Guided Audio Generation" ☆124 · Updated 2 years ago
- [NeurIPS 2024] Code, dataset, and samples for the VATT paper "Tell What You Hear From What You See - Video to Audio Generation Through Text" ☆34 · Updated 5 months ago
- A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models (ICASSP 2024) ☆58 · Updated last year
- ☆41 · Updated 8 months ago
- Official implementation of "ViSAGe: Video-to-Spatial Audio Generation" (ICLR 2025) ☆39 · Updated 3 months ago
- Ego4DSounds: A diverse egocentric dataset with high action-audio correspondence ☆18 · Updated last year
- A neural full-band audio codec for general audio sampled at 48 kHz at 7.5 kbps or 4.5 kbps ☆193 · Updated 5 months ago
- Official code and models for the paper "Auffusion: Leveraging the Power of Diffusion and Large Language Models for Text-to-Audio Generation" ☆190 · Updated last year
- Official code for "EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting" ☆104 · Updated 2 months ago
- PAM is a no-reference audio quality metric for audio generation tasks ☆76 · Updated last year
- ☆42 · Updated 2 years ago
- PyTorch implementation of "V2C: Visual Voice Cloning" ☆32 · Updated 2 years ago
- Official code for the CVPR'24 paper Diff-BGM ☆72 · Updated last year
- [ACL 2024] Official PyTorch code for "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆95 · Updated last year