yzxing87 / Seeing-and-Hearing
[CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
☆145 · Updated 10 months ago
Alternatives and similar repositories for Seeing-and-Hearing
Users interested in Seeing-and-Hearing are comparing it to the repositories listed below.
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation · ☆50 · Updated 8 months ago
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models · ☆186 · Updated last year
- ☆33 · Updated 6 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions · ☆31 · Updated 3 months ago
- PyTorch implementation of InstructAny2Pix: Flexible Visual Editing via Multimodal Instruction Following · ☆30 · Updated 4 months ago
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing · ☆106 · Updated last month
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization · ☆49 · Updated 5 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation (TMLR 2024) · ☆240 · Updated 11 months ago
- Official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … · ☆83 · Updated 11 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" · ☆86 · Updated last year
- ☆16 · Updated 5 months ago
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models · ☆275 · Updated 6 months ago
- Official PyTorch implementation of Improving Long-Text Alignment for Text-to-Image Diffusion Models (LongAlign) · ☆73 · Updated last month
- ☆52 · Updated 7 months ago
- ☆59 · Updated 10 months ago
- Improving Video Generation with Human Feedback · ☆182 · Updated 2 months ago
- [ICLR 2024] Official implementation of LLM-grounded Video Diffusion Models (LVD) · ☆154 · Updated last year
- [CVPR 2024] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models · ☆166 · Updated 8 months ago
- ☆67 · Updated 2 months ago
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-Image Diffusion Models (ICLR 2024) · ☆138 · Updated last year
- [NeurIPS 2024] Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis · ☆67 · Updated 4 months ago
- Official implementation of the CVPR 2024 paper "EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models" · ☆82 · Updated 4 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers · ☆117 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] Official implementation of "MotionBooth: Motion-Aware Customized Text-to-Video Generation" · ☆132 · Updated 7 months ago
- Ming: facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM · ☆107 · Updated last week
- Official implementation of "JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization" · ☆61 · Updated last month
- STAR: Scale-wise Text-to-Image Generation via Auto-Regressive Representations · ☆141 · Updated 3 months ago
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator · ☆96 · Updated last year
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition · ☆154 · Updated 4 months ago
- ☆35 · Updated last month