YingqingHe / Awesome-LLMs-meet-Multimodal-Generation
π₯π₯π₯ A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio).
Related projects:
- A list for Text-to-Video and Image-to-Video works
- A repository for organizing papers, code, and other resources related to unified multimodal models
- [CVPR 2024 Highlight] VBench: We Evaluate Video Generation
- A reading list of video generation
- A collection of awesome video generation studies
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation
- Official code of SmartEdit [CVPR 2024 Highlight]
- A list of works on the evaluation of visual generation models, including evaluation metrics, models, and systems
- [CVPR 2024] Intelligent Grimm: Open-ended Visual Storytelling via Latent Diffusion Models
- [NeurIPS 2023] T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-Image Generation
- [CVPR 2024] LAMP: Learn a Motion Pattern for Few-Shot-Based Video Generation
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions
- Open-MAGVIT2: Democratizing Autoregressive Visual Generation
- Official repo for the paper "MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions"
- [CVPR 2024] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models
- Long Context Transfer from Language to Vision
- Diffusion Model-Based Image Editing: A Survey (arXiv)
- [ICLR 2024] The official implementation of the paper "VDT: General-purpose Video Diffusion Transformers via Mask Modeling", by Haoyu Lu, Guoxi…
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing
- A collection of papers and code for CVPR 2024 / ECCV 2024 AIGC
- Official implementation of SEED-LLaMA (ICLR 2024)
- [ICML 2024 Spotlight] FiT: Flexible Vision Transformer for Diffusion Model
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
- Diffusion Feedback Helps CLIP See Better
- [CVPR 2024] MotionEditor: the first diffusion-based model capable of video motion editing
- A collection of awesome text-to-image generation studies
- Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis