shxie2020 / Awesome-UGVFM
A collection of vision foundation models unifying understanding and generation.
☆56 · Updated 6 months ago
Alternatives and similar repositories for Awesome-UGVFM
Users interested in Awesome-UGVFM are comparing it to the libraries listed below.
- [ICCV 2025] Code release of Harmonizing Visual Representations for Unified Multimodal Understanding and Generation ☆141 · Updated last month
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆126 · Updated last month
- Empowering Unified MLLM with Multi-granular Visual Generation ☆126 · Updated 5 months ago
- [CVPR 2025 (Oral)] Open implementation of "RandAR" ☆177 · Updated 3 months ago
- GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning ☆87 · Updated last month
- Official implementation of the paper "Transfer between Modalities with MetaQueries" ☆139 · Updated this week
- [ICML 2025] Code and data for the paper "Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation" ☆114 · Updated 8 months ago
- Official code repo of the CVPR 2025 paper "PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation" ☆38 · Updated 3 months ago
- [CVPR 2025] B^2-DiffuRL, a framework for RL-based diffusion model fine-tuning ☆32 · Updated 3 months ago
- [COLING 2025] Code for the paper "Is Parameter Collision Hindering Continual Learning in LLMs?" ☆34 · Updated 7 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆101 · Updated last month
- UniFork: Exploring Modality Alignment for Unified Multimodal Understanding and Generation ☆38 · Updated last week
- Code for: "Long-Context Autoregressive Video Modeling with Next-Frame Prediction"