mit-han-lab / vila-u
[ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
☆402 · Updated 6 months ago
Alternatives and similar repositories for vila-u
Users who are interested in vila-u are comparing it to the libraries listed below.
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) ☆648 · Updated last month
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" ☆218 · Updated 6 months ago
- [CVPR 2025] 🔥 Official implementation of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" ☆398 · Updated 3 months ago
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆294 · Updated last month
- [NeurIPS 2025 Spotlight] A Unified Tokenizer for Visual Generation and Understanding ☆442 · Updated this week
- ☆267 · Updated 3 weeks ago
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆605 · Updated this week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆357 · Updated 3 months ago
- PyTorch implementation of the paper "SimpleAR: Pushing the Frontier of Autoregressive Visual Generation" ☆413 · Updated 4 months ago
- Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning ☆228 · Updated 5 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆125 · Updated 7 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆130 · Updated 5 months ago
- A repository tracking the latest autoregressive visual generation papers ☆409 · Updated 4 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆456 · Updated 9 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 9 months ago
- Cambrian-S: Towards Spatial Supersensing in Video ☆128 · Updated this week
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆292 · Updated 9 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆460 · Updated 11 months ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆589 · Updated 2 weeks ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆194 · Updated 4 months ago
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆228 · Updated 2 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆736 · Updated last month
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆190 · Updated 3 months ago
- 📖 A repository organizing papers, code, and other resources related to unified multimodal models ☆324 · Updated 3 weeks ago
- Official repository for VisionZip (CVPR 2025) ☆368 · Updated 3 months ago
- Official implementation of "LaViDa: A Large Diffusion Language Model for Multimodal Understanding" ☆165 · Updated 3 weeks ago
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated last month
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆220 · Updated 3 weeks ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL ☆248 · Updated last week
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆360 · Updated last month