mit-han-lab / vila-u
[ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
☆313 · Updated last week
Alternatives and similar repositories for vila-u:
Users interested in vila-u are comparing it to the libraries listed below.
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning" ☆156 · Updated 2 weeks ago
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆236 · Updated last week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆324 · Updated 2 months ago
- [CVPR 2025] 🔥 Official implementation of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" ☆318 · Updated 2 months ago
- PyTorch implementation of the paper "SimpleAR: Pushing the Frontier of Autoregressive Visual Generation" ☆333 · Updated 2 weeks ago
- [ICLR 2025] Autoregressive Video Generation without Vector Quantization ☆492 · Updated 2 weeks ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆438 · Updated 5 months ago
- A Unified Tokenizer for Visual Generation and Understanding ☆270 · Updated 3 weeks ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆422 · Updated 3 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 · Updated 8 months ago
- Long Context Transfer from Language to Vision ☆374 · Updated last month
- [ICLR 2025] OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation ☆288 · Updated 2 months ago
- A repository tracking the latest autoregressive visual generation papers ☆300 · Updated this week
- Official implementation of Unified Reward Model for Multimodal Understanding and Generation ☆243 · Updated this week
- Official implementation of the Law of Vision Representation in MLLMs ☆154 · Updated 5 months ago
- Adaptive Caching for Faster Video Generation with Diffusion Transformers ☆147 · Updated 6 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆119 · Updated 3 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ☆151 · Updated last month
- Scaling Diffusion Transformers with Mixture of Experts ☆317 · Updated 7 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆489 · Updated last week
- Official repo and evaluation implementation of VSI-Bench ☆475 · Updated 2 months ago
- Official implementation of the paper "REPA-E: Unlocking VAE for End-to-End Tuning of Latent Diffusion Transformers" ☆190 · Updated 3 weeks ago
- High-performance Image Tokenizers for VAR and AR ☆255 · Updated last week
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆218 · Updated 10 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 4 months ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆277 · Updated 3 months ago
- Official implementation of our NeurIPS 2024 paper "Don't Look Twice: Run-Length Tokenization for Faster Video Transformers" ☆210 · Updated last month
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆188 · Updated last month
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆155 · Updated 2 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models ☆125 · Updated last year