jacklishufan / LaViDa
Official Implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding
☆158 · Updated 3 months ago
Alternatives and similar repositories for LaViDa
Users interested in LaViDa are comparing it to the repositories listed below
- Code for "MetaMorph: Multimodal Understanding and Generation via Instruction Tuning"☆214 · Updated 6 months ago
- ☆254 · Updated last week
- Official Implementation of Muddit [Meissonic II]: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model☆92 · Updated 2 weeks ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model☆108 · Updated 3 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025)☆177 · Updated 2 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models"☆191 · Updated 4 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025]☆238 · Updated last month
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision☆160 · Updated last month
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs☆168 · Updated 2 weeks ago
- Visual Planning: Let's Think Only with Images☆278 · Updated 5 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models☆87 · Updated last year
- ☆130 · Updated last week
- An open-source implementation of CLIP (with TULIP support)☆162 · Updated 5 months ago
- ☆75 · Updated 4 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025]☆90 · Updated 2 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation☆71 · Updated last month
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation☆226 · Updated 2 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…"☆77 · Updated 10 months ago
- Code for the paper "Scaling Language-Free Visual Representation Learning" (Web-SSL)☆187 · Updated 5 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025]☆157 · Updated 4 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation☆398 · Updated 6 months ago
- Implementation for "The Scalability of Simplicity: Empirical Analysis of Vision-Language Learning with a Single Transformer"☆67 · Updated last month
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"☆148 · Updated 11 months ago
- The author's implementation of FUDOKI, a multimodal large language model purely based on discrete flow matching☆59 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture☆211 · Updated 9 months ago
- ☆90 · Updated 4 months ago
- ☆61 · Updated 5 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project☆176 · Updated 7 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation☆90 · Updated 7 months ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models☆224 · Updated 3 months ago