jacklishufan / LaViDa
Official implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding
☆174 · Updated last month
Alternatives and similar repositories for LaViDa
Users interested in LaViDa are comparing it to the repositories listed below.
- Code for MetaMorph: Multimodal Understanding and Generation via Instruction Tuning ☆222 · Updated 7 months ago
- ☆288 · Updated last month
- Official implementation of Muddit [Meissonic II]: Liberating Generation Beyond Text-to-Image with a Unified Discrete Diffusion Model ☆95 · Updated last month
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆197 · Updated 4 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆195 · Updated 5 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆173 · Updated last week
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation ☆72 · Updated 2 months ago
- [NeurIPS 2025] Pixel-Level Reasoning Model trained with RL ☆251 · Updated 3 weeks ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆79 · Updated 11 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆103 · Updated last month
- ☆79 · Updated 5 months ago
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs ☆170 · Updated last month
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆112 · Updated 4 months ago
- [NeurIPS 2025] Official implementation of Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents ☆43 · Updated last week
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆90 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- [NeurIPS 2025] Official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆166 · Updated 6 months ago
- ☆135 · Updated last month
- ☆64 · Updated 6 months ago
- Code for the paper "Scaling Language-Free Visual Representation Learning" (Web-SSL) ☆189 · Updated 7 months ago
- An open-source implementation of CLIP (with TULIP support) ☆163 · Updated 6 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆133 · Updated 6 months ago
- [ICCV 2025] Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" ☆95 · Updated 4 months ago
- Visual Planning: Let's Think Only with Images ☆283 · Updated 6 months ago
- [NeurIPS 2024] Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective ☆73 · Updated last year
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆58 · Updated 5 months ago
- ☆63 · Updated 4 months ago
- ☆94 · Updated 5 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆94 · Updated 9 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆131 · Updated 3 months ago