multimodal-reasoning-lab / Bagel-Zebra-CoT
https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT
☆78 · Updated last month
Alternatives and similar repositories for Bagel-Zebra-CoT
Users interested in Bagel-Zebra-CoT are comparing it to the repositories listed below.
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆142 · Updated last week
- ☆75 · Updated 3 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆74 · Updated 2 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆64 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆117 · Updated last month
- [ICLR 2025 Spotlight] Official code repository for Interleaved Scene Graph ☆28 · Updated last month
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated last year
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆108 · Updated 3 months ago
- ☆89 · Updated 3 months ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆39 · Updated 7 months ago
- ☆45 · Updated 9 months ago
- Official implementation of OpenING: A Comprehensive Benchmark for Judging Open-ended Interleaved Image-Text Generation ☆32 · Updated 2 months ago
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆128 · Updated 4 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆139 · Updated 2 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆118 · Updated 5 months ago
- The code repository of UniRL ☆42 · Updated 4 months ago
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆43 · Updated 2 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 3 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 8 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆87 · Updated 7 months ago
- ☆39 · Updated 4 months ago
- [COLM 2024] Code for "Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?" ☆22 · Updated last year
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆70 · Updated 2 months ago
- [ICCV 2025] Code release of "Harmonizing Visual Representations for Unified Multimodal Understanding and Generation" ☆169 · Updated 4 months ago
- Code for the ICLR 2025 paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM" ☆72 · Updated 5 months ago
- [CVPR 2025] A framework named B^2-DiffuRL for RL-based diffusion model fine-tuning ☆37 · Updated 6 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆88 · Updated 3 months ago