multimodal-reasoning-lab / Bagel-Zebra-CoT
https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT
☆67 · Updated 2 weeks ago
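A minimal sketch of pulling the linked Zebra-CoT dataset with the Hugging Face `datasets` library. The split name and the field inspection are assumptions for illustration; check the dataset card at the URL above for the actual schema.

```python
# Sketch: load the Zebra-CoT dataset from the Hugging Face Hub.
# The "train" split and the printed fields are assumptions, not confirmed
# by this listing; consult the dataset card for the real layout.
from datasets import load_dataset

ds = load_dataset("multimodal-reasoning-lab/Zebra-CoT", split="train")
print(ds)            # dataset features and number of rows
print(ds[0].keys())  # field names of the first example
```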
Alternatives and similar repositories for Bagel-Zebra-CoT
Users interested in Bagel-Zebra-CoT are comparing it to the repositories listed below
- ☆45 · Updated 7 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆50 · Updated last month
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆40 · Updated last month
- ☆70 · Updated 2 months ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆39 · Updated last year
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆83 · Updated 5 months ago
- ☆87 · Updated 2 months ago
- Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning ☆103 · Updated last month
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆94 · Updated 2 weeks ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 11 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? ☆66 · Updated last month
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆47 · Updated last month
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆98 · Updated last month
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆138 · Updated 3 weeks ago
- Official implementation of Next Block Prediction: Video Generation via Semi-Autoregressive Modeling ☆38 · Updated 6 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆117 · Updated 3 weeks ago
- ☆23 · Updated 2 months ago
- The code repository of UniRL ☆37 · Updated 2 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 · Updated 11 months ago
- ☆38 · Updated last month
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆62 · Updated 6 months ago
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆36 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆55 · Updated last month
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆29 · Updated last month
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆72 · Updated 2 weeks ago
- [CVPR 25] A framework named B^2-DiffuRL for RL-based diffusion model fine-tuning ☆34 · Updated 4 months ago
- Code for the ICLR 2025 paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM" ☆70 · Updated 4 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆74 · Updated last month
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆103 · Updated 4 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆27 · Updated 2 weeks ago