Yushi-Hu / VisualSketchpad
Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
☆222 · Updated 7 months ago
Alternatives and similar repositories for VisualSketchpad
Users interested in VisualSketchpad are comparing it to the repositories listed below.
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆195 · Updated 2 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆88 · Updated 2 weeks ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆141 · Updated 6 months ago
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆205 · Updated last week
- Official code for paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆215 · Updated 2 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆158 · Updated 2 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated 2 weeks ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆112 · Updated last month
- Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis ☆132 · Updated 4 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆299 · Updated 2 weeks ago
- Long Context Transfer from Language to Vision ☆378 · Updated 2 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆147 · Updated 10 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆181 · Updated 8 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆125 · Updated 11 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆120 · Updated last year
- ☆59 · Updated 3 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆302 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch" (CVPR 2025) ☆79 · Updated last week
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆148 · Updated 11 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities ☆107 · Updated 3 weeks ago
- Pixel-Level Reasoning Model trained with RL ☆92 · Updated this week
- A Survey on Benchmarks of Multimodal Large Language Models ☆105 · Updated 2 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆151 · Updated 8 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models ☆71 · Updated 6 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆159 · Updated 2 months ago
- An RLHF Infrastructure for Vision-Language Models ☆176 · Updated 6 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆63 · Updated 10 months ago
- ☆142 · Updated last year
- ☆102 · Updated last month