stanfordmlgroup / ManyICL
☆137 · Updated 10 months ago
Alternatives and similar repositories for ManyICL:
Users interested in ManyICL are comparing it to the libraries listed below.
- Codes for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆175 · Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆65 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆96 · Updated last month
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆70 · Updated 3 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆199 · Updated 2 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆273 · Updated 4 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆146 · Updated last week
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆75 · Updated 5 months ago
- ☆169 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆99 · Updated 3 months ago
- Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆120 · Updated 2 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆63 · Updated 3 weeks ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆112 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆148 · Updated last week
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆403 · Updated 2 weeks ago
- [ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆193 · Updated this week
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆90 · Updated 3 months ago
- [CVPR2025] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆165 · Updated last week
- Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆50 · Updated 4 months ago
- ☆83 · Updated 2 weeks ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆62 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆65 · Updated 9 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆135 · Updated 5 months ago
- Official code for Paper "Mantis: Multi-Image Instruction Tuning" [TMLR2024] ☆208 · Updated this week
- ☆88 · Updated last month
- Rethinking Step-by-step Visual Reasoning in LLMs ☆279 · Updated 2 months ago
- [ICLR 2024] Towards Robust Multi-Modal Reasoning via Model Selection ☆10 · Updated last year
- Code for paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆49 · Updated 3 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆113 · Updated 4 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆92 · Updated last week