stanfordmlgroup / ManyICL
☆ 139 · Updated 10 months ago
Alternatives and similar repositories for ManyICL:
Users interested in ManyICL are comparing it to the repositories listed below.
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆ 189 · Updated 5 months ago
- Auto-interpretation pipeline and many other functionalities for multimodal SAE analysis. ☆ 126 · Updated 2 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆ 65 · Updated 2 months ago
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs ☆ 104 · Updated 3 months ago
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ☆ 70 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆ 99 · Updated last month
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. ☆ 102 · Updated this week
- ☆ 43 · Updated this week
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆ 116 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆ 106 · Updated 8 months ago
- Code & dataset for the paper "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" ☆ 52 · Updated 5 months ago
- [ICLR 2025 Oral] UGround: Universal GUI Visual Grounding for GUI Agents ☆ 203 · Updated 3 weeks ago
- ☆ 35 · Updated last month
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆ 115 · Updated 4 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆ 151 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆ 65 · Updated 10 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆ 82 · Updated 11 months ago
- Resources for our paper "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆ 91 · Updated 5 months ago
- The codebase for our EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo…" ☆ 76 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆ 112 · Updated 3 weeks ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆ 120 · Updated 8 months ago
- ☆ 173 · Updated last year
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆ 63 · Updated last month
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆ 69 · Updated 6 months ago
- [TMLR] Public code repository for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆ 131 · Updated 5 months ago
- [CVPR 2024] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆ 280 · Updated 5 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆ 71 · Updated 3 weeks ago
- ☆ 121 · Updated 10 months ago
- [TMLR 2024] Official code for the paper "Mantis: Multi-Image Instruction Tuning" ☆ 214 · Updated 3 weeks ago
- ☆ 87 · Updated 2 months ago