stanfordmlgroup / ManyICL
☆145 · Updated last year
Alternatives and similar repositories for ManyICL
Users interested in ManyICL are comparing it to the repositories listed below.
- ☆136 · Updated 9 months ago
- [NAACL 2025 Oral] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models · ☆54 · Updated 7 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. · ☆86 · Updated 10 months ago
- [TMLR 2025] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models · ☆145 · Updated 2 months ago
- ☆203 · Updated last year
- [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs · ☆136 · Updated 8 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* · ☆109 · Updated 7 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. · ☆171 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ☆91 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" · ☆112 · Updated last year
- [ACL 2025 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective · ☆74 · Updated 6 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement · ☆123 · Updated 5 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" · ☆182 · Updated 9 months ago
- ☆73 · Updated 7 months ago
- ☆226 · Updated 10 months ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models · ☆79 · Updated 6 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture · ☆212 · Updated 11 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models · ☆123 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. · ☆70 · Updated last year
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … · ☆97 · Updated last year
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs · ☆58 · Updated 4 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents · ☆131 · Updated 9 months ago
- [NeurIPS 2024] MATH-Vision dataset and code to measure multimodal mathematical reasoning capabilities. · ☆126 · Updated 7 months ago
- [ECCV 2024] BenchLMM: Benchmarking Cross-style Visual Capability of Large Multimodal Models · ☆86 · Updated last year
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation · ☆146 · Updated last year
- ☆105 · Updated 6 months ago
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent · ☆99 · Updated 2 months ago
- ☆106 · Updated 2 weeks ago
- ☆140 · Updated 3 months ago
- ☆213 · Updated 6 months ago