JieyuZ2 / TaskMeAnything
[NeurIPS 2024] A task generation and model evaluation system for multimodal language models.
☆63 · Updated 2 months ago
Alternatives and similar repositories for TaskMeAnything:
Users interested in TaskMeAnything are comparing it to the repositories listed below.
- Auto-interpretation pipeline and other utilities for multimodal SAE analysis. ☆104 · Updated 3 weeks ago
- m&ms: A benchmark to evaluate tool use for multi-step, multimodal tasks. ☆36 · Updated 4 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision. ☆59 · Updated 7 months ago
- An instruction data generation system for multimodal language models. ☆30 · Updated 2 weeks ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆63 · Updated 8 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents". ☆41 · Updated last week
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models". ☆35 · Updated last year
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25). ☆55 · Updated 2 weeks ago
- [NAACL 2025] Multimodal Needle in a Haystack (MMNeedle): Benchmarking Long-Context Capability of Multimodal Large Language Models. ☆38 · Updated this week
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?". ☆48 · Updated 3 months ago
- ☆45 · Updated last month
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆115 · Updated 7 months ago
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models. ☆73 · Updated 5 months ago
- Code for the paper "AutoPresent: Designing Structured Visuals From Scratch". ☆48 · Updated last month
- Code for "Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models". ☆160 · Updated 3 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding". ☆46 · Updated 2 months ago
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models. ☆128 · Updated last month
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning. ☆78 · Updated 9 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs. ☆73 · Updated 3 months ago
- ☆58 · Updated last month
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling". ☆127 · Updated 3 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation. ☆111 · Updated last year
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆16 · Updated last week
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models". ☆76 · Updated 7 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining". ☆140 · Updated 3 weeks ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective. ☆58 · Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆62 · Updated 2 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*. ☆86 · Updated last month
- Code repo for "Read Anywhere Pointed: Layout-aware GUI Screen Reading with Tree-of-Lens Grounding". ☆25 · Updated 6 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆57 · Updated 9 months ago