visual-haystacks / mirage
🔥 [ICLR 2025] Official PyTorch Model "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark"
⭐ 15 · Updated 5 months ago
Alternatives and similar repositories for mirage
Users interested in mirage are comparing it to the repositories listed below.
- 🔥 [ICLR 2025] Official Benchmark Toolkits for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" · ⭐ 29 · Updated 5 months ago
- Official implementation of the Law of Vision Representation in MLLMs · ⭐ 160 · Updated 8 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" · ⭐ 143 · Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models · ⭐ 76 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding · ⭐ 64 · Updated last month
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) · ⭐ 63 · Updated last month
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension · ⭐ 68 · Updated last year
- Matryoshka Multimodal Models · ⭐ 111 · Updated 5 months ago
- (unnamed repository) · ⭐ 45 · Updated 6 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… · ⭐ 130 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision · ⭐ 41 · Updated 3 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ⭐ 86 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision · ⭐ 65 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models · ⭐ 81 · Updated 10 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models · ⭐ 124 · Updated 2 months ago
- Code and datasets for "What's 'up' with vision-language models? Investigating their struggle with spatial reasoning" · ⭐ 54 · Updated last year
- (unnamed repository) · ⭐ 27 · Updated 8 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs · ⭐ 147 · Updated 11 months ago
- (unnamed repository) · ⭐ 18 · Updated 2 months ago
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" · ⭐ 185 · Updated 9 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) · ⭐ 75 · Updated last month
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation · ⭐ 78 · Updated last month
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" · ⭐ 176 · Updated 3 weeks ago
- [ACL 2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models · ⭐ 77 · Updated last month
- [EMNLP '23] Official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" · ⭐ 85 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" · ⭐ 55 · Updated 10 months ago
- [SCIS 2024] Official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" · ⭐ 55 · Updated 8 months ago
- Official repository for the CoMM Dataset · ⭐ 43 · Updated 6 months ago
- Code for the paper "Scaling Language-Free Visual Representation Learning" (Web-SSL) · ⭐ 157 · Updated 2 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" · ⭐ 29 · Updated 9 months ago