zeyofu / BLINK_Benchmark
This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive" (ECCV 2024, https://arxiv.org/abs/2404.12390).
☆147 · Updated 3 weeks ago
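Since the repo above ships evaluation code for BLINK, here is a minimal sketch of how one might pull a single BLINK task with the Hugging Face `datasets` library. The dataset ID `BLINK-Benchmark/BLINK`, the task name `Relative_Depth`, and the field names (`question`, `choices`, `answer`) are assumptions, not confirmed by this page; check the repo's README for the exact interface.

```python
# A minimal sketch, not the official evaluation script from this repo.
# Assumptions: BLINK is published on the Hugging Face Hub as
# "BLINK-Benchmark/BLINK", with one config per task and a "val" split
# whose records carry "question", "choices", and "answer" fields.
from datasets import load_dataset

# "Relative_Depth" is one illustrative task name; BLINK defines several.
val = load_dataset("BLINK-Benchmark/BLINK", "Relative_Depth", split="val")

for example in val.select(range(3)):
    print(example["question"])  # the multiple-choice question text
    print(example["choices"])   # the candidate answer options
    print(example["answer"])    # the ground-truth option, e.g. "(A)"
```

Scoring would then amount to comparing a model's predicted option letter against `answer`; the repo's own scripts presumably handle per-task prompting and answer extraction.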
Alternatives and similar repositories for BLINK_Benchmark
Users interested in BLINK_Benchmark are comparing it to the repositories listed below.
- [COLM 2025] Official implementation of the Law of Vision Representation in MLLMs ☆168 · Updated 2 weeks ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago
- [ACL 2024 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆75 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆124 · Updated 6 months ago
- Matryoshka Multimodal Models ☆112 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆37 · Updated 6 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆59 · Updated last year
- [NeurIPS 2024 D&B] Official dataloader and evaluation scripts for LongVideoBench ☆110 · Updated last year
- ☆138 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆87 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆154 · Updated last year
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- A comprehensive benchmark and toolkit for evaluating video-based large language models ☆135 · Updated last year
- ☆60 · Updated last month
- ☆155 · Updated 11 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆77 · Updated 10 months ago
- [CVPR 2024 Oral] Official implementation of "Describing Differences in Image Sets with Natural Language" ☆124 · Updated last year
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆32 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning" ☆62 · Updated last year
- Official repo for StableLLAVA ☆94 · Updated last year
- ☆99 · Updated last year
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆217 · Updated this week
- [ECCV 2024] Official implementation of "MyVLM: Personalizing VLMs for User-Specific Queries" ☆179 · Updated last year
- Official repository of Personalized Visual Instruct Tuning ☆32 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" ☆91 · Updated last year
- [TMLR 2024] Official code for the paper "Mantis: Multi-Image Instruction Tuning" ☆230 · Updated 7 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆191 · Updated 4 months ago
- [ICLR 2025] Code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" ☆77 · Updated 3 months ago
- ☆75 · Updated 4 months ago