facebookresearch / unibench
Python library to evaluate the robustness of vision-language models (VLMs) across diverse benchmarks
☆220 · Updated 2 months ago
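For orientation, a minimal usage sketch is shown below. It is only an assumption of how an evaluation harness like this is typically driven from Python; the `Evaluator` name and `evaluate()` call are illustrative placeholders rather than the confirmed unibench API, so consult the repository README for the actual entry points.

```python
# Illustrative sketch only: the names below are assumptions, not the confirmed unibench API.
import unibench as vlm  # assumes the package is installed, e.g. via pip

evaluator = vlm.Evaluator()  # hypothetical driver over the bundled models and benchmarks
evaluator.evaluate()         # hypothetical call that runs the benchmark suite and reports results
```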
Alternatives and similar repositories for unibench
Users interested in unibench are comparing it to the repositories listed below.
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆153 · Updated 3 months ago
- Matryoshka Multimodal Models ☆121 · Updated 11 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆172 · Updated 2 months ago
- Official implementation of the paper "SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training" ☆313 · Updated 8 months ago
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆157 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆212 · Updated 11 months ago
- An open source implementation of CLIP (with TULIP support) ☆164 · Updated 7 months ago
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆129 · Updated last month
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆132 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆237 · Updated 9 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆161 · Updated last year
- Code for the "Scaling Language-Free Visual Representation Learning" paper (Web-SSL) ☆194 · Updated 8 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆359 · Updated 6 months ago
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆116 · Updated last year
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆147 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- Multimodal language model benchmark, featuring challenging examples ☆182 · Updated last year
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆274 · Updated 3 weeks ago
- [ICCV 2025] Auto-interpretation pipeline and many other functionalities for multimodal SAE analysis ☆171 · Updated 3 months ago
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models (TMLR 2025) ☆133 · Updated 3 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆125 · Updated 5 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 6 months ago
- ☆191 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆248 · Updated 11 months ago
- This repository is maintained to release the dataset and models for multimodal puzzle reasoning. ☆113 · Updated 10 months ago
- ☆80 · Updated last year
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆145 · Updated last year
- PyTorch implementation of "Zero-Shot Vision Encoder Grafting via LLM Surrogates" [ICCV'25] ☆51 · Updated 5 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆310 · Updated 7 months ago
- 🔥 [ICLR 2025] Official Benchmark Toolkits for "Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark" ☆37 · Updated last month