DavidMChan / clair
CLAIR: A (surprisingly) simple semantic text metric with large language models.
☆21 · Updated 2 years ago
Alternatives and similar repositories for clair
Users interested in clair are comparing it to the libraries listed below.
- TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering ☆180 · Updated last year
- ☆59 · Updated 2 years ago
- Official repository for the A-OKVQA dataset ☆109 · Updated last year
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 8 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆89 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆159 · Updated 4 months ago
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆42 · Updated 8 months ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated 2 years ago
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation ☆134 · Updated 2 years ago
- NegCLIP. ☆38 · Updated 3 years ago
- How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? ☆13 · Updated 2 years ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆20 · Updated 3 weeks ago
- Official Repository of Personalized Visual Instruct Tuning ☆34 · Updated 11 months ago
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 · Updated 2 years ago
- [NAACL 2024] Vision-language model that reduces hallucinations through self-feedback guided revision. Visualizes attentions on image feat… ☆47 · Updated last year
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆69 · Updated 9 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆60 · Updated last year
- The RichHF-18K dataset contains rich human feedback labels we collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with t… ☆153 · Updated last year
- Code and datasets for "What’s “up” with vision-language models? Investigating their struggle with spatial reasoning". ☆70 · Updated last year
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆42 · Updated last month
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆139 · Updated 2 years ago
- CLIPScore EMNLP code ☆244 · Updated 3 years ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆32 · Updated 2 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆51 · Updated last year
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated 6 months ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆57 · Updated 2 years ago
- Preference Learning for LLaVA ☆59 · Updated last year
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated 2 years ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆84 · Updated 3 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆40 · Updated last year