kaist-ami / BEAF
[ECCV’24] Official repository for "BEAF: Observing BEfore-AFter Changes to Evaluate Hallucination in Vision-language Models"
☆21 Updated 10 months ago
Alternatives and similar repositories for BEAF
Users interested in BEAF are comparing it to the libraries listed below.
- ☆10 Updated last year
- ☆56 Updated 5 months ago
- ☆35 Updated 2 years ago
- ☆40 Updated last year
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 Updated 2 years ago
- ☆16 Updated last year
- [CVPR 2024 Highlight] ImageNet-D ☆46 Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆34 Updated 11 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 Updated last year
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 Updated 2 years ago
- Code and data setup for the paper "Are Diffusion Models Vision-and-language Reasoners?" ☆33 Updated last year
- Official code repo of PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs ☆26 Updated last year
- CycleReward is a reward model trained on cycle consistency preferences to measure image-text alignment. ☆53 Updated 3 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆33 Updated last year
- Code for the paper "If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection" ☆27 Updated 2 years ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 Updated last year
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆42 Updated 8 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆31 Updated last year
- ☆11 Updated last year
- [ECCV 2024] Official repository for "DataDream: Few-shot Guided Dataset Generation" ☆49 Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆102 Updated 10 months ago
- ☆23 Updated 2 years ago
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆45 Updated 2 years ago
- ☆61 Updated 2 years ago
- Implementation of CounterCurate, a data curation pipeline for both physical and semantic counterfactual image-caption pairs. ☆19 Updated last year
- Benchmarking Multi-Image Understanding in Vision and Language Models ☆12 Updated last year
- Bias-to-Text: Debiasing Unknown Visual Biases through Language Interpretation ☆32 Updated 2 years ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 Updated last year
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆22 Updated 2 years ago
- VisualGPTScore for visio-linguistic reasoning ☆27 Updated 2 years ago