Stanford-AIMI / RaVL
[NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models
☆20 · Updated 5 months ago
Alternatives and similar repositories for RaVL:
Users interested in RaVL are comparing it to the repositories listed below:
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆14 · Updated 5 months ago
- Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models ☆11 · Updated 3 weeks ago
- [ECCV'24] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-language Models" ☆19 · Updated last month
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models ☆76 · Updated 7 months ago
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆30 · Updated 7 months ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆54 · Updated 4 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆28 · Updated 2 weeks ago
- Official implementation of Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More ☆17 · Updated 2 months ago
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆20 · Updated last month
- Official Implementation of DiffCLIP: Differential Attention Meets CLIP ☆26 · Updated last month
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆25 · Updated last month
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" ☆25 · Updated 2 months ago
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆44 · Updated last year
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated 10 months ago
- Code for "Are "Hierarchical" Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆20 · Updated last year
- "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆16 · Updated 2 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆52 · Updated last month
- This repository is related to "Intriguing Properties of Hyperbolic Embeddings in Vision-Language Models", published at TMLR (2024), https… ☆19 · Updated 10 months ago
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆30 · Updated 6 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆20 · Updated last week
- Official PyTorch implementation for NeurIPS'24 paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling" ☆19 · Updated 2 months ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆11 · Updated 8 months ago
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆11 · Updated 2 months ago
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆72 · Updated last week
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆66 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆39 · Updated 5 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆54 · Updated 3 months ago
- (untitled repository) ☆17 · Updated 9 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆16 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆32 · Updated last year