brendel-group / clip-ood
Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024)
☆10 · Updated last year
Alternatives and similar repositories for clip-ood
Users interested in clip-ood are comparing it to the repositories listed below:
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally"☆16Updated 9 months ago
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆61 · Updated 2 years ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- ☆35 · Updated last year
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆22 · Updated last year
- ☆25 · Updated 2 years ago
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- Python package to download and use the SSB datasets ☆11 · Updated 2 years ago
- Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ☆45 · Updated 2 years ago
- ImageNetV2 PyTorch Dataset ☆42 · Updated 2 years ago
- Create generated datasets and train robust classifiers ☆36 · Updated 2 years ago
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection ☆21 · Updated last year
- Compress conventional vision-language pre-training data ☆52 · Updated 2 years ago
- Test-Time Distribution Normalization for Contrastively Learned Vision-Language Models ☆27 · Updated last year
- Code release for the paper "Extremely Simple Activation Shaping for Out-of-Distribution Detection" ☆54 · Updated last year
- ☆29 · Updated 3 years ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Updated 2 years ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- Code for "Debiasing Vision-Language Models via Biased Prompts" ☆58 · Updated 2 years ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation…☆21Updated 2 years ago
- Code for the paper "Part-Based Models Improve Adversarial Robustness" (ICLR 2023) ☆23 · Updated 2 years ago
- https://arxiv.org/abs/2209.15162 ☆53 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Updated last year
- ☆35 · Updated 2 years ago
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) ☆13 · Updated last year
- Official codebase for the NeurIPS 2023 paper "Towards Last-layer Retraining for Group Robustness with Fewer Annotations". https://arxiv.or… ☆11 · Updated last year
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 7 months ago