mlfoundations / clip_quality_not_quantity
☆29 · Updated 2 years ago
Alternatives and similar repositories for clip_quality_not_quantity
Users interested in clip_quality_not_quantity are comparing it to the libraries listed below.
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated last year
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated 10 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models ☆27 · Updated last year
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs/…) ☆34 · Updated 2 years ago
- Code for the ACL 2023 oral paper "ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning" ☆11 · Updated 6 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023.☆33Updated last year
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022☆30Updated 2 years ago
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts" ☆57 · Updated 2 years ago
- Command-line tool for downloading and extending the RedCaps dataset ☆48 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆33 · Updated last year
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Updated last year
- ☆33 · Updated last year
- https://arxiv.org/abs/2209.15162 ☆50 · Updated 2 years ago
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆39 · Updated last year
- ☆29 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆51 · Updated last year
- Test-Time Distribution Normalization For Contrastively Learned Vision-language Models ☆28 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- [ACL 2023] Delving into the Openness of CLIP ☆23 · Updated 2 years ago
- [Findings of ACL 2023] Official implementation of "On the Difference of BERT-style and CLIP-style Text Encoders" ☆14 · Updated 2 years ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆37 · Updated 2 months ago
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago