tripletclip / TripletCLIP
[NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives"
☆46 · Updated Dec 1, 2024
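For context, the sketch below illustrates the general idea the title refers to: a CLIP-style contrastive objective extended with synthetic hard-negative captions (the paper's full method also pairs these with synthetic negative images). This is not the repository's actual code; the function name, tensor shapes, and temperature value are illustrative assumptions.

```python
# Minimal illustrative sketch (NOT the official TripletCLIP implementation):
# CLIP-style contrastive loss with one synthetic hard-negative caption per image.
import torch
import torch.nn.functional as F

def contrastive_loss_with_text_negatives(img_emb, txt_emb, neg_txt_emb, temperature=0.07):
    # img_emb, txt_emb, neg_txt_emb: (B, D) L2-normalized embeddings.
    # neg_txt_emb holds a synthetically perturbed (compositionally wrong) caption
    # embedding for each image; appending it to the text candidates forces the
    # image to rank its true caption above the hard negative.
    candidates = torch.cat([txt_emb, neg_txt_emb], dim=0)      # (2B, D)
    logits_i2t = img_emb @ candidates.t() / temperature        # (B, 2B)
    logits_t2i = txt_emb @ img_emb.t() / temperature           # (B, B)
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits_i2t, targets) + F.cross_entropy(logits_t2i, targets))

# Toy usage with random (hypothetical) embeddings:
B, D = 8, 512
loss = contrastive_loss_with_text_negatives(
    F.normalize(torch.randn(B, D), dim=-1),
    F.normalize(torch.randn(B, D), dim=-1),
    F.normalize(torch.randn(B, D), dim=-1),
)
```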
Alternatives and similar repositories for TripletCLIP
Users who are interested in TripletCLIP are comparing it to the repositories listed below.
- ☆29 · Updated Oct 18, 2022
- ☆56 · Updated Aug 16, 2025
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" · ☆19 · Updated Feb 14, 2025
- CLIP-MoE: Mixture of Experts for CLIP · ☆55 · Updated Oct 10, 2024
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning · ☆12 · Updated Aug 23, 2025
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data · ☆14 · Updated Sep 30, 2023
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! · ☆11 · Updated May 24, 2023
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights · ☆28 · Updated Oct 28, 2024
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality · ☆21 · Updated Oct 8, 2024
- ☆20 · Updated Apr 23, 2024
- ☆18 · Updated Sep 23, 2024
- An Enhanced CLIP Framework for Learning with Synthetic Captions · ☆39 · Updated Apr 18, 2025
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions · ☆17 · Updated Apr 4, 2024
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] · ☆61 · Updated Dec 10, 2024
- This repository is related to 'Intriguing Properties of Hyperbolic Embeddings in Vision-Language Models', published at TMLR (2024), https… · ☆22 · Updated Jul 5, 2024
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions · ☆138 · Updated May 8, 2025
- Stochastic Optimization for Global Contrastive Learning without Large Mini-batches · ☆20 · Updated Mar 31, 2023
- Hyperbolic Safety-Aware Vision-Language Models. CVPR 2025 · ☆31 · Updated Apr 8, 2025
- Source code related to the research paper entitled RVENet: A Large Echocardiographic Dataset for the Deep Learning-Based Assessment of Ri… · ☆12 · Updated Mar 10, 2024
- Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models. ECCV 2024 · ☆67 · Updated Aug 10, 2024
- [CVPR 2024 Highlight] ImageNet-D · ☆46 · Updated Oct 15, 2024
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) · ☆56 · Updated Mar 1, 2025
- Official repository for Fourier model that can generate periodic signals · ☆10 · Updated Mar 10, 2022
- ☆10 · Updated Jul 5, 2024
- A Novel Semantic Segmentation Network using Enhanced Boundaries in Cluttered Scenes (WACV 2025) · ☆11 · Updated Aug 11, 2025
- ☆11 · Updated Oct 20, 2023
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) · ☆10 · Updated Aug 26, 2024
- An official PyTorch implementation for CLIPPR · ☆30 · Updated Jul 22, 2023
- [ECCV2024][ICCV2023] Official PyTorch implementation of SeiT++ and SeiT · ☆56 · Updated Aug 12, 2024
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" · ☆50 · Updated Jun 16, 2025
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs · ☆28 · Updated Aug 15, 2025
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers · ☆26 · Updated Apr 12, 2022
- [CVPR 2025] CoLLM: A Large Language Model for Composed Image Retrieval · ☆28 · Updated Mar 26, 2025
- Code for T-MARS data filtering · ☆35 · Updated Aug 23, 2023
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? · ☆15 · Updated Jun 3, 2025
- Code for the paper "Understanding and Evaluating Racial Biases in Image Captioning" · ☆12 · Updated Oct 19, 2021
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language" · ☆18 · Updated Sep 17, 2021
- ☆11 · Updated Oct 2, 2024
- ☆13 · Updated Jul 2, 2025