raytrun / mamba-clip
CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation
☆79 · Aug 15, 2024 · Updated last year
Alternatives and similar repositories for mamba-clip
Users interested in mamba-clip are comparing it to the repositories listed below.
- ☆11 · May 17, 2024 · Updated last year
- Project for SNARE benchmark ☆11 · Jun 5, 2024 · Updated last year
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Sep 30, 2023 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Nov 23, 2024 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆37 · Aug 18, 2024 · Updated last year
- This repository houses the code for the paper "The Neglected Tails of VLMs" ☆30 · Dec 31, 2025 · Updated last month
- ☆14 · Dec 31, 2024 · Updated last year
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · May 24, 2023 · Updated 2 years ago
- PyTorch implementation of TSE attention ☆16 · Jul 9, 2021 · Updated 4 years ago
- Repository for the paper "Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models" ☆27 · Nov 29, 2023 · Updated 2 years ago
- This is the repository for "SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Recognition" ☆16 · Oct 8, 2024 · Updated last year
- Code release for "Improved baselines for vision-language pre-training" ☆62 · May 6, 2024 · Updated last year
- ☆15 · Dec 12, 2023 · Updated 2 years ago
- Code and dataset for the AAAI 2024 paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" ☆33 · Jan 3, 2024 · Updated 2 years ago
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining (WACV 2024) ☆14 · Jan 3, 2024 · Updated 2 years ago
- ☆17 · Dec 13, 2023 · Updated 2 years ago
- Official code repository for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Oct 11, 2025 · Updated 4 months ago
- Implementation of CounterCurate, the data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆19 · Jun 27, 2024 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Dec 1, 2024 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆106 · Jan 9, 2024 · Updated 2 years ago
- ☆45 · Oct 5, 2025 · Updated 4 months ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Apr 4, 2024 · Updated last year
- Code for "CLIP Behaves like a Bag-of-Words Model Cross-modally but not Uni-modally" ☆19 · Feb 14, 2025 · Updated 11 months ago
- ☆16 · Jan 3, 2023 · Updated 3 years ago
- [TCSVT 2025] CFMW: Cross-modality Fusion Mamba for Robust Object Detection under Adverse Weather ☆86 · Aug 12, 2025 · Updated 6 months ago
- Knowledge Distillation using Contrastive Language-Image Pretraining (CLIP) without a teacher model ☆18 · Sep 6, 2024 · Updated last year
- ☆27 · Feb 27, 2025 · Updated 11 months ago
- ☆25 · Apr 16, 2025 · Updated 9 months ago
- An Examination of the Compositionality of Large Generative Vision-Language Models ☆19 · Apr 9, 2024 · Updated last year
- PyTorch code for "Improving Commonsense in Vision-Language Models via Knowledge Graph Riddles" (DANCE) ☆23 · Nov 29, 2022 · Updated 3 years ago
- S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions ☆50 · May 26, 2023 · Updated 2 years ago
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆19 · Oct 4, 2022 · Updated 3 years ago
- ☆23 · Aug 26, 2023 · Updated 2 years ago
- VMamba: Visual State Space Models; code is based on Mamba ☆3,041 · Mar 7, 2025 · Updated 11 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Mar 23, 2024 · Updated last year
- ☆23 · Jul 8, 2023 · Updated 2 years ago
- Repository for the PopulAtion Parameter Averaging (PAPA) paper ☆30 · Apr 11, 2024 · Updated last year
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Jan 11, 2024 · Updated 2 years ago
- Official implementation of "Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation" (ICLR 2024) ☆26 · May 14, 2024 · Updated last year