PalAvik / hycoclip
Code for the paper "Compositional Entailment Learning for Hyperbolic Vision-Language Models".
☆75 · Updated last month
Alternatives and similar repositories for hycoclip
Users interested in hycoclip are comparing it to the repositories listed below.
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆83 · Updated 11 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆28 · Updated last year
- Official PyTorch repository for GRAM. ☆80 · Updated 2 months ago
- Diffusion-TTA improves pre-trained discriminative models, such as image classifiers or segmentors, using pre-trained generative models. ☆74 · Updated last year
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations. ☆86 · Updated 3 weeks ago
- Official implementation of the CVPR 2024 paper "Prompt Learning via Meta-Regularization". ☆27 · Updated 4 months ago
- [ICCV 2025] Token Activation Map to Visually Explain Multimodal LLMs. ☆40 · Updated 2 weeks ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models. ☆68 · Updated 2 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization. ☆64 · Updated 3 weeks ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter-Efficient Continual Learning of Vision-Language Models. ☆50 · Updated last year
- Official PyTorch implementation of the CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆73 · Updated 2 months ago
- ☆20 · Updated 7 months ago
- CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts. ☆51 · Updated 10 months ago
- Easy wrapper for inserting LoRA layers in CLIP. ☆34 · Updated last year
- cliptrase. ☆38 · Updated 10 months ago
- Open-source implementation of "Vision Transformers Need Registers". ☆184 · Updated 3 months ago
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion. ☆49 · Updated 2 months ago
- [AAAI 2025, CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆108 · Updated 7 months ago
- [ICLR 2025] Official implementation of "Autoregressive Pretraining with Mamba in Vision". ☆82 · Updated last month
- ☆15 · Updated 2 months ago
- ☆44 · Updated 2 months ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024). ☆73 · Updated 5 months ago
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models"