peterant330 / KUEA
[ICML'25] Kernel-based Unsupervised Embedding Alignment for Enhanced Visual Representation in Vision-language Models
☆18 · Updated 2 months ago
Alternatives and similar repositories for KUEA
Users interested in KUEA are comparing it to the repositories listed below.
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆54 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- ☆22 · Updated 6 months ago
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆28 · Updated last week
- Official Implementation of CODE ☆15 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated 11 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆74 · Updated 6 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆59 · Updated last year
- PyTorch implementation for the CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆56 · Updated 3 months ago
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆54 · Updated 2 years ago
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆25 · Updated 3 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆50 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆85 · Updated last year
- [CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification ☆41 · Updated 8 months ago
- Rui Qian, Xin Yin, Dejing Dou†: Reasoning to Attend: Try to Understand How <SEG> Token Works (CVPR 2025) ☆48 · Updated last month
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆36 · Updated 8 months ago
- ☆22 · Updated last year
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ☆95 · Updated last month
- [CVPR 2025 Highlight] Interpreting Object-level Foundation Models via Visual Precision Search ☆51 · Updated this week
- Official Implementation of DiffCLIP: Differential Attention Meets CLIP ☆47 · Updated 8 months ago
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆117 · Updated 2 months ago
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) ☆51 · Updated 8 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆37 · Updated last year
- Code of LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration of MLLM Agents ☆22 · Updated this week
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆90 · Updated 7 months ago
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" ☆32 · Updated 4 months ago
- [ECCV 2024] Soft Prompt Generation for Domain Generalization ☆28 · Updated last year
- Official Implementation of the ECCV 2024 Paper: "CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts" ☆53 · Updated last month
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆51 · Updated 3 months ago