Westlake-AI / SemiReward
[ICLR 2024] SemiReward: A General Reward Model for Semi-supervised Learning
☆65 · Updated 11 months ago
Alternatives and similar repositories for SemiReward:
Users interested in SemiReward are comparing it to the libraries listed below.
- [TPAMI 2024] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition ☆79 · Updated 7 months ago
- ☆35 · Updated last year
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling ☆93 · Updated 3 weeks ago
- Official PyTorch implementation for "Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels" ☆92 · Updated last year
- ☆44 · Updated last year
- The official implementation of the paper "Inter-Instance Similarity Modeling for Contrastive Learning" ☆114 · Updated 6 months ago
- ☆86 · Updated 2 years ago
- The efficient tuning method for VLMs ☆81 · Updated last year
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024 ☆51 · Updated 6 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆78 · Updated 9 months ago
- ☆39 · Updated 2 weeks ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆86 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆81 · Updated last year
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆49 · Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆52 · Updated 6 months ago
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆34 · Updated 3 weeks ago
- [NeurIPS 2023] Generalized Logit Adjustment ☆37 · Updated last year
- This repository is a collection of awesome things about vision prompts, including papers, code, etc. ☆34 · Updated last year
- Code Release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆65 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆59 · Updated this week
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" [AAAI 2025] ☆37 · Updated last month
- Code and Dataset for the paper "LAMM: Label Alignment for Multi-Modal Prompt Learning", AAAI 2024 ☆32 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆46 · Updated 10 months ago
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆112 · Updated 10 months ago
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias" ☆35 · Updated 4 months ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated last year
- 🔥MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023] ☆30 · Updated 6 months ago
- This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆107 · Updated last year
- This repository is the official implementation of our Autoregressive Pretraining with Mamba in Vision ☆77 · Updated 10 months ago
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆49 · Updated last year