minglllli / CLS-RL
Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning
☆18 · Updated 2 weeks ago
Alternatives and similar repositories for CLS-RL:
Users interested in CLS-RL are comparing it to the repositories listed below.
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆78 · Updated 6 months ago
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆39 · Updated 4 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆81 · Updated last year
- 🔎 Official code for the paper "VL-Uncertainty: Detecting Hallucination in Large Vision-Language Model via Uncertainty Estimation" ☆32 · Updated last month
- MMICL: a state-of-the-art VLM from PKU with in-context learning ability ☆46 · Updated last year
- ☆93 · Updated last year
- cliptrase ☆35 · Updated 7 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆160 · Updated last year
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) ☆34 · Updated last month
- [NeurIPS 2023] Generalized Logit Adjustment ☆35 · Updated last year
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆27 · Updated 11 months ago
- Exploring prompt tuning with pseudolabels for multiple modalities, learning settings, and training strategies. ☆49 · Updated 5 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆48 · Updated 2 weeks ago
- ✨ A curated list of papers on uncertainty in multi-modal large language models (MLLMs). ☆43 · Updated 2 weeks ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆19 · Updated 2 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆161 · Updated 3 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆46 · Updated 9 months ago
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV 2023] ☆98 · Updated last year
- ☆16 · Updated 5 months ago
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" (ICCV 2023) ☆53 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆69 · Updated 2 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- [CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ☆48 · Updated 2 weeks ago
- Instruction Tuning in the Continual Learning paradigm ☆47 · Updated 2 months ago
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆65 · Updated last year
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆76 · Updated 8 months ago
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models ☆91 · Updated last year
- ☆34 · Updated 9 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding ☆16 · Updated 4 months ago
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆41 · Updated last year