chengtan9907 / mc-cot
The official implementation of the ECCV'24 paper MC-CoT: Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training.
☆26 · Updated last year
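For context on the technique named in the description: self-consistency generally means sampling several chain-of-thought rationales and keeping the answer they most often agree on. The sketch below illustrates only that generic voting idea, not MC-CoT's training procedure; `generate_cot` is a hypothetical stand-in for whatever stochastic sampling call your model exposes.

```python
from collections import Counter

def self_consistent_answer(generate_cot, question, image, n_samples=8):
    """Generic self-consistency voting (a minimal sketch, not MC-CoT's training loop).

    `generate_cot(question, image)` is assumed to return a (rationale, answer)
    pair sampled stochastically, e.g. with temperature > 0.
    """
    answers = [generate_cot(question, image)[1] for _ in range(n_samples)]
    # Keep the answer that the sampled reasoning chains agree on most often.
    return Counter(answers).most_common(1)[0][0]
```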
Alternatives and similar repositories for mc-cot
Users interested in mc-cot are comparing it to the repositories listed below.
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆24 · Updated last year
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆50 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆49 · Updated last year
- The efficient tuning method for VLMs ☆80 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆55 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆62 · Updated last year
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆89 · Updated 2 years ago
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated 3 years ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆86 · Updated last year
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 11 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆60 · Updated last year
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 7 months ago
- Code for Retrieval-Augmented Perception (ICML 2025) ☆68 · Updated 6 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 7 months ago
- Visual self-questioning for large vision-language assistant. ☆45 · Updated 6 months ago
- A curated list of zero-shot captioning papers ☆24 · Updated 2 years ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinatio… ☆25 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [IJCV 2025] https://arxiv.org/abs/2304.04521 ☆15 · Updated last year
- [CVPR 2025] VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding ☆24 · Updated 10 months ago
- [CVPR 2025 Highlight] Official Pytorch codebase for paper: "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆56 · Updated 5 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆52 · Updated 7 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆45 · Updated 7 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆52 · Updated last year
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆44 · Updated 2 years ago
- Official code for the paper "Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models" (ICML 2024) ☆26 · Updated last year
- Official implementation of the CVPR 2024 paper "Prompt Learning via Meta-Regularization" ☆32 · Updated 11 months ago
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training ☆26 · Updated 2 years ago
- [ECCV 2024] Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models ☆20 · Updated last year