OpenSparseLLMs / CLIP-MoE
CLIP-MoE: Mixture of Experts for CLIP
☆23 · Updated 3 months ago
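The repository's tagline names a standard technique, sparse mixture-of-experts (MoE) routing. As a rough orientation only, the sketch below shows a generic top-k gated MoE layer in PyTorch; the class name `TopKMoE`, the expert shape, and all hyperparameters are assumptions for illustration, not CLIP-MoE's actual code.

```python
# Illustrative sketch of a top-k gated mixture-of-experts layer (NOT the
# CLIP-MoE implementation; names and sizes here are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Routes each token embedding to its top-k experts and mixes their outputs."""

    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # One feed-forward expert per slot (hypothetical expert shape).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)  # router producing expert logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        logits = self.gate(x)                            # (B, T, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)             # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: drop-in replacement for a transformer block's MLP.
moe = TopKMoE(dim=512)
tokens = torch.randn(2, 16, 512)
print(moe(tokens).shape)  # torch.Size([2, 16, 512])
```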
Alternatives and similar repositories for CLIP-MoE:
Users interested in CLIP-MoE are comparing it to the repositories listed below.
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆33 · Updated last month
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆62 · Updated 7 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 10 months ago
- ☆27 · Updated 6 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆33 · Updated 2 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆17 · Updated last month
- Code for Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning ☆16 · Updated 6 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆43 · Updated 6 months ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 4 months ago
- Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal … ☆38 · Updated 2 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆59 · Updated 2 months ago
- The official repository for the paper "Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark" ☆34 · Updated this week
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆23 · Updated 2 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆19 · Updated 4 months ago
- [arXiv 2024] AGLA: Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆21 · Updated 6 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆41 · Updated 3 months ago
- DEEM: Official implementation of "Diffusion models serve as the eyes of large language models for image perception" (ICLR 2025) ☆18 · Updated last month
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆32 · Updated last year
- PyTorch Implementation of "Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Larg… ☆19 · Updated last month
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆34 · Updated last month
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆45 · Updated 8 months ago
- [MM 2024, Oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆48 · Updated 6 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆62 · Updated 8 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆44 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆44 · Updated last month
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆18 · Updated last year
- ☆32 · Updated 2 weeks ago
- Code release for VTW (AAAI 2025, Oral) ☆30 · Updated last week
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆42 · Updated 6 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆95 · Updated 2 months ago