arctanxarc / MC-LLaVA
Official implementation of MC-LLaVA.
☆26 · Updated 3 months ago
Alternatives and similar repositories for MC-LLaVA
Users interested in MC-LLaVA are comparing it to the repositories listed below.
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 · Updated 9 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆34 · Updated last month
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆28 · Updated 4 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆32 · Updated 7 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆20 · Updated 2 weeks ago
- Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆26 · Updated last month
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆36 · Updated 2 months ago
- ☆83 · Updated last month
- Official implementation of MIA-DPO ☆57 · Updated 3 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆28 · Updated 2 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆73 · Updated 11 months ago
- ☆44 · Updated last week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆58 · Updated 2 weeks ago
- ☆79 · Updated last month
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 7 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆39 · Updated 5 months ago
- ☆40 · Updated 4 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆49 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 2 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆38 · Updated 2 months ago
- [ACL'24 Oral] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 8 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆53 · Updated this week
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Updated last year
- ☆75 · Updated 4 months ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆54 · Updated 3 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆86 · Updated 9 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆25 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆55 · Updated 9 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆32 · Updated 9 months ago
- ☆35 · Updated 10 months ago