shufangxun / LLaVA-MoD
[ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation
☆192 · Updated 4 months ago
Alternatives and similar repositories for LLaVA-MoD
Users interested in LLaVA-MoD are comparing it to the repositories listed below.
- Pruning the VLLMs ☆98 · Updated 8 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆175 · Updated 3 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆119 · Updated 5 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆143 · Updated last month
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆247 · Updated 4 months ago
- ☆54 · Updated 3 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆130 · Updated 5 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆92 · Updated last month
- Official code for the paper '[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster' ☆85 · Updated last month
- ☆101 · Updated 5 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆62 · Updated 3 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆174 · Updated 10 months ago
- [ECCV 2024 Oral] Code for the paper 'An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models'