THU-MIG / VTC-CLS
Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs"
☆22 · Updated 2 months ago
Alternatives and similar repositories for VTC-CLS
Users that are interested in VTC-CLS are comparing it to the libraries listed below
- Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆23 · Updated 2 weeks ago
- Official implementation of MC-LLaVA. ☆28 · Updated 3 weeks ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆35 · Updated 11 months ago
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆15 · Updated last month
- ☆21 · Updated 3 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆39 · Updated 2 months ago
- ☆42 · Updated 7 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆36 · Updated 4 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆30 · Updated 5 months ago
- Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆29 · Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆54 · Updated 7 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆47 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆73 · Updated 2 months ago
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆60 · Updated last year
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆41 · Updated 2 weeks ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆49 · Updated last month
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆30 · Updated 4 months ago
- ☆86 · Updated 3 months ago
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models". ☆46 · Updated last month
- Official repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- [CVPR 2025] Hyperbolic Safety-Aware Vision-Language Models ☆17 · Updated 2 months ago
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆19 · Updated this week
- iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models ☆19 · Updated 4 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆109 · Updated 3 months ago
- ☆49 · Updated last month
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆91 · Updated 8 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 3 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 6 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆23 · Updated last week
- LEO: A Powerful Hybrid Multimodal LLM ☆18 · Updated 5 months ago