xuyang-liu16 / VidCom2
🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models
☆29 · Updated 2 months ago
Alternatives and similar repositories for VidCom2
Users interested in VidCom2 are comparing it to the repositories listed below.
- [ICCV 2025] p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay · ☆42 · Updated 2 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction · ☆119 · Updated 5 months ago
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" · ☆53 · Updated 3 months ago
- ☆105 · Updated 5 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding · ☆97 · Updated this week
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better · ☆36 · Updated 2 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models · ☆72 · Updated 2 months ago
- [ICCV 2025] Official code for paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs · ☆25 · Updated last month
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster · ☆85 · Updated 2 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models · ☆38 · Updated 6 months ago
- ☆54 · Updated 3 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models · ☆46 · Updated 2 months ago
- 🚀 Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models · ☆31 · Updated last month
- Official implementation of the paper ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding · ☆36 · Updated 5 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) · ☆69 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection · ☆111 · Updated last month
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training · ☆80 · Updated last month
- Official implementation of MIA-DPO · ☆64 · Updated 7 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" · ☆38 · Updated 2 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) · ☆30 · Updated 3 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning · ☆96 · Updated 3 months ago
- Video-Holmes: Can MLLM Think Like Holmes for Complex Video Reasoning? · ☆68 · Updated last month
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding · ☆62 · Updated last month
- LEO: A powerful Hybrid Multimodal LLM · ☆18 · Updated 7 months ago
- [ICML 2025] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" · ☆148 · Updated 2 months ago
- ☆27 · Updated 4 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning · ☆105 · Updated 4 months ago
- Survey: https://arxiv.org/pdf/2507.20198 · ☆107 · Updated last week
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models · ☆45 · Updated 6 months ago
- ☆87 · Updated 2 months ago