MCG-NJU / p-MoD
p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay
☆35 · Updated 4 months ago
Alternatives and similar repositories for p-MoD
Users interested in p-MoD are comparing it to the repositories listed below.
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆64 · Updated last month
- The official repository for the ACL 2025 paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆46 · Updated 2 weeks ago
- Evolving Temporal Reasoning Capability into LMMs via Temporal Consistent Reward ☆35 · Updated 2 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 2 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆80 · Updated 2 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆82 · Updated last month
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆27 · Updated 3 weeks ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 2 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" (ICLR 2025) ☆52 · Updated 2 months ago
- ☆84 · Updated 2 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆94 · Updated 6 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆72 · Updated last week
- ☆46 · Updated 3 weeks ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆33 · Updated last month
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ☆43 · Updated 3 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆48 · Updated this week
- Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆25 · Updated 2 weeks ago
- [CVPR 2025] The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆163 · Updated last week
- Official PyTorch code of ReKV (ICLR 2025) ☆23 · Updated 2 months ago
- Collections of Papers and Projects for Multimodal Reasoning ☆105 · Updated last month
- ☆25 · Updated last month
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆65 · Updated 9 months ago
- R1-like Video-LLM for Temporal Grounding ☆92 · Updated last week
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆112 · Updated 3 months ago
- Official PyTorch code of GroundVQA (CVPR 2024) ☆61 · Updated 8 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆111 · Updated 2 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆45 · Updated 2 months ago
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆32 · Updated last week
- [NeurIPS 2024 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆96 · Updated 10 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆60 · Updated 10 months ago