TungChintao / FlowCut
[NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models"
☆25 · Updated this week
Alternatives and similar repositories for FlowCut
Users interested in FlowCut are comparing it to the repositories listed below.
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆40 · Updated last week
- ☆132 · Updated 8 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆77 · Updated 9 months ago
- cliptrase ☆47 · Updated last year
- Code for LVAgent: Long Video Understanding by Multi-Round Dynamical Collaboration of MLLM Agents ☆21 · Updated 2 weeks ago
- ☆36 · Updated 4 months ago
- [LLaVA-Video-R1] ✨ First Adaptation of R1 to LLaVA-Video (2025-03-18) ☆35 · Updated 7 months ago
- The official repository of the paper "Reinforcing Video Reasoning with Focused Thinking" ☆31 · Updated 6 months ago
- [NeurIPS 2024 Oral] RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation ☆18 · Updated 11 months ago
- [CVPR 2025] Official PyTorch Implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆64 · Updated 5 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆145 · Updated 3 months ago
- [ICCV 2025] Official code for the paper "Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs" ☆52 · Updated 5 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆129 · Updated 8 months ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads for Visual Grounding ☆51 · Updated 3 months ago
- The official implementation of "PixelThink: Towards Efficient Chain-of-Pixel Reasoning" (arXiv 2025) ☆38 · Updated 6 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆91 · Updated 7 months ago
- Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing ☆82 · Updated 4 months ago
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆143 · Updated 4 months ago
- The official code of "Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning" ☆68 · Updated last month
- [AAAI 2026 Oral] LENS: Learning to Segment Anything with Unified Reinforced Reasoning ☆78 · Updated last week
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆79 · Updated 3 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆129 · Updated 4 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆201 · Updated 4 months ago
- ☆30 · Updated last year
- Code for the paper "CoReS: Orchestrating the Dance of Reasoning and Segmentation" ☆20 · Updated 2 weeks ago
- The official code for the paper "LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs" ☆114 · Updated 5 months ago
- Universal Video Temporal Grounding with Generative Multi-modal Large Language Models ☆39 · Updated 2 weeks ago
- Official code repository of Shuffle-R1 ☆25 · Updated 3 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆88 · Updated last year
- [ICCV 2025] Official implementation of "InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models" ☆51 · Updated 10 months ago