BIT-DA / ABS
[ICML 2025] Official code of "From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection"
☆22 · Updated 2 months ago
Alternatives and similar repositories for ABS
Users interested in ABS are comparing it to the repositories listed below.
- [ICML 2025] This is the official PyTorch implementation of "🎵 HarmoniCa: Harmonizing Training and Inference for Better Feature Caching i… ☆42 · Updated 2 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆48 · Updated 7 months ago
- This repository contains the code for our ICML 2025 paper "LENSLLM: Unveiling Fine-Tuning Dynamics for LLM Selection" 🎉 ☆24 · Updated 3 months ago
- 🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆33 · Updated last week
- Official implementation for the paper "Towards Understanding How Knowledge Evolves in Large Vision-Language Models" ☆18 · Updated 5 months ago
- [CVPR 2025] Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation ☆60 · Updated last week
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆63 · Updated 2 months ago
- [CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification ☆34 · Updated 5 months ago
- [CVPR 2025] BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding ☆31 · Updated 5 months ago
- VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆52 · Updated 2 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆63 · Updated 4 months ago
- [ICML 2025] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" ☆157 · Updated 3 months ago
- ☆55 · Updated 4 months ago
- Official repository of the paper "A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models" ☆64 · Updated last week
- ☆22 · Updated 3 weeks ago
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆78 · Updated last month
- Code for the paper "Reinforced Vision Perception with Tools" ☆42 · Updated last week
- ☆26 · Updated 6 months ago
- [ICCV 2025] Official code for the paper "Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs" ☆30 · Updated 2 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆45 · Updated 3 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆34 · Updated 8 months ago
- The official code for the paper "LLaVA-Scissor: Token Compression with Semantic Connected Components for Video LLMs" ☆106 · Updated 2 months ago
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆36 · Updated 2 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" ☆23 · Updated this week
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆22 · Updated 6 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 3 months ago
- [ACL 2025 Main] Official implementation of HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of Multimodal Large Languag… ☆26 · Updated last week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆103 · Updated 3 weeks ago
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆34 · Updated 3 weeks ago