[EMNLP 2025 Main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More"
⭐115 · Oct 12, 2025 · Updated 5 months ago
Alternatives and similar repositories for DART
Users interested in DART are comparing it to the repositories listed below.
- (NeurIPS 2025 🔥) Official implementation for "Efficient Multi-modal Large Language Models via Progressive Consistency Distillation" ⭐48 · Feb 11, 2026 · Updated 2 months ago
- 📚 Collection of token-level model compression resources. ⭐193 · Sep 3, 2025 · Updated 7 months ago
- Official PyTorch code for ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" ⭐23 · Mar 4, 2025 · Updated last year
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ⭐143 · Mar 6, 2025 · Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ⭐45 · Apr 18, 2025 · Updated 11 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ⭐36 · Jan 8, 2025 · Updated last year
- [ICCV25 Highlight] The official implementation of the paper "LEGION: Learning to Ground and Explain for Synthetic Image Detection" ⭐76 · Oct 22, 2025 · Updated 5 months ago
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ⭐257 · Dec 22, 2025 · Updated 3 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ⭐566 · Jan 4, 2025 · Updated last year
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ⭐82 · Jan 17, 2026 · Updated 2 months ago
- A Framework for Collaboration of Experts from Benchmark ⭐13 · Apr 27, 2025 · Updated 11 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ⭐57 · Oct 9, 2025 · Updated 6 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ⭐165 · Mar 8, 2026 · Updated last month
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ⭐110 · Jun 29, 2025 · Updated 9 months ago
- Code release for VTW (AAAI 2025 Oral) ⭐66 · Nov 4, 2025 · Updated 5 months ago
- 📚 Awesome papers on token redundancy reduction ⭐11 · Mar 12, 2025 · Updated last year
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ⭐120 · Dec 12, 2025 · Updated 3 months ago
- ⭐13 · May 15, 2025 · Updated 10 months ago
- Towards Efficient Multimodal Large Language Models: A Survey on Token Compression ⭐151 · Updated this week
- A paper list of recent works on token compression for ViT and VLM ⭐875 · Apr 3, 2026 · Updated last week
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ⭐58 · Jan 30, 2026 · Updated 2 months ago
- A paper list about token merging, reduction, resampling, and dropping for MLLMs ⭐86 · Oct 26, 2025 · Updated 5 months ago
- [NeurIPS 2025] Official code for paper: Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs ⭐95 · Sep 20, 2025 · Updated 6 months ago
- ⭐14 · Apr 25, 2025 · Updated 11 months ago
- [NeurIPS 2025 🔥] FakeVLM: Advancing Synthetic Image Detection through Explainable Multimodal Models and Fine-Grained Artifact Analysis ⭐128 · Sep 24, 2025 · Updated 6 months ago
- The official PyTorch implementation of "Exploring the Interactive Guidance for Unified and Effective Image Matting" [TOMM 2025] ⭐25 · Nov 24, 2025 · Updated 4 months ago
- [CVPR 2026] Accelerating Streaming Video Large Language Models via Hierarchical Token Compression ⭐54 · Feb 25, 2026 · Updated last month
- [ICLR'25] Streaming Video Question-Answering with In-context Video KV-Cache Retrieval ⭐114 · Nov 4, 2025 · Updated 5 months ago
- Idea2Img: Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation, ECCV 2024 ⭐22 · Feb 15, 2024 · Updated 2 years ago
- Undergraduate course code from the University of Electronic Science and Technology of China. ⭐16 · Dec 31, 2023 · Updated 2 years ago
- ⭐10 · Oct 20, 2023 · Updated 2 years ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ⭐115 · Dec 24, 2025 · Updated 3 months ago
- ⭐16 · Mar 24, 2025 · Updated last year
- The repo for the paper "Multi-Agent Collaborative Data Selection for Efficient LLM Pretraining" ⭐47 · Aug 22, 2025 · Updated 7 months ago
- Socratic-Zero is a fully autonomous framework that generates high-quality training data for mathematical reasoning ⭐36 · Oct 26, 2025 · Updated 5 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ⭐71 · Mar 31, 2026 · Updated last week
- [CVPR 2025] Code release of "Patch Matters: Training-free Fine-grained Image Caption Enhancement via Local Perception" ⭐23 · Jun 17, 2025 · Updated 9 months ago
- [ACL-26 Findings] Implementation of HiPrune, a training-free visual token pruning method for VLM acceleration ⭐52 · Updated this week
- Code for the paper "Batch-ICL: Effective, Efficient, and Order-Agnostic In-Context Learning" ⭐18 · Apr 19, 2024 · Updated last year