SUSTechBruce / LOOK-M
[EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference"
☆98 · Updated 8 months ago
Alternatives and similar repositories for LOOK-M
Users interested in LOOK-M are comparing it to the libraries listed below.
- Code release for VTW (AAAI 2025 Oral) ☆47 · Updated 2 weeks ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆48 · Updated 8 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆142 · Updated last month
- ☆54 · Updated 3 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆69 · Updated 6 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆86 · Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆81 · Updated 5 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆83 · Updated 2 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆117 · Updated 5 months ago
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆64 · Updated 3 months ago
- Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ☆53 · Updated last week
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆121 · Updated 3 weeks ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆36 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 6 months ago
- An RLHF Infrastructure for Vision-Language Models ☆179 · Updated 8 months ago
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆110 · Updated last month
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆54 · Updated 7 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆129 · Updated 3 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆124 · Updated last week
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆40 · Updated last year
- Survey: https://arxiv.org/pdf/2507.20198 ☆69 · Updated this week
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster." ☆84 · Updated last month
- A Collection of Papers on Diffusion Language Models ☆97 · Updated last month
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆40 · Updated 3 months ago
- This is the official implementation of our paper "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens… ☆73 · Updated 3 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆247 · Updated 2 weeks ago
- ☆103 · Updated 3 weeks ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 2 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆64 · Updated last month
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆135 · Updated 2 months ago