Code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference" by Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Chen Chen, Lei Chen, Xianzhi Yu, Wulong Liu, Jianye Hao, Mingxuan Yuan, and Bin Li.
☆28 · Jul 15, 2025 · Updated 9 months ago
Alternatives and similar repositories for LLM-AttentionPredictor
Users who are interested in LLM-AttentionPredictor are comparing it to the libraries listed below.
- This is the source code of the ICML 2025 paper "Accelerating Large Language Model Reasoning via Speculative Search" ☆23 · Jun 1, 2025 · Updated 11 months ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Mar 16, 2025 · Updated last year
- Cross-Self KV Cache Pruning for Efficient Vision-Language Inference ☆10 · Dec 15, 2024 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models ☆15 · Jul 18, 2024 · Updated last year
- ☆36 · Feb 12, 2025 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 8 months ago
- ☆20 · Aug 14, 2025 · Updated 8 months ago
- [ACL 2026 Main] Analytical FFN-to-MoE Restructuring via Activation Pattern Analysis ☆38 · Apr 24, 2026 · Updated last week
- The code of the paper "Learning Cut Selection for Mixed-Integer Linear Programming via Hierarchical Sequence Model". Zhihai Wang, Xijun Li,… ☆64 · May 12, 2023 · Updated 2 years ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆47 · Jun 4, 2024 · Updated last year
- ☆21 · Apr 3, 2025 · Updated last year
- 🎓 Automatically updated collection of LLM inference systems papers, refreshed every 12 hours via GitHub Actions ☆12 · Updated this week
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆314 · Dec 5, 2025 · Updated 5 months ago
- ☆84 · Nov 10, 2025 · Updated 5 months ago
- The official implementation of Ada-KV [NeurIPS 2025] ☆132 · Nov 26, 2025 · Updated 5 months ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Jan 24, 2025 · Updated last year
- Code used for analysis and visualization of ocean model data during my postdoc ☆12 · Mar 1, 2023 · Updated 3 years ago
- Implementation of "DIME-FM: DIstilling Multimodal and Efficient Foundation Models" ☆15 · Oct 12, 2023 · Updated 2 years ago
- My implementation of "Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated" ☆35 · Aug 14, 2024 · Updated last year
- Official implementation of "Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs" (ICLR 2024) ☆44 · Aug 6, 2024 · Updated last year
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆29 · Dec 14, 2025 · Updated 4 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆40 · Sep 30, 2025 · Updated 7 months ago
- Code for the EMNLP 2024 paper "A Simple and Effective L2 Norm-Based Method for KV Cache Compression" ☆18 · Dec 13, 2024 · Updated last year
- PyTorch implementation of our ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Jun 7, 2023 · Updated 2 years ago
- ☆33 · Nov 11, 2024 · Updated last year
- [ACL Findings 2026] Official implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acc… ☆31 · Apr 14, 2026 · Updated 3 weeks ago
- Research work aimed at addressing the problem of modeling infinite-length context ☆48 · Dec 18, 2025 · Updated 4 months ago
- ☆13 · Apr 9, 2026 · Updated 3 weeks ago
- Unofficial implementations of block- and layer-wise pruning methods for LLMs ☆78 · Apr 29, 2024 · Updated 2 years ago
- Official repo for "SparseLLM: Global Pruning of LLMs" (NeurIPS 2024) ☆68 · Mar 27, 2025 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆31 · Feb 25, 2025 · Updated last year
- [ACL'25 Findings] Official repo for "HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation Task" ☆41 · Apr 7, 2025 · Updated last year
- ☆12 · Jan 8, 2025 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆40 · Mar 11, 2024 · Updated 2 years ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆87 · Jun 20, 2025 · Updated 10 months ago
- Trusted Mamba Contrastive Network for Multi-View Clustering ☆16 · Dec 10, 2025 · Updated 4 months ago
- ☆64 · Mar 30, 2026 · Updated last month
- [CVPR'25] Attention IoU: Examining Biases in CelebA using Attention Maps ☆13 · Mar 26, 2025 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆46 · Feb 28, 2026 · Updated 2 months ago