wangqinsi1 / CoreInfer
This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Activation.
⭐15 · Updated 2 months ago
Alternatives and similar repositories for CoreInfer:
Users interested in CoreInfer are comparing it to the libraries listed below:
- Awesome-LLM-KV-Cache: A curated list of Awesome LLM KV Cache Papers with Codes. ⭐191 · Updated last month
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ⭐152 · Updated 7 months ago
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ⭐29 · Updated 7 months ago
- Must-read papers on KV Cache Compression (constantly updating). ⭐266 · Updated last week
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ⭐32 · Updated 5 months ago
- PyTorch implementation of paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ⭐81 · Updated last year
- ⭐49 · Updated 8 months ago
- ⭐40 · Updated last month
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ⭐32 · Updated 2 months ago
- Sirius, an efficient correction mechanism, which significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ⭐21 · Updated 4 months ago
- ⭐43 · Updated 3 weeks ago
- ⭐212 · Updated 8 months ago
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ⭐61 · Updated 3 weeks ago
- SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ⭐31 · Updated last month
- ⭐51 · Updated 9 months ago
- 16-fold memory access reduction with nearly no loss ⭐67 · Updated 2 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ⭐235 · Updated 2 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ⭐42 · Updated 2 months ago
- Multi-Candidate Speculative Decoding ⭐34 · Updated 9 months ago
- Accepted LLM Papers in NeurIPS 2024 ⭐33 · Updated 3 months ago
- An experimentation platform for LLM inference optimisation ⭐28 · Updated 4 months ago
- ⭐16 · Updated last month
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference ⭐54 · Updated last month
- My own implementation of "Fast Inference from Transformers via Speculative Decoding" ⭐11 · Updated last year
- ⭐36 · Updated 4 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ⭐70 · Updated 7 months ago
- ⭐34 · Updated 2 months ago
- The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework". ⭐53 · Updated 2 months ago
- ⭐35 · Updated last month
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ⭐56 · Updated 3 months ago