MIRALab-USTC / LLMReasoning-SpecSearch
This is the source code for our ICML 2025 paper, "Accelerating Large Language Model Reasoning via Speculative Search".
☆20 · Updated 3 months ago
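For orientation, below is a minimal toy sketch of speculative decoding, the general draft-then-verify paradigm that speculative search builds on. This is not the paper's SpecSearch algorithm; `draft_model`, `target_model`, and `target_accepts` are hypothetical stand-ins over a toy vocabulary.

```python
# Toy draft-then-verify loop: a cheap draft model proposes a block of tokens,
# and an expensive target model verifies them, keeping the accepted prefix.
# NOTE: illustrative sketch only, NOT the SpecSearch method from the paper.
import random

VOCAB = list(range(10))

def draft_model(prefix):
    """Hypothetical cheap proposer (stand-in for a small draft LM)."""
    random.seed(hash(tuple(prefix)))
    return random.choice(VOCAB)

def target_model(prefix):
    """Hypothetical expensive model used to correct a rejected draft."""
    random.seed(hash((tuple(prefix), 1)))
    return random.choice(VOCAB)

def target_accepts(prefix, token):
    """Stand-in for the target model's verification of one drafted token."""
    random.seed(hash((tuple(prefix), token, 2)))
    return random.random() < 0.7

def speculative_decode(prompt, gamma=4, max_len=20):
    """Draft `gamma` tokens per step, verify them in order, and let the
    target model supply one corrected token after the first rejection."""
    seq = list(prompt)
    while len(seq) < max_len:
        ctx, drafted = list(seq), []
        for _ in range(gamma):            # cheap sequential drafting
            drafted.append(draft_model(ctx))
            ctx.append(drafted[-1])
        accepted = 0
        for tok in drafted:               # verification pass
            if target_accepts(seq, tok):
                seq.append(tok)
                accepted += 1
            else:
                break
        if accepted < gamma:              # first rejection: target corrects
            seq.append(target_model(seq))
    return seq[:max_len]

print(speculative_decode([1, 2, 3]))
```

The point of the pattern is that each loop iteration makes progress with limited target-model work while often committing several drafted tokens at once; judging from the title, SpecSearch applies this speculative idea to the reasoning search process itself.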
Alternatives and similar repositories for LLMReasoning-SpecSearch
Users interested in LLMReasoning-SpecSearch are comparing it to the repositories listed below.
- This is the code for our ICLR 2025 paper, "Computing Circuits Optimization via Model-Based Circuit Genetic Evolution". ☆11 · Updated 3 months ago
- Curated collection of papers on MoE model inference. ☆250 · Updated last month
- [WSDM'24 Oral] The official implementation of the paper "DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting". ☆22 · Updated last year
- Code release for AdapMoE, accepted at ICCAD 2024. ☆32 · Updated 4 months ago
- Code repository for "Evaluating Quantized Large Language Models". ☆130 · Updated 11 months ago
- Reading notes on speculative decoding papers. ☆16 · Updated last month
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24). ☆43 · Updated 8 months ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆117 · Updated 3 weeks ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25). ☆48 · Updated 4 months ago
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models". ☆22 · Updated last month
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition. ☆13 · Updated 4 months ago
- Awesome-LLM-KV-Cache: a curated list of 📙 awesome LLM KV cache papers with code. ☆356 · Updated 6 months ago
- 📰 Must-read papers on KV cache compression (constantly updated 🤗). ☆525 · Updated last month
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding code. ☆192 · Updated last month
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning. ☆101 · Updated last year
- Official code implementation for the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives". ☆39 · Updated 5 months ago
- Adaptive Attention Sparsity with Hierarchical Top-p Pruning. ☆19 · Updated 6 months ago
- Awesome list for LLM pruning. ☆256 · Updated this week
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs". ☆27 · Updated last year
- Some docs for rookies in nics-efc. ☆22 · Updated 3 years ago