This is the source code for our ICML 2025 paper, "Accelerating Large Language Model Reasoning via Speculative Search".
☆23 · Jun 1, 2025 · Updated 11 months ago
Alternatives and similar repositories for LLMReasoning-SpecSearch
Users interested in LLMReasoning-SpecSearch are comparing it to the libraries listed below.
- This is the code for our ICLR 2025 paper, titled "Computing Circuits Optimization via Model-Based Circuit Genetic Evolution". ☆13 · May 27, 2025 · Updated 11 months ago
- A novel template-free retrosynthesizer that can generate diverse sets of reactants for a desired product via discrete conditional variati… ☆15 · Aug 7, 2022 · Updated 3 years ago
- This is the code for G2MILP, a deep learning-based mixed-integer linear programming (MILP) instance generator. ☆36 · Oct 3, 2024 · Updated last year
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Jul 15, 2025 · Updated 9 months ago
- USTC Resources ☆24 · Sep 18, 2023 · Updated 2 years ago
- Panning for gold in 💩 ☆47 · Updated this week
- PyTorch implementation of our paper accepted by ICML 2023, "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Jun 7, 2023 · Updated 2 years ago
- Here is the Feiyue handbook for all ECE students, including 1) how to prepare for your application, 2) official program organized by HUST… ☆16 · Jun 6, 2020 · Updated 5 years ago
- Cross-Self KV Cache Pruning for Efficient Vision-Language Inference ☆10 · Dec 15, 2024 · Updated last year
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆100 · Dec 2, 2025 · Updated 5 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- ☆17 · May 2, 2024 · Updated 2 years ago
- Reproductions of the multi-agent reinforcement learning algorithms VDN, QMIX, QTRAN, and QPLEX ☆37 · Apr 6, 2023 · Updated 3 years ago
- Source code for the architectural simulator used for modeling the PUD system proposed in our HPCA 2024 paper `MIMDRAM: An End-to-End Proc… ☆29 · Sep 12, 2025 · Updated 7 months ago
- Must-read papers on Knowledge Graph Embedding ☆29 · Oct 15, 2020 · Updated 5 years ago
- PyTorch implementation of our paper accepted by NeurIPS 2022, "Learning Best Combination for Efficient N:M Sparsity" ☆22 · Jan 13, 2023 · Updated 3 years ago
- LLM Quantization toolkit ☆20 · Updated this week
- [ICML 2025] KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆28 · Jan 27, 2026 · Updated 3 months ago
- This is the official Python version of CoreInfer: Accelerating Large Language Model Inference with Semantics-Inspired Adaptive Sparse Act… ☆17 · Oct 25, 2024 · Updated last year
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆21 · Apr 16, 2025 · Updated last year
- Reading notes on Speculative Decoding papers ☆31 · Apr 16, 2026 · Updated 2 weeks ago
- The first version of TritonPart ☆34 · Jan 2, 2024 · Updated 2 years ago
- Evolutionary-Algorithm and Large-Language-Model ☆23 · Nov 5, 2024 · Updated last year
- [ICLR 2025] Official implementation of paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ☆24 · Mar 16, 2025 · Updated last year
- TPAMI 2025 Survey Paper ☆29 · Mar 31, 2025 · Updated last year
- PrefixKV: Adaptive Prefix KV Cache is What Vision Instruction-Following Models Need for Efficient Generation [NeurIPS 2025] ☆18 · Oct 11, 2025 · Updated 6 months ago
- Code for KDD 2023 long paper: MetricPrompt: Prompting Model as a Relevance Metric for Few-Shot Text Classification ☆19 · Aug 10, 2024 · Updated last year
- ☆18 · Sep 21, 2022 · Updated 3 years ago
- ☆47 · Jun 27, 2024 · Updated last year
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆29 · Feb 11, 2025 · Updated last year
- Official PyTorch implementation of the paper "Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Princ… ☆42 · Jul 18, 2025 · Updated 9 months ago
- Physics eXperiment: a tool for USTC's university physics labs (plotting figures, computing uncertainties, and generating formulas) ☆24 · Dec 8, 2025 · Updated 4 months ago
- ☆56 · Apr 8, 2024 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆29 · Feb 4, 2026 · Updated 3 months ago
- Implementation of "Effective Sparsification of Neural Networks with Global Sparsity Constraint" ☆31 · Mar 24, 2022 · Updated 4 years ago
- ☆35 · Jun 13, 2025 · Updated 10 months ago
- [CVPR 2026] OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆78 · Apr 20, 2026 · Updated 2 weeks ago
- ☆311 · Jul 10, 2025 · Updated 9 months ago
- Awesome list for LLM pruning. ☆291 · Oct 11, 2025 · Updated 6 months ago