Efficient LLM Inference Acceleration using Prompting
☆51 · Oct 22, 2024 · Updated last year
Alternatives and similar repositories for parallel-prompt-decoding
Users that are interested in parallel-prompt-decoding are comparing it to the libraries listed below.
- Codebase for the Progressive Mixed-Precision Decoding paper. ☆19 · Jul 15, 2025 · Updated 8 months ago
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆28 · Apr 15, 2025 · Updated 11 months ago
- [NeurIPS'23] FedL2P: Federated Learning to Personalize ☆24 · Nov 8, 2025 · Updated 5 months ago
- List of Flower resources ☆12 · Feb 4, 2022 · Updated 4 years ago
- [ICCAD 2025] Squant ☆15 · Jul 3, 2025 · Updated 9 months ago
- ☆35 · Feb 10, 2025 · Updated last year
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · May 24, 2024 · Updated last year
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- ☆56 · Jul 7, 2025 · Updated 9 months ago
- ☆15 · Jan 12, 2026 · Updated 2 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- ☆15 · Apr 11, 2024 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆37 · Sep 30, 2025 · Updated 6 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆18 · Sep 29, 2024 · Updated last year
- A survey on deep research ☆49 · Sep 9, 2025 · Updated 7 months ago
- ☆22 · Apr 17, 2025 · Updated 11 months ago
- The official implementation of TinyTrain [ICML '24] ☆24 · Jul 19, 2024 · Updated last year
- ☆11 · Feb 5, 2026 · Updated 2 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆381 · Apr 22, 2025 · Updated 11 months ago
- LLM Serving Performance Evaluation Harness ☆84 · Feb 25, 2025 · Updated last year
- BlockCIrculantRNN (LSTM and GRU) using TensorFlow ☆14 · Oct 30, 2018 · Updated 7 years ago
- ☆22 · Aug 8, 2025 · Updated 8 months ago
- DifferentialEquations.jl with PyTorch ☆11 · Oct 12, 2022 · Updated 3 years ago
- ☆91 · Aug 18, 2024 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆57 · Oct 9, 2025 · Updated 6 months ago
- An open-source parameterizable NPU generator with a full-stack multi-target compilation stack for intelligent workloads. ☆73 · Sep 29, 2025 · Updated 6 months ago
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Jul 24, 2025 · Updated 8 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆65 · Sep 28, 2024 · Updated last year
- ☆32 · Aug 22, 2020 · Updated 5 years ago
- [Findings of ACL 2023] Communication-Efficient Federated Learning for Multilingual Machine Translation with Adapter ☆12 · Sep 4, 2023 · Updated 2 years ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆164 · Apr 13, 2025 · Updated 11 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 9 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆262 · Nov 18, 2024 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆363 · Feb 5, 2026 · Updated 2 months ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆21 · Jan 24, 2025 · Updated last year
- The CaMLSys project template used for researching Federated Learning. ☆23 · Updated this week
- Official code and data repository of "MathChat: Benchmarking Mathematical Reasoning and Instruction Following in Multi-Turn Inte…" ☆22 · Jun 3, 2024 · Updated last year
- Mutual information estimators and benchmarks ☆14 · Mar 2, 2026 · Updated last month