Efficient LLM Inference Acceleration using Prompting
☆51 · Oct 22, 2024 · Updated last year
Alternatives and similar repositories for parallel-prompt-decoding
Users interested in parallel-prompt-decoding are comparing it to the repositories listed below.
- FPGA-based hardware acceleration for dropout-based Bayesian Neural Networks. ☆27 · Aug 15, 2023 · Updated 2 years ago
- A method for accelerating LLM inference via streamlined semi-autoregressive generation and draft verification. ☆28 · Apr 15, 2025 · Updated 11 months ago
- Official PyTorch implementation of "EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization" ☆23 · Oct 24, 2021 · Updated 4 years ago
- A list of Flower resources ☆12 · Feb 4, 2022 · Updated 4 years ago
- ☆35 · Feb 10, 2025 · Updated last year
- Papers collected for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · May 24, 2024 · Updated last year
- Federated Learning for Artificial Intelligence and Machine Learning ☆16 · Sep 11, 2025 · Updated 6 months ago
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- ☆56 · Jul 7, 2025 · Updated 8 months ago
- ☆15 · Jan 12, 2026 · Updated 2 months ago
- BESA, a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- Fork of the Flame repo for training some new features in development ☆19 · Updated this week
- ☆15 · Apr 11, 2024 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆35 · Sep 30, 2025 · Updated 5 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆18 · Sep 29, 2024 · Updated last year
- APEX+, an LLM serving simulator ☆44 · Jun 16, 2025 · Updated 9 months ago
- A survey on deep research ☆47 · Sep 9, 2025 · Updated 6 months ago
- The official implementation of TinyTrain [ICML '24] ☆24 · Jul 19, 2024 · Updated last year
- ☆21 · Apr 17, 2025 · Updated 11 months ago
- [EMNLP Findings 2024] MobileQuant: Mobile-friendly Quantization for On-device Language Models ☆66 · Sep 22, 2024 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆372 · Apr 22, 2025 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆83 · Feb 25, 2025 · Updated last year
- BlockCIrculantRNN (LSTM and GRU) using TensorFlow ☆14 · Oct 30, 2018 · Updated 7 years ago
- Short, easy-to-run code examples presented in Flower's FlowerMonthly online meetings. ☆14 · Feb 16, 2024 · Updated 2 years ago
- ☆21 · Aug 8, 2025 · Updated 7 months ago
- DifferentialEquations.jl with PyTorch ☆11 · Oct 12, 2022 · Updated 3 years ago
- ☆91 · Aug 18, 2024 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆55 · Oct 9, 2025 · Updated 5 months ago
- The official repo of continuous speculative decoding ☆32 · Mar 28, 2025 · Updated 11 months ago
- An open-source parameterizable NPU generator with a full-stack, multi-target compilation toolchain for intelligent workloads. ☆73 · Sep 29, 2025 · Updated 5 months ago
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Jul 24, 2025 · Updated 7 months ago
- ☆37 · Nov 14, 2025 · Updated 4 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆65 · Sep 28, 2024 · Updated last year
- FaceGrabber is introduced in the following paper: D. Merget, T. Eckl, M. Schwörer, P. Tiefenbacher, and G. Rigoll, “Capturing Facial Vide… ☆11 · Sep 7, 2016 · Updated 9 years ago
- [Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapters ☆12 · Sep 4, 2023 · Updated 2 years ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 11 months ago
- Test scripts for exploring PyTorch JIT and quantization capabilities ☆11 · Mar 8, 2021 · Updated 5 years ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆261 · Nov 18, 2024 · Updated last year