PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation
☆32 · Nov 16, 2024 · Updated last year
Alternatives and similar repositories for PipeInfer
Users that are interested in PipeInfer are comparing it to the libraries listed below.
- ☆28 · May 24, 2025 · Updated 10 months ago
- Simplifies writing CMake-based build systems. ☆13 · Oct 27, 2025 · Updated 5 months ago
- ☆12 · Aug 31, 2023 · Updated 2 years ago
- ☆15 · May 23, 2022 · Updated 3 years ago
- ☆32 · Aug 21, 2021 · Updated 4 years ago
- An easy-to-use Java SDK for running LLaMA models on edge devices, powered by LLaMA.cpp ☆23 · Oct 17, 2023 · Updated 2 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Jan 3, 2022 · Updated 4 years ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆31 · Dec 21, 2024 · Updated last year
- ☆18 · Mar 4, 2025 · Updated last year
- A fake presidential speech generator with a Mad Libs element. ☆10 · Jul 19, 2017 · Updated 8 years ago
- ☆19 · Mar 21, 2023 · Updated 3 years ago
- ☆67 · Nov 4, 2024 · Updated last year
- Visual Tagger is a JavaScript tool that visually highlights HTML elements for AIs, aiding in identifying interactive components on web pa… ☆11 · Oct 28, 2024 · Updated last year
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and … ☆37 · Aug 29, 2025 · Updated 7 months ago
- ☆15 · Mar 18, 2026 · Updated 3 weeks ago
- This project involved the analysis of the ArXiv citation network. ☆15 · Jan 29, 2022 · Updated 4 years ago
- A high-performance batching router that optimises throughput for text inference workloads ☆16 · Sep 6, 2023 · Updated 2 years ago
- ☆26 · Mar 14, 2024 · Updated 2 years ago
- Continuous Pipelined Speculative Decoding ☆19 · Jan 4, 2026 · Updated 3 months ago
- ☆47 · Jun 7, 2024 · Updated last year
- [ICCV 2025] Staleness-Centric Optimizations for Parallel Diffusion MoE Inference. ☆21 · Oct 17, 2025 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆68 · Jun 26, 2024 · Updated last year
- ☆15 · Apr 11, 2024 · Updated 2 years ago
- Disaggregated serving system for Large Language Models (LLMs). ☆798 · Apr 6, 2025 · Updated last year
- A FastAPI application that integrates with Telegram using webhooks and OpenAI Agents SDK for AI-powered stock trading assistance, utilizi… ☆17 · May 11, 2025 · Updated 11 months ago
- ☆21 · Jun 6, 2024 · Updated last year
- DEPRECATED: see ChiScraper instead. ☆17 · Oct 13, 2024 · Updated last year
- SPAA'21: Efficient Stepping Algorithms and Implementations for Parallel Shortest Paths ☆21 · Aug 10, 2024 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆69 · Mar 7, 2024 · Updated 2 years ago
- ☆47 · Jun 27, 2024 · Updated last year
- ☆62 · Apr 3, 2026 · Updated last week
- A proxy that hosts multiple single-model runners such as LLama.cpp and vLLM ☆13 · May 30, 2025 · Updated 10 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis through CPU offloading ☆91 · Jun 16, 2025 · Updated 9 months ago
- Repository holding the code base for AC-SpGEMM: "Adaptive Sparse Matrix-Matrix Multiplication on the GPU" ☆31 · Jul 7, 2020 · Updated 5 years ago
- ☆15 · Jun 26, 2024 · Updated last year
- Code based on vLLM for the paper "Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention". ☆11 · Sep 19, 2024 · Updated last year
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth ☆18 · Aug 21, 2023 · Updated 2 years ago
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆11 · Sep 14, 2025 · Updated 6 months ago
- Linux kernel technical documentation ☆16 · Feb 26, 2026 · Updated last month