An experimentation platform for LLM inference optimisation
☆36 · Sep 19, 2024 · Updated last year
Alternatives and similar repositories for llm-inference-research
Users that are interested in llm-inference-research are comparing it to the libraries listed below.
- 16-fold memory access reduction with nearly no loss · ☆108 · Mar 26, 2025 · Updated last year
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) · ☆27 · Feb 26, 2026 · Updated 2 months ago
- ☆37 · Oct 10, 2024 · Updated last year
- ☆313 · Jul 10, 2025 · Updated 9 months ago
- ☆19 · Mar 11, 2025 · Updated last year
- ☆14 · Jun 4, 2024 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆384 · Jul 10, 2025 · Updated 9 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆297 · May 1, 2025 · Updated last year
- Explore Inter-layer Expert Affinity in MoE Model Inference · ☆16 · May 6, 2024 · Updated 2 years ago
- Finetune Google's pre-trained ViT models from HuggingFace's model hub. · ☆19 · Apr 4, 2021 · Updated 5 years ago
- ☆21 · Jun 1, 2025 · Updated 11 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) · ☆184 · Jul 10, 2024 · Updated last year
- ☆18 · May 30, 2025 · Updated 11 months ago
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning · ☆94 · Apr 20, 2026 · Updated 2 weeks ago
- ☆33 · Apr 28, 2026 · Updated last week
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆88 · Dec 7, 2025 · Updated 5 months ago
- Modular and structured prompt caching for low-latency LLM inference · ☆112 · Nov 9, 2024 · Updated last year
- Dynamic Context Selection for Efficient Long-Context LLMs · ☆56 · May 20, 2025 · Updated 11 months ago
- A modern implementation of the CHD/SHD algorithm. · ☆13 · Mar 10, 2026 · Updated last month
- Source code of "Accelerating Truss Decomposition on Heterogeneous Processors", accepted by VLDB'20 - By Yulin Che, Zhuohang Lai, Shixuan … · ☆16 · May 25, 2020 · Updated 5 years ago
- ☆99 · Nov 25, 2024 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆60 · Nov 20, 2024 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. · ☆92 · Jul 17, 2025 · Updated 9 months ago
- ☆12 · Oct 5, 2022 · Updated 3 years ago
- ☆19 · Mar 13, 2016 · Updated 10 years ago
- OpenMP-based parallel software for computing the truss decomposition of a graph. · ☆14 · Mar 28, 2018 · Updated 8 years ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ☆1,210 · Apr 8, 2026 · Updated last month
- ☆13 · Oct 13, 2025 · Updated 6 months ago
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆53 · Aug 6, 2025 · Updated 9 months ago
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" · ☆55 · Jul 16, 2024 · Updated last year
- Code Repository for the NeurIPS 2024 Paper "Toward Efficient Inference for Mixture of Experts". · ☆19 · Oct 30, 2024 · Updated last year
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem… · ☆401 · Apr 20, 2024 · Updated 2 years ago
- ☆13 · May 16, 2025 · Updated 11 months ago
- ☆11 · May 24, 2023 · Updated 2 years ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆32 · Jun 14, 2024 · Updated last year
- Official code repository for Findings of EMNLP 2022 paper: PseudoReasoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Popula… · ☆11 · Oct 18, 2022 · Updated 3 years ago
- A collection of my talks · ☆12 · Jan 19, 2026 · Updated 3 months ago
- Official implementation for paper "Navigating Labels and Vectors: A Unified Approach to Filtered Approximate Nearest Neighbor Search" · ☆35 · Dec 21, 2024 · Updated last year
- The official implementation of paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. · ☆50 · Oct 18, 2024 · Updated last year