zyqCSL / DiffKV
☆37 · Updated 3 months ago
Alternatives and similar repositories for DiffKV
Users who are interested in DiffKV are comparing it to the libraries listed below.
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆56 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆52 · Updated last year
- ☆26 · Updated 2 years ago
- ☆58 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆35 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆51 · Updated 6 months ago
- ☆25 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated 8 months ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆76 · Updated 3 months ago
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated last year
- ☆19 · Updated 7 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Updated last year
- ☆28 · Updated last year
- LLM inference analyzer for different hardware platforms ☆99 · Updated last month
- ☆21 · Updated 3 years ago
- PerFlow-AI is a programmable performance analysis, modeling, and prediction tool for AI systems. ☆28 · Updated 3 weeks ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆62 · Updated 10 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆34 · Updated 11 months ago
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading ☆79 · Updated 7 months ago
- ☆16 · Updated last year
- WaferLLM: Large Language Model Inference at Wafer Scale ☆84 · Updated 3 weeks ago
- Artifact for paper "PIM is All You Need: A CXL-Enabled GPU-Free System for LLM Inference", ASPLOS 2025 ☆122 · Updated 8 months ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated 5 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆44 · Updated 3 years ago
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- Compiler for Dynamic Neural Networks ☆45 · Updated 2 years ago
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA2025] ☆13 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆58 · Updated last year