PiKV: KV Cache Management System for Mixture of Experts [Efficient ML System]
☆49 · Updated last month (Feb 24, 2026)
Alternatives and similar repositories for PiKV
Users that are interested in PiKV are comparing it to the libraries listed below.
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] · ☆48 · Updated last month (Feb 17, 2026)
- A Serving System for Distributed and Parallel LLM Quantization [Efficient ML System] · ☆26 · Updated 9 months ago (Jun 18, 2025)
- GraphSnapShot: Caching Local Structure for Fast Graph Learning [Efficient ML System] · ☆40 · Updated 2 months ago (Jan 1, 2026)
- Accelerating Multitask Training Through Adaptive Transition [Efficient ML Model] · ☆12 · Updated 10 months ago (May 23, 2025)
- Efficient Foundation Model Design: A Perspective From Model and System Co-Design [Efficient ML System & Model] · ☆29 · Updated last year (Feb 23, 2025)
- ☆14 · Updated 5 months ago (Sep 29, 2025)
- Open-source AI Accelerator Stack integrating compute, memory, and software — from RTL to PyTorch. · ☆25 · Updated last week (Mar 15, 2026)
- [NeurIPS 2024] The official implementation of "Image Copy Detection for Diffusion Models" · ☆18 · Updated last year (Oct 1, 2024)
- AVPipe :-) · ☆12 · Updated 4 years ago (Jul 16, 2021)
- Written-exam and interview questions for digital IC positions · ☆14 · Updated 6 years ago (Nov 17, 2019)
- NEO is an LLM inference engine built to ease the GPU memory crisis through CPU offloading · ☆90 · Updated 9 months ago (Jun 16, 2025)
- HackerRank test solutions for an FPGA engineer interview at Optiver · ☆16 · Updated 5 years ago (Jun 7, 2020)
- Benchmarking code for evaluating the cost of cache coherence protocols implemented on different platforms · ☆14 · Updated 4 years ago (Apr 13, 2021)
- [ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. · ☆56 · Updated 10 months ago (May 2, 2025)
- Official Repository for Paper "BaichuanSEED: Sharing the Potential of ExtensivE Data Collection and Deduplication by Introducing a Compet… · ☆18 · Updated last year (Aug 28, 2024)
- Source code for "Latent Plan Transformer for Trajectory Abstraction: Planning as Latent Space Inference." In NeurIPS 2024 · ☆21 · Updated last year (Dec 1, 2024)
- A file system over RDMA · ☆29 · Updated 3 years ago (Jun 17, 2022)
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. · ☆30 · Updated 11 months ago (Mar 28, 2025)
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models · ☆51 · Updated 9 months ago (Jun 12, 2025)
- Beyond KV Caching: Shared Attention for Efficient LLMs · ☆20 · Updated last year (Jul 19, 2024)
- ☆23 · Updated 3 years ago (Oct 6, 2022)
- Artifact evaluation for Dogfood · ☆12 · Updated 6 years ago (Feb 22, 2020)
- Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters. · ☆15 · Updated last year (Sep 2, 2024)
- ☆87 · Updated 5 months ago (Oct 17, 2025)
- Deduplication over disaggregated memory for serverless computing · ☆14 · Updated 4 years ago (Mar 21, 2022)
- C++17 implementation of einops for libtorch - clear and reliable tensor manipulations with Einstein-like notation · ☆11 · Updated 2 years ago (Oct 16, 2023)
- A mini 2x2 systolic array and PE demo · ☆70 · Updated 3 months ago (Dec 21, 2025)
- A QQ bot based on mirai and Graia that can execute Python, Mathematica, C++, and other code, invoke Copilot for code completion, and generate images from text with Stable Diffusion (NovelAI) · ☆16 · Updated 3 years ago (Oct 14, 2022)
- An optimized Merkle Patricia Trie implementation on GPU, fully compatible with and integrable into Ethereum. The paper is published on VL… · ☆14 · Updated last year (Apr 15, 2024)
- Deft: A Scalable Tree Index for Disaggregated Memory · ☆23 · Updated 11 months ago (Apr 23, 2025)
- ☆54 · Updated last year (Sep 5, 2024)
- Official Repository of LatentSeek · ☆78 · Updated 9 months ago (Jun 6, 2025)
- RPCNIC: A High-Performance and Reconfigurable PCIe-attached RPC Accelerator [HPCA2025] · ☆14 · Updated last year (Dec 9, 2024)
- ☆29 · Updated 4 months ago (Nov 9, 2025)
- Source code for the FAST '23 paper “MadFS: Per-File Virtualization for Userspace Persistent Memory Filesystems” · ☆48 · Updated 3 years ago (Mar 5, 2023)
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… · ☆23 · Updated 5 months ago (Oct 1, 2025)
- Pluggable in-process caching engine to build and scale high-performance services · ☆18 · Updated last week (Mar 16, 2026)
- [PACT'24] GraNNDis. A fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… · ☆10 · Updated last year (Aug 13, 2024)
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" · ☆105 · Updated 3 months ago (Dec 15, 2025)