Fast OS-level support for GPU checkpoint and restore
☆271, updated Sep 28, 2025
Alternatives and similar repositories for PhoenixOS
Users interested in PhoenixOS are comparing it to the libraries listed below.
- ☆20, updated Jul 10, 2025
- CUDA checkpoint and restore utility — ☆424, updated Sep 15, 2025
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable — ☆210, updated Sep 21, 2024
- cricket is a virtualization solution for GPUs — ☆236, updated Sep 9, 2025
- GeminiFS: A Companion File System for GPUs — ☆71, updated Feb 18, 2025
- Official repository for "[IPDPS'24] QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" — ☆20, updated Feb 23, 2024
- Deduplication over disaggregated memory for Serverless Computing — ☆14, updated Mar 21, 2022
- Implementation repository of our SOSP'24 paper "Aceso: Achieving Efficient Fault Tolerance in Memory-Disaggregated Key-Value …" — ☆22, updated Oct 20, 2024
- A lightweight design for computation-communication overlap — ☆223, updated Jan 20, 2026
- Hooked CUDA-related dynamic libraries using automated code-generation tools — ☆171, updated Dec 12, 2023
- DeeperGEMM: crazy optimized version — ☆74, updated May 5, 2025
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs — ☆158, updated Jan 13, 2026
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] — ☆41, updated May 13, 2025
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) — ☆17, updated Jan 11, 2025
- A throughput-oriented high-performance serving framework for LLMs — ☆946, updated Oct 29, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention — ☆464, updated May 30, 2025
- An experimental communicating attention kernel based on DeepEP — ☆35, updated Jul 29, 2025
- A dynamic binary instrumentation tool for tracing and analyzing CUDA kernel instructions — ☆35, updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs — ☆270, updated Feb 2, 2026
- Artifact evaluation repository for EuroSys'24 — ☆29, updated Nov 7, 2023
- ☆241, updated Dec 25, 2025
- NEO is an LLM inference engine built to ease the GPU memory crisis through CPU offloading — ☆84, updated Jun 16, 2025
- GLake: optimizing GPU memory management and IO transmission — ☆498, updated Mar 24, 2025
- Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters — ☆15, updated Sep 2, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference — ☆283, updated May 1, 2025
- Framework to reduce autotune overhead to zero for well-known deployments — ☆97, updated Sep 19, 2025
- Artifacts for our NSDI'23 paper TGS — ☆96, updated Jun 10, 2024
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing — ☆106, updated Jun 28, 2025
- Distributed Compiler based on Triton for Parallel Systems — ☆1,371, updated Feb 13, 2026
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" — ☆36, updated Aug 29, 2025
- Debug print operator for cudagraph debugging — ☆14, updated Aug 2, 2024
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" — ☆19, updated Dec 8, 2023
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust — ☆14, updated Nov 23, 2024
- Exploring CXL on QEMU Emulation — ☆36, updated Mar 4, 2025
- [OSDI 2024] Motor: Enabling Multi-Versioning for Distributed Transactions on Disaggregated Memory — ☆50, updated Mar 3, 2024
- Canvas: End-to-End Kernel Architecture Search in Neural Networks — ☆27, updated Nov 18, 2024
- NVIDIA Inference Xfer Library (NIXL) — ☆898, updated this week
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces — ☆58, updated Aug 21, 2024
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity — ☆234, updated Sep 24, 2023