SJTU-IPADS / PhoenixOS
Fast OS-level support for GPU checkpoint and restore
☆270 · Updated 4 months ago
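For intuition, here is a minimal, hypothetical sketch (assumed names, not PhoenixOS's API) of what an application-level GPU checkpoint and restore of a single device buffer looks like with the standard CUDA runtime; PhoenixOS performs this kind of capture transparently at the OS level for full GPU execution state.

```cpp
// Hypothetical sketch, not PhoenixOS code: checkpoint one device buffer to a
// host file and restore it later using only the standard CUDA runtime API.
#include <cuda_runtime.h>
#include <cstddef>
#include <cstdio>
#include <vector>

// Checkpoint: copy the device buffer to host memory, then write it to a file.
bool checkpoint_buffer(const void* dev_ptr, std::size_t bytes, const char* path) {
    std::vector<char> host(bytes);
    if (cudaMemcpy(host.data(), dev_ptr, bytes, cudaMemcpyDeviceToHost) != cudaSuccess)
        return false;
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    bool ok = std::fwrite(host.data(), 1, bytes, f) == bytes;
    std::fclose(f);
    return ok;
}

// Restore: read the file back into host memory and copy it onto the device.
bool restore_buffer(void* dev_ptr, std::size_t bytes, const char* path) {
    std::vector<char> host(bytes);
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(host.data(), 1, bytes, f) == bytes;
    std::fclose(f);
    return ok && cudaMemcpy(dev_ptr, host.data(), bytes, cudaMemcpyHostToDevice) == cudaSuccess;
}
```

A transparent OS-level checkpoint also has to capture device allocator state, CUDA contexts, streams, and in-flight kernels, which is what a user-level copy like this leaves out.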
Alternatives and similar repositories for PhoenixOS
Users interested in PhoenixOS are comparing it to the libraries listed below.
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- A scheduling framework for multitasking over diverse XPUs, including GPUs, NPUs, ASICs, and FPGAs ☆154 · Updated 2 weeks ago
- Personal paper reading notes covering machine learning systems, AI infrastructure, and other interesting topics ☆154 · Updated this week
- High-performance Transformer implementation in C++ ☆148 · Updated last year
- An interference-aware scheduler for fine-grained GPU sharing ☆159 · Updated 2 months ago
- Artifacts for our NSDI'23 paper TGS ☆94 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling ☆104 · Updated 3 years ago
- DeepSeek-V3/R1 inference performance simulator ☆177 · Updated 10 months ago
- A framework for generating realistic LLM serving workloads ☆98 · Updated 3 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆123 · Updated last month
- Injecting Adrenaline into LLM Serving: Boosting Resource Utilization and Throughput via Attention Disaggregation ☆40 · Updated 2 months ago
- NCCL Profiling Kit ☆150 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- Stateful LLM Serving ☆95 · Updated 10 months ago
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆41 · Updated 8 months ago
- NEO is an LLM inference engine built to ease the GPU memory crisis through CPU offloading ☆79 · Updated 7 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆455 · Updated this week
- Research prototype of PRISM, a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing ☆54 · Updated 5 months ago
- A lightweight design for computation-communication overlap ☆213 · Updated last week
- Open ABI and FFI for Machine Learning Systems ☆313 · Updated this week
- Example code for using DC QPs to issue RDMA READ and WRITE operations to remote GPU memory ☆152 · Updated last year