Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25]
☆42 · May 13, 2025 · Updated 9 months ago
Alternatives and similar repositories for Medusa
Users interested in Medusa are comparing it to the libraries listed below.
- Deft: A Scalable Tree Index for Disaggregated Memory ☆23 · Apr 23, 2025 · Updated 10 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆12 · Nov 8, 2024 · Updated last year
- Examples of usage for Mellanox HW offloads ☆17 · Jan 18, 2022 · Updated 4 years ago
- ☆87 · Jan 22, 2026 · Updated last month
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆19 · May 28, 2024 · Updated last year
- A caching framework for microservice applications ☆24 · Apr 22, 2024 · Updated last year
- The official implementation of the OSDI'25 paper BlitzScale ☆41 · Sep 20, 2025 · Updated 5 months ago
- Prefix-Aware Attention for LLM Decoding ☆29 · Jan 23, 2026 · Updated last month
- APEX+ is an LLM serving simulator ☆42 · Jun 16, 2025 · Updated 8 months ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆77 · Oct 15, 2025 · Updated 4 months ago
- Implementation repository of the SOSP'24 paper: Aceso: Achieving Efficient Fault Tolerance in Memory-Disaggregated Key-Value … ☆23 · Oct 20, 2024 · Updated last year
- A fast and scalable distributed lock service using programmable switches ☆19 · Jul 30, 2024 · Updated last year
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Apr 28, 2023 · Updated 2 years ago
- ☆10 · Aug 9, 2021 · Updated 4 years ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆92 · Updated this week
- A simple API to use CUPTI ☆11 · Aug 19, 2025 · Updated 6 months ago
- The source code of INFless, a native serverless platform for AI inference ☆46 · Oct 10, 2022 · Updated 3 years ago
- [OSDI 2024] Motor: Enabling Multi-Versioning for Distributed Transactions on Disaggregated Memory ☆50 · Mar 3, 2024 · Updated 2 years ago
- Next-generation datacenter OS built on kernel bypass to speed up unmodified code while improving platform density and security ☆120 · Feb 21, 2026 · Updated last week
- Artifacts for our ASPLOS'23 paper dRAID ☆29 · Feb 24, 2023 · Updated 3 years ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆30 · Jun 14, 2024 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Jun 24, 2025 · Updated 8 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆93 · Dec 2, 2025 · Updated 3 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆24 · Sep 23, 2025 · Updated 5 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention ☆33 · Nov 29, 2024 · Updated last year
- From-scratch C implementation of the multi-head latent attention used in the Deepseek-v3 technical paper ☆18 · Jan 15, 2025 · Updated last year
- ☆19 · Jun 1, 2025 · Updated 9 months ago
- ☆28 · Jun 22, 2025 · Updated 8 months ago
- Deduplication over disaggregated memory for serverless computing ☆14 · Mar 21, 2022 · Updated 3 years ago
- ☆12 · May 13, 2025 · Updated 9 months ago
- [AFK] Hardware router in Chisel (THU Network Joint Lab 2020) ☆14 · Oct 8, 2020 · Updated 5 years ago
- Training camp training-track project ☆26 · Jan 28, 2026 · Updated last month
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Dec 11, 2025 · Updated 2 months ago
- ☆17 · May 10, 2024 · Updated last year
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training ☆13 · Apr 28, 2025 · Updated 10 months ago
- ☆20 · Jun 9, 2025 · Updated 8 months ago
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth ☆17 · Aug 21, 2023 · Updated 2 years ago
- Reading seminar in the Harvard Cloud Networking and Systems Group ☆16 · Aug 29, 2022 · Updated 3 years ago
- Pluggable in-process caching engine to build and scale high-performance services ☆18 · Updated this week