APEX+ is an LLM Serving Simulator
☆44 · Updated Jun 16, 2025
Alternatives and similar repositories for apex_plus
Users interested in apex_plus are comparing it to the repositories listed below.
- A simple API for using CUPTI (☆10 · updated Aug 19, 2025)
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) (☆31 · updated Jun 14, 2024)
- PyTorch implementation of "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks" (☆14 · updated Mar 25, 2023)
- A from-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical report (☆18 · updated Jan 15, 2025)
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs (☆25 · updated Sep 23, 2025)
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] (☆42 · updated May 13, 2025)
- Open-source implementation of "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" (☆78 · updated Oct 15, 2025)
- (no description) (☆12 · updated Oct 16, 2022)
- Implementation of DPMLBench: Holistic Evaluation of Differentially Private Machine Learning (☆11 · updated Nov 24, 2023)
- Efficient Long-context Language Model Training by Core Attention Disaggregation (☆96 · updated Mar 5, 2026)
- An end-to-end GCN inference accelerator written in HLS (☆18 · updated Apr 5, 2022)
- Tiny-Megatron: a minimalistic re-implementation of the Megatron library (☆23 · updated Sep 1, 2025)
- Proof of concept for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] (☆67 · updated Oct 2, 2025)
- (no description) (☆12 · updated Jun 30, 2025)
- [ASPLOS '26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter (☆156 · updated Feb 27, 2026)
- (no description) (☆56 · updated Jul 7, 2025)
- Efficient and easy multi-instance LLM serving (☆532 · updated Mar 12, 2026)
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank (☆75 · updated Nov 4, 2024)
- Dynamic Telegram trading bot (☆18 · updated Feb 21, 2025)
- CHAI: a library for dynamic pruning of attention heads for efficient LLM inference (☆22 · updated Dec 11, 2024)
- (no description) (☆10 · updated Mar 31, 2022)
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding (☆96 · updated Dec 2, 2025)
- Code for PrivateKT (☆15 · updated May 7, 2023)
- Estimate MFU for DeepSeek-V3 (☆26 · updated Jan 5, 2025)
- (WIP) Parallel inference for black-forest-labs' FLUX model (☆19 · updated Nov 18, 2024)
- (no description) (☆54 · updated Sep 18, 2025)
- Python 3 interface to the Linux NUMA library (☆26 · updated May 14, 2021)
- The SCMC and PSCMC programming language (☆18 · updated Dec 8, 2025)
- Skeleton code for a new 6.858 final project: an encrypted and authenticated file system (☆24 · updated Apr 20, 2022)
- An auxiliary project analyzing the characteristics of KV in DiT attention (☆33 · updated Nov 29, 2024)
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) (☆94 · updated Jul 14, 2023)
- Anima Machina (☆34 · updated this week)
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive (☆66 · updated Dec 11, 2025)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆21 · updated this week)
- Source code for Jellyfish, a soft real-time inference serving system (☆15 · updated Dec 20, 2022)
- (no description) (☆16 · updated Sep 30, 2025)
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" (☆16 · updated Jul 10, 2025)
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters (☆34 · updated May 6, 2024)
- (no description) (☆13 · updated Dec 9, 2024)