yuyangJin / PerFlow-AI
PerFlow-AI is a programmable performance analysis, modeling, and prediction tool for AI systems.
☆28 · Updated last week
Alternatives and similar repositories for PerFlow-AI
Users interested in PerFlow-AI are comparing it to the libraries listed below.
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated 9 months ago
- Compiler for Dynamic Neural Networks ☆45 · Updated 2 years ago
- ☆84 · Updated 3 years ago
- ☆88 · Updated 8 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆64 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆56 · Updated last year
- ☆80 · Updated 3 weeks ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆58 · Updated last year
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆161 · Updated 4 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆34 · Updated last year
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆27 · Updated last year
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆104 · Updated 3 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆43 · Updated 3 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆77 · Updated 3 months ago
- ☆32 · Updated last year
- ☆23 · Updated 2 years ago
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- Artifacts of EVT ASPLOS'24 ☆28 · Updated last year
- An experimental parallel training platform ☆56 · Updated last year
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆41 · Updated 8 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆68 · Updated last year
- A lightweight design for computation-communication overlap. ☆219 · Updated 3 weeks ago
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆53 · Updated last month
- GVProf: A Value Profiler for GPU-based Clusters ☆52 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated 2 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆91 · Updated 3 years ago
- ☆79 · Updated 3 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆93 · Updated 2 years ago