stepfun-ai / StepMesh
☆312 · Updated last week
Alternatives and similar repositories for StepMesh
Users interested in StepMesh are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆183 · Updated last month
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- High performance Transformer implementation in C++. ☆140 · Updated 9 months ago
- Pipeline Parallelism Emulation and Visualization ☆70 · Updated 5 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆434 · Updated 5 months ago
- Allow torch tensor memory to be released and resumed later ☆164 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆142 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆168 · Updated 7 months ago
- Perplexity GPU Kernels ☆528 · Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆251 · Updated 4 months ago
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆114 · Updated 5 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆66 · Updated last year
- ☆124 · Updated last year
- Stateful LLM Serving ☆88 · Updated 8 months ago
- ☆75 · Updated 3 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆433 · Updated this week
- ☆149 · Updated 8 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆190 · Updated last year
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆377 · Updated 3 weeks ago
- ☆147 · Updated 10 months ago
- A low-latency & high-throughput serving engine for LLMs ☆440 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆460 · Updated this week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆63 · Updated last month
- ☆101 · Updated last year
- ☆69 · Updated 10 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆278 · Updated 8 months ago
- ☆43 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆286 · Updated 5 months ago