sgl-project / sgl-learning-materials
Materials for learning SGLang
☆717 · Updated 2 weeks ago
Alternatives and similar repositories for sgl-learning-materials
Users interested in sgl-learning-materials are comparing it to the libraries listed below.
- Disaggregated serving system for Large Language Models (LLMs). ☆766 · Updated 9 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆626 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs. ☆937 · Updated 2 months ago
- Efficient and easy multi-instance LLM serving. ☆521 · Updated 4 months ago
- Perplexity GPU Kernels. ☆553 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention. ☆457 · Updated 7 months ago
- A low-latency & high-throughput serving engine for LLMs. ☆467 · Updated last week
- Distributed Compiler based on Triton for Parallel Systems. ☆1,315 · Updated 3 weeks ago
- Puzzles for learning Triton; play them with minimal environment configuration! (A minimal kernel in this style is sketched after this list.) ☆595 · Updated 3 weeks ago
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… (see the roofline sketch after this list) ☆611 · Updated last year
- Genai-bench is a benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆251 · Updated this week
- Zero Bubble Pipeline Parallelism. ☆447 · Updated 8 months ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs. ☆917 · Updated last month
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆871 · Updated last week
- Allow torch tensor memory to be released and resumed later. ☆202 · Updated last week
- NVIDIA Inference Xfer Library (NIXL). ☆820 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆802 · Updated 10 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆496 · Updated 9 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond. ☆753 · Updated last week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆306 · Updated 7 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference. ☆375 · Updated this week
- A large-scale simulation framework for LLM inference. ☆522 · Updated 5 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆283 · Updated 10 months ago
- ☆520 · Updated 2 weeks ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,224 · Updated 4 months ago
- ☆340 · Updated 2 weeks ago
- KV cache store for distributed LLM inference (see the KV-cache sizing sketch after this list). ☆385 · Updated 2 months ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (the online-softmax core is sketched after this list). ☆475 · Updated 8 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference. ☆475 · Updated 9 months ago
- ☆153 · Updated 10 months ago
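Several entries above (the LLM inference analyzer and the latency/memory analysis tool) revolve around roofline-style reasoning: an autoregressive decode step is bounded by either peak compute or memory bandwidth, whichever is slower. A minimal sketch of that estimate, assuming A100-class hardware numbers chosen purely for illustration:

```python
# Roofline-style lower bound for one transformer decode step.
# The hardware constants are assumptions (roughly A100-class), not measurements.

PEAK_FLOPS = 312e12   # assumed peak dense FP16 throughput, FLOP/s
PEAK_BW = 2.0e12      # assumed HBM bandwidth, bytes/s

def decode_step_time(n_params: float, bytes_per_param: int = 2,
                     batch_size: int = 1) -> float:
    """Decoding touches every weight once per step (~2 FLOPs per weight per
    token), so the step is compute- or bandwidth-bound depending on
    arithmetic intensity = FLOPs / bytes moved."""
    flops = 2 * n_params * batch_size         # ~2 FLOPs per parameter per token
    bytes_moved = n_params * bytes_per_param  # weights read once per step
    t_compute = flops / PEAK_FLOPS
    t_memory = bytes_moved / PEAK_BW
    return max(t_compute, t_memory)           # roofline: the slower side wins

if __name__ == "__main__":
    # A 7B-parameter model in FP16 at batch size 1 is heavily bandwidth-bound:
    t = decode_step_time(7e9)
    print(f"lower bound per token: {t*1e3:.2f} ms -> {1/t:.0f} tok/s")
```

At batch size 1 the memory term dominates by two orders of magnitude, which is why batching and weight quantization are the main levers on decode throughput.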
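The KV-cache projects above (paged and virtualized caches, distributed KV stores) all target the same quantity: cache size grows linearly in layers, KV heads, head dimension, sequence length, and batch size. A back-of-the-envelope sizing sketch, with a Llama-2-7B-like shape assumed for illustration:

```python
# Back-of-the-envelope KV cache sizing. The model shape below is an
# assumption (Llama-2-7B-like) chosen for illustration.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int,
                   bytes_per_elem: int = 2) -> int:
    # 2x for the K and V tensors; each layer caches one K and one V per token.
    return (2 * n_layers * n_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

if __name__ == "__main__":
    # 32 layers, 32 KV heads, head_dim 128, FP16, 4k context, batch 8:
    gib = kv_cache_bytes(32, 32, 128, seq_len=4096, batch_size=8) / 2**30
    print(f"KV cache: {gib:.1f} GiB")  # ~16 GiB; the target of paging/virtualization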
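For the Triton-centric entries (the puzzles, FlagGems, the Triton-based distributed compiler), the basic building block is a blocked kernel: each program instance loads a masked tile, computes, and stores. A minimal elementwise-add sketch, assuming a CUDA GPU with torch and triton installed:

```python
# Minimal Triton kernel in the style of the Triton puzzles: elementwise add.
# Assumes a CUDA GPU with triton and torch available.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)                # one program instance per block
    offs = pid * BLOCK + tl.arange(0, BLOCK)   # element indices for this block
    mask = offs < n_elements                   # guard the ragged final block
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)             # enough blocks to cover n elements
    add_kernel[grid](x, y, out, n, BLOCK=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(10_000, device="cuda")
    b = torch.randn(10_000, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```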
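The flash attention tutorial entry builds on the online-softmax trick: scan K/V in tiles while carrying a running max and normalizer, so the full attention matrix is never materialized. A pure-NumPy single-head sketch (shapes and tile size are illustrative assumptions, not the tutorial's code):

```python
# Online-softmax attention for a single query, processed in K/V tiles.
import numpy as np

def flash_attention_1head(q, k, v, tile=64):
    """q: (d,), k/v: (n, d). Returns softmax(k @ q / sqrt(d)) @ v, tiled."""
    d = q.shape[0]
    m = -np.inf                  # running max of scores seen so far
    l = 0.0                      # running softmax normalizer
    acc = np.zeros(d)            # running weighted sum of V rows
    for start in range(0, k.shape[0], tile):
        s = k[start:start + tile] @ q / np.sqrt(d)  # scores for this tile
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)                   # rescale old statistics
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ v[start:start + tile]
        m = m_new
    return acc / l

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=(64,))
    k, v = rng.normal(size=(256, 64)), rng.normal(size=(256, 64))
    s = k @ q / np.sqrt(64)
    p = np.exp(s - s.max())
    ref = (p / p.sum()) @ v                         # untiled reference
    assert np.allclose(flash_attention_1head(q, k, v), ref)
```

Rescaling `acc` and `l` by `exp(m - m_new)` whenever a larger score appears is the whole trick: only O(tile) scores are live at any time, which is what lets the CUDA/CUTLASS versions keep everything in on-chip memory.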