ai-dynamo / nixl
NVIDIA Inference Xfer Library (NIXL)
☆712 · Updated this week
Alternatives and similar repositories for nixl
Users interested in nixl are comparing it to the libraries listed below.
- Efficient and easy multi-instance LLM serving ☆506 · Updated 2 months ago
- Perplexity GPU Kernels ☆519 · Updated 2 weeks ago
- KV cache store for distributed LLM inference ☆358 · Updated 2 months ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆628 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆433 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs) ☆721 · Updated 7 months ago
- A low-latency & high-throughput serving engine for LLMs ☆440 · Updated 3 weeks ago
- Materials for learning SGLang ☆636 · Updated 2 weeks ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆232 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆434 · Updated 5 months ago
- GLake: optimizing GPU memory management and IO transmission ☆487 · Updated 7 months ago
- CUDA checkpoint and restore utility ☆381 · Updated last month
- Offline optimization of your disaggregated Dynamo graph ☆97 · Updated this week
- A PyTorch Native LLM Training Framework ☆884 · Updated last month
- NVIDIA NCCL Tests for Distributed Training ☆121 · Updated last week
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆307 · Updated last week
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud ☆122 · Updated last year
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,165 · Updated 2 months ago
- ☆312 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs ☆564 · Updated 6 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆229 · Updated this week
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆912 · Updated last week
- Distributed Compiler based on Triton for Parallel Systems ☆1,214 · Updated 3 weeks ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving ☆460 · Updated this week
- Fast OS-level support for GPU checkpoint and restore ☆252 · Updated last month
- Microsoft Collective Communication Library ☆371 · Updated 2 years ago
- Serverless LLM Serving for Everyone ☆585 · Updated this week
- A validation and profiling tool for AI infrastructure ☆347 · Updated this week
- A large-scale simulation framework for LLM inference ☆473 · Updated 3 months ago