triton-inference-server / triton_distributed
☆49 · Updated 2 months ago
Alternatives and similar repositories for triton_distributed
Users interested in triton_distributed are comparing it to the libraries listed below.
- NVIDIA Inference Xfer Library (NIXL) ☆365 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆169 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆92 · Updated this week
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- Efficient and easy multi-instance LLM serving ☆423 · Updated this week
- Perplexity GPU Kernels ☆324 · Updated 2 weeks ago
- ☆86 · Updated 5 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆384 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆365 · Updated this week
- KV cache store for distributed LLM inference ☆254 · Updated last week
- ☆25 · Updated 3 months ago
- A low-latency & high-throughput serving engine for LLMs ☆370 · Updated this week
- NCCL Profiling Kit ☆135 · Updated 11 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆81 · Updated 2 weeks ago
- PyTorch distributed training acceleration framework ☆49 · Updated 3 months ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… ☆242 · Updated 2 weeks ago
- A lightweight design for computation-communication overlap. ☆132 · Updated 3 weeks ago
- RDMA and SHARP plugins for the NCCL library ☆195 · Updated last month
- ☆208 · Updated 10 months ago
- ☆74 · Updated this week
- Experimental projects related to TensorRT ☆105 · Updated this week
- Microsoft Collective Communication Library ☆65 · Updated 6 months ago
- CUDA checkpoint and restore utility ☆341 · Updated 4 months ago
- DeepSeek-V3/R1 inference performance simulator ☆134 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆271 · Updated last week
- ☆63 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆75 · Updated this week
- The core library and APIs implementing the Triton Inference Server. ☆133 · Updated this week
- Microsoft Collective Communication Library ☆347 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated last year