NVIDIA / nvidia-resiliency-ext
NVIDIA Resiliency Extension is a Python package that lets framework developers and users add fault-tolerance features to training workloads. It improves effective training time by minimizing downtime due to failures and interruptions.
☆262 · Updated this week
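The package is meant to be wired into an existing training loop rather than run standalone. Below is a minimal sketch of attaching its fault-tolerance machinery to a PyTorch Lightning run; the `FaultToleranceCallback` name and the arguments shown (`autoresume`, `calculate_timeouts`, `exp_dir`) follow my reading of the project's documented Lightning integration and may differ by version, and the script is assumed to be launched with the package's `ft_launcher` instead of plain `torchrun`.

```python
# Sketch (assumed API, not verified against a specific release): attach
# nvidia-resiliency-ext fault tolerance to a PyTorch Lightning training loop.
import pytorch_lightning as pl
from nvidia_resiliency_ext.ptl_resiliency import FaultToleranceCallback

fault_tol_cb = FaultToleranceCallback(
    autoresume=True,          # resume the job after a detected failure/restart
    calculate_timeouts=True,  # derive hang-detection timeouts from observed step times
    exp_dir="./ft_state",     # directory where fault-tolerance state is persisted
)

trainer = pl.Trainer(
    max_steps=1000,
    callbacks=[fault_tol_cb],  # hang detection and restart handling hook into the loop here
)

# trainer.fit(model, datamodule)  # model and datamodule defined elsewhere;
# launch with the package's ft_launcher so rank monitoring is active.
```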
Alternatives and similar repositories for nvidia-resiliency-ext
Users interested in nvidia-resiliency-ext are comparing it to the libraries listed below.
- A library to analyze PyTorch traces. ☆462 · Updated last week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- Perplexity GPU Kernels ☆560 · Updated 3 months ago
- ☆159 · Updated last year
- torchcomms: a modern PyTorch communications API ☆327 · Updated last week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆462 · Updated this week
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- Offline optimization of your disaggregated Dynamo graph ☆184 · Updated this week
- ☆322 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆156 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆462 · Updated last month
- CUDA checkpoint and restore utility ☆410 · Updated 4 months ago
- A low-latency & high-throughput serving engine for LLMs ☆470 · Updated last month
- A tool for bandwidth measurements on NVIDIA GPUs. ☆618 · Updated 9 months ago
- Microsoft Collective Communication Library ☆381 · Updated 2 years ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 5 months ago
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Efficient and easy multi-instance LLM serving ☆524 · Updated 5 months ago
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- NCCL Profiling Kit ☆152 · Updated last year
- Perplexity open source garden for inference technology ☆359 · Updated last month
- RDMA and SHARP plugins for the NCCL library ☆223 · Updated 3 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆134 · Updated 2 weeks ago
- This is a plugin which lets EC2 developers use libfabric as a network provider while running NCCL applications. ☆204 · Updated this week
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆168 · Updated this week