microsoft / mscclpp
MSCCL++: A GPU-driven communication stack for scalable AI applications
☆315 · Updated this week
Alternatives and similar repositories for mscclpp:
Users interested in mscclpp are comparing it to the libraries listed below:
- Microsoft Collective Communication Library☆343 · Updated last year
- NCCL Profiling Kit☆128 · Updated 8 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention☆326 · Updated this week
- Synthesizer for optimal collective communication algorithms☆106 · Updated 11 months ago
- Microsoft Collective Communication Library☆60 · Updated 4 months ago
- A low-latency & high-throughput serving engine for LLMs☆327 · Updated last month
- nnScaler: Compiling DNN models for Parallel Training☆103 · Updated last month
- NVIDIA Inference Xfer Library (NIXL)☆191 · Updated this week
- RDMA and SHARP plugins for the NCCL library☆184 · Updated this week
- High-performance Transformer implementation in C++.☆109 · Updated 2 months ago
- ☆191 · Updated 8 months ago
- ☆75 · Updated 2 years ago
- An easy-to-understand TensorOp Matmul Tutorial☆331 · Updated 6 months ago
- Experimental projects related to TensorRT☆94 · Updated this week
- A tool for bandwidth measurements on NVIDIA GPUs.☆392 · Updated last month
- ☆57 · Updated 2 months ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud.☆116 · Updated last year
- Zero Bubble Pipeline Parallelism☆373 · Updated 3 weeks ago
- Efficient and easy multi-instance LLM serving☆339 · Updated this week
- A collection of benchmarks to measure basic GPU capabilities☆309 · Updated last month
- Shared Middle-Layer for Triton Compilation☆233 · Updated 2 weeks ago
- A baseline repository of Auto-Parallelism in Training Neural Networks☆143 · Updated 2 years ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver…☆229 · Updated this week
- A validation and profiling tool for AI infrastructure☆302 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the …☆109 · Updated this week
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor cores with WMMA API and MMA PTX instruct…☆371 · Updated 6 months ago
- Disaggregated serving system for Large Language Models (LLMs).☆507 · Updated 7 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling☆58 · Updated 10 months ago
- An experimental parallel training platform☆54 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).☆240 · Updated 4 months ago