facebookresearch / param
PARAM (PArametrized Recommendation and AI Model benchmark) is a repository for developing numerous micro-benchmarks (uBenchmarks) as well as end-to-end networks for evaluating training and inference platforms.
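The repository's focus is micro-benchmarking of training and inference platforms. As a rough illustration only (this is not PARAM's actual API, and `microbenchmark` is a hypothetical helper), the warmup-then-measure loop at the heart of such uBenchmarks can be sketched in plain Python:

```python
import statistics
import time

def microbenchmark(op, warmup=5, iters=50):
    """Time a single operation the way comm/compute uBenchmarks do:
    run untimed warmup iterations first, then record per-iteration
    latency and report summary statistics in microseconds."""
    for _ in range(warmup):  # warm caches, allocators, lazy init
        op()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        op()
        samples.append((time.perf_counter() - t0) * 1e6)  # microseconds
    return {
        "min_us": min(samples),
        "p50_us": statistics.median(samples),
        "mean_us": statistics.fmean(samples),
    }

# Example: time a stand-in workload (summing a buffer); a real
# benchmark would time a collective or a model kernel instead.
buf = list(range(1 << 12))
stats = microbenchmark(lambda: sum(buf))
print(sorted(stats))  # ['mean_us', 'min_us', 'p50_us']
```

Reporting min alongside mean and median is a common benchmarking choice: the minimum approximates the noise-free latency, while the spread between min and mean hints at interference.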
☆150 · Updated last week
Alternatives and similar repositories for param
Users interested in param are comparing it to the libraries listed below.
- NCCL Profiling Kit ☆143 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated 9 months ago
- Synthesizer for optimal collective communication algorithms ☆116 · Updated last year
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆121 · Updated last year
- Microsoft Collective Communication Library ☆360 · Updated last year
- RDMA and SHARP plugins for nccl library ☆203 · Updated this week
- A plugin which lets EC2 developers use libfabric as the network provider while running NCCL applications. ☆185 · Updated this week
- ☆82 · Updated 2 years ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆414 · Updated this week
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- oneCCL Bindings for Pytorch* ☆102 · Updated last month
- Fine-grained GPU sharing primitives ☆144 · Updated last month
- Repository for MLCommons Chakra schema and tools ☆125 · Updated last month
- Repository for MLCommons Chakra schema and tools ☆39 · Updated last year
- Pytorch process group third-party plugin for UCC ☆21 · Updated last year
- A schedule language for large model training ☆149 · Updated 3 weeks ago
- Thunder Research Group's Collective Communication Library ☆41 · Updated 2 months ago
- Magnum IO community repo ☆98 · Updated 3 weeks ago
- oneAPI Collective Communications Library (oneCCL) ☆244 · Updated 2 weeks ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆63 · Updated last year
- RCCL Performance Benchmark Tests ☆75 · Updated 3 weeks ago
- A hierarchical collective communications library with portable optimizations ☆36 · Updated 9 months ago
- FTPipe and related pipeline model parallelism research ☆42 · Updated 2 years ago
- An experimental parallel training platform ☆54 · Updated last year
- Unified Collective Communication Library ☆275 · Updated this week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆85 · Updated 2 years ago
- rocSHMEM intra-kernel networking runtime for AMD dGPUs on the ROCm platform ☆112 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆216 · Updated this week
- ☆46 · Updated 9 months ago
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs. ☆63 · Updated 2 years ago