facebookresearch / param
PARAM (PArametrized Recommendation and AI Model benchmark) is a repository for the development of numerous microbenchmarks as well as end-to-end networks for evaluating training and inference platforms.
☆153 · Updated last week
Alternatives and similar repositories for param
Users interested in param are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆66 · Updated last year
- NCCL Profiling Kit ☆149 · Updated last year
- NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- RDMA and SHARP plugins for the NCCL library ☆215 · Updated 2 weeks ago
- Synthesizer for optimal collective communication algorithms ☆121 · Updated last year
- A plugin that lets EC2 developers use libfabric as the network provider when running NCCL applications. ☆198 · Updated last week
- Microsoft Collective Communication Library ☆375 · Updated 2 years ago
- ☆83 · Updated 3 years ago
- Thunder Research Group's Collective Communication Library ☆43 · Updated 4 months ago
- Fine-grained GPU sharing primitives ☆147 · Updated 4 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆439 · Updated this week
- Repository for the MLCommons Chakra schema and tools ☆142 · Updated last month
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆239 · Updated this week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆91 · Updated 2 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆78 · Updated 2 years ago
- PyTorch process-group third-party plugin for UCC ☆21 · Updated last year
- Repository for the MLCommons Chakra schema and tools ☆39 · Updated last year
- Unified Collective Communication Library ☆280 · Updated this week
- A schedule language for large model training ☆151 · Updated 3 months ago
- FTPipe and related pipeline model parallelism research ☆43 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆65 · Updated last year
- Multi-GPU communication profiler and visualizer ☆36 · Updated last year
- CloudAI Benchmark Framework ☆75 · Updated this week
- oneAPI Collective Communications Library (oneCCL) ☆248 · Updated this week
- An interference-aware scheduler for fine-grained GPU sharing ☆153 · Updated last week
- An experimental parallel training platform ☆56 · Updated last year
- 🔮 Execution-time predictions for deep neural network training iterations across different GPUs ☆63 · Updated 3 years ago
- ROCm Communication Collectives Library (RCCL) ☆403 · Updated last week
- A resilient distributed training framework ☆96 · Updated last year
- Magnum IO community repo ☆104 · Updated 3 months ago