facebookresearch / param
PArametrized Recommendation and AI Model benchmark (PARAM) is a repository for developing numerous µBenchmarks as well as end-to-end networks for evaluating training and inference platforms.
☆140 · Updated this week
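For context, param's µBenchmarks include collective-communication measurements of the kind listed below. The sketch here is not param's actual API or CLI — it is a minimal, illustrative communication microbenchmark written against plain `torch.distributed`, showing what such a benchmark typically measures:

```python
# Minimal sketch of a collective-communication microbenchmark in the
# style of param's comms benchmarks (NOT param's actual API).
# Launch with: torchrun --nproc_per_node=2 allreduce_bench.py
import time
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK / WORLD_SIZE / MASTER_ADDR in the environment,
    # so the default env:// init method works here.
    dist.init_process_group(backend="gloo")  # use "nccl" with CUDA tensors on GPU hosts
    rank = dist.get_rank()

    tensor = torch.ones(1 << 20)  # 1M floats (~4 MB) per rank

    # Warm-up iterations so one-time initialization doesn't skew timing.
    for _ in range(5):
        dist.all_reduce(tensor)

    iters = 50
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(tensor)
    elapsed = time.perf_counter() - start

    if rank == 0:
        print(f"all_reduce avg latency: {elapsed / iters * 1e3:.3f} ms")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```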
Alternatives and similar repositories for param
Users interested in param are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆65 · Updated 6 months ago
- Synthesizer for optimal collective communication algorithms ☆106 · Updated last year
- NCCL Profiling Kit ☆134 · Updated 11 months ago
- ☆79 · Updated 2 years ago
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- RDMA and SHARP plugins for the NCCL library ☆193 · Updated last month
- A plugin that lets EC2 developers use libfabric as the network provider while running NCCL applications. ☆172 · Updated last week
- Repository for the MLCommons Chakra schema and tools ☆39 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆73 · Updated last year
- Microsoft Collective Communication Library ☆346 · Updated last year
- rocSHMEM intra-kernel networking runtime for AMD dGPUs on the ROCm platform ☆86 · Updated this week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆81 · Updated last year
- PyTorch process-group third-party plugin for UCC (see the sketch after this list) ☆21 · Updated last year
- An experimental parallel training platform ☆54 · Updated last year
- Repository for the MLCommons Chakra schema and tools ☆99 · Updated 2 months ago
- A schedule language for large model training ☆148 · Updated 11 months ago
- RCCL Performance Benchmark Tests ☆65 · Updated last week
- Ultra | Ultimate | Unified CCL ☆75 · Updated last week
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- ☆47 · Updated 2 years ago
- Set of datasets for the deep learning recommendation model (DLRM) ☆47 · Updated 2 years ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆360 · Updated this week
- Thunder Research Group's Collective Communication Library ☆36 · Updated last year
- ☆44 · Updated 3 years ago
- LLM serving cluster simulator ☆100 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆76 · Updated 4 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated last year
- A hierarchical collective communications library with portable optimizations ☆35 · Updated 5 months ago
- ☆65 · Updated last month
- A resilient distributed training framework ☆95 · Updated last year
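Regarding the UCC process-group plugin above: third-party collective backends plug into PyTorch through the same `init_process_group` entry point as the built-in ones. A minimal sketch, assuming a PyTorch build in which the `"ucc"` backend is registered and available (if it is not, initialization will raise):

```python
# Minimal sketch: selecting a third-party collective backend in PyTorch.
# Assumes the "ucc" backend is registered in this PyTorch build; launch
# with torchrun so RANK / WORLD_SIZE are set in the environment.
import torch
import torch.distributed as dist

dist.init_process_group(backend="ucc")  # instead of "nccl" or "gloo"
t = torch.ones(4)
dist.all_reduce(t)  # this collective now runs over the UCC transport
print(f"rank {dist.get_rank()}: {t.tolist()}")
dist.destroy_process_group()
```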