facebookresearch / param
PArametrized Recommendation and AI Model (PARAM) benchmark is a repository for developing numerous uBenchmarks, as well as end-to-end networks, for evaluating training and inference platforms.
☆137 · Updated this week
Alternatives and similar repositories for param:
Users interested in param are comparing it to the libraries listed below.
- Microsoft Collective Communication Library ☆65 · Updated 5 months ago
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆106 · Updated last year
- NCCL Profiling Kit ☆133 · Updated 10 months ago
- RDMA and SHARP plugins for the NCCL library ☆191 · Updated 3 weeks ago
- ☆79 · Updated 2 years ago
- Microsoft Collective Communication Library ☆344 · Updated last year
- Ultra | Ultimate | Unified CCL ☆65 · Updated 2 months ago
- Thunder Research Group's Collective Communication Library ☆36 · Updated last year
- oneAPI Collective Communications Library (oneCCL) ☆232 · Updated last week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆345 · Updated this week
- RCCL Performance Benchmark Tests ☆64 · Updated this week
- This is a plugin that lets EC2 developers use libfabric as the network provider while running NCCL applications. ☆169 · Updated this week
- Repository for MLCommons Chakra schema and tools ☆39 · Updated last year
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆73 · Updated last year
- PyTorch process group third-party plugin for UCC ☆20 · Updated last year
- Repository for MLCommons Chakra schema and tools ☆96 · Updated last month
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 2 years ago
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- Unified Collective Communication Library ☆251 · Updated last week
- ☆142 · Updated 3 months ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- A schedule language for large model training ☆146 · Updated 10 months ago
- An experimental parallel training platform ☆54 · Updated last year
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆151 · Updated this week
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- ☆36 · Updated 4 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Magnum IO community repo ☆90 · Updated 3 months ago
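Many of the repositories above (NCCL plugins, MSCCL++, oneCCL, RCCL tests, UCC) optimize or benchmark collective operations such as all-reduce. As background, here is a minimal single-process, pure-Python simulation of the ring all-reduce algorithm these libraries commonly implement; it is an illustrative sketch, not the API or implementation of any listed project, and all names (`ring_allreduce`, `buffers`) are made up for this example.

```python
# Illustrative simulation of ring all-reduce (sum) across N "ranks",
# each owning a vector; after the collective, every rank holds the
# element-wise sum. Real libraries (e.g. NCCL) overlap send/recv on
# GPUs/NICs; here communication is simulated with list copies.

def ring_allreduce(buffers):
    n = len(buffers)                   # number of ranks in the ring
    size = len(buffers[0])
    assert size % n == 0, "vector length must divide evenly into chunks"
    chunk = size // n

    # Phase 1: reduce-scatter. At step s, rank src sends chunk (src - s)
    # to its ring neighbor, which accumulates it. Incoming data is
    # collected first to mimic simultaneous send/recv within a step.
    for s in range(n - 1):
        incoming = []
        for r in range(n):
            src = (r - 1) % n
            c = (src - s) % n          # chunk index src sends this step
            incoming.append((r, c, buffers[src][c * chunk:(c + 1) * chunk]))
        for r, c, data in incoming:
            for i, v in enumerate(data):
                buffers[r][c * chunk + i] += v

    # Phase 2: all-gather. Rank r now holds the fully reduced chunk
    # (r + 1) mod n; circulate reduced chunks around the ring.
    for s in range(n - 1):
        incoming = []
        for r in range(n):
            src = (r - 1) % n
            c = (src + 1 - s) % n      # chunk src finished most recently
            incoming.append((r, c, buffers[src][c * chunk:(c + 1) * chunk]))
        for r, c, data in incoming:
            buffers[r][c * chunk:(c + 1) * chunk] = data
    return buffers

# 4 ranks, 4-element vectors: rank r owns [4r, 4r+1, 4r+2, 4r+3].
ranks = [[float(r * 4 + i) for i in range(4)] for r in range(4)]
expected = [sum(col) for col in zip(*ranks)]      # column-wise sums
result = ring_allreduce([list(row) for row in ranks])
assert all(row == expected for row in result)     # every rank agrees
```

Each rank transfers 2·(N−1) chunks of size/N elements, which is why the ring variant's per-rank traffic stays near 2·size regardless of rank count; the synthesizer and sketch-guided tools listed above (e.g. TACCL) search for alternative schedules of exactly this kind of chunked exchange.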