AI-HPC-Research-Team / AIPerf
Automated machine learning as an AI-HPC benchmark
☆64 · Updated 2 years ago
Alternatives and similar repositories for AIPerf:
Users interested in AIPerf are comparing it to the repositories listed below.
- Synthesizer for optimal collective communication algorithms ☆102 · Updated 9 months ago
- Fine-grained GPU sharing primitives ☆140 · Updated 4 years ago
- ☆73 · Updated 2 years ago
- NCCL Fast Socket, a transport-layer plugin that improves NCCL collective communication performance on Google Cloud ☆114 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆59 · Updated 8 months ago
- ☆20 · Updated 2 years ago
- NCCL Profiling Kit ☆127 · Updated 6 months ago
- ☆83 · Updated 2 years ago
- NCCL examples from the official NVIDIA NCCL Developer Guide ☆15 · Updated 6 years ago
- High-performance NCCL plugin for Bagua ☆15 · Updated 3 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆79 · Updated last year
- ☆79 · Updated 2 months ago
- RDMA and SHARP plugins for the NCCL library ☆172 · Updated last week
- BytePS examples (vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- A tool for examining GPU scheduling behavior ☆71 · Updated 5 months ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 7 months ago
- GVProf: A Value Profiler for GPU-based Clusters ☆48 · Updated 10 months ago
- RCCL performance benchmark tests ☆55 · Updated 2 weeks ago
- GPU-scheduler-for-deep-learning ☆201 · Updated 4 years ago
- ddl-benchmarks: Benchmarks for distributed deep learning ☆37 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- An efficient pipelined data-parallel approach for training large models ☆73 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- ☆36 · Updated last month
- Release repository of SuperNeurons ☆52 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- Repository for SysML19 artifact evaluation ☆53 · Updated 5 years ago
- Microsoft Collective Communication Library ☆61 · Updated 2 months ago
- Machine Learning System ☆14 · Updated 4 years ago
- Results and code for the MLPerf™ Training v1.1 benchmark ☆23 · Updated last year