AI-HPC-Research-Team / AIPerf
Automated machine learning as an AI-HPC benchmark
☆65 · Updated 3 years ago
Alternatives and similar repositories for AIPerf
Users interested in AIPerf are comparing it to the libraries listed below.
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆147 · Updated 3 months ago
- NCCL Examples from the official NVIDIA NCCL Developer Guide ☆19 · Updated 7 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- Synthesizer for optimal collective communication algorithms ☆119 · Updated last year
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- RDMA and SHARP plugins for the NCCL library ☆212 · Updated 3 weeks ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces ☆54 · Updated last year
- Artifact of the OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆63 · Updated last year
- High-performance NCCL plugin for Bagua ☆15 · Updated 4 years ago
- A tool for examining GPU scheduling behavior ☆89 · Updated last year
- GVProf: A Value Profiler for GPU-based Clusters ☆52 · Updated last year
- ☆53 · Updated 10 months ago
- ☆82 · Updated 5 months ago
- Model-less Inference Serving ☆91 · Updated 2 years ago
- Magnum IO community repo ☆103 · Updated 2 months ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… ☆268 · Updated 2 months ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆121 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆114 · Updated 5 months ago
- AI and Memory Wall ☆220 · Updated last year
- ☆24 · Updated 3 years ago
- ☆377 · Updated last year
- NCCL Profiling Kit ☆146 · Updated last year
- ☆155 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores ☆89 · Updated 2 years ago
- A home for the final text of all TVM RFCs ☆109 · Updated last year
- High-performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆55 · Updated 3 years ago
- ddl-benchmarks: Benchmarks for Distributed Deep Learning ☆36 · Updated 5 years ago
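Several of the repositories above concern collective communication (the NCCL examples, the collective-algorithm synthesizer, the NCCL Profiling Kit, and the RDMA/SHARP plugins). The core data movement they implement or measure, ring allreduce, can be sketched as a single-process simulation. This is an illustrative model written for this list, not the API of any repository above; real libraries overlap these steps with actual network transfers.

```python
# Illustrative, single-process simulation of ring allreduce: n "ranks"
# each hold an equal-length vector; after a reduce-scatter phase and an
# allgather phase (2*(n-1) steps total), every rank holds the elementwise
# sum. Only the data movement is modeled -- no real communication occurs.

def ring_allreduce(buffers):
    """buffers: one equal-length list per rank; returns per-rank results."""
    n = len(buffers)
    chunk = len(buffers[0]) // n  # assumes vector length divisible by n
    data = [list(b) for b in buffers]  # simulated per-rank memory

    # Phase 1: reduce-scatter. In step s, rank r sends chunk (r - s) % n
    # to its right neighbor, which accumulates it. After n - 1 steps,
    # rank r holds the complete sum of chunk (r + 1) % n.
    for s in range(n - 1):
        for r in range(n):
            c = (r - s) % n
            dst = (r + 1) % n
            for i in range(c * chunk, (c + 1) * chunk):
                data[dst][i] += data[r][i]

    # Phase 2: allgather. The completed chunks circulate around the ring,
    # overwriting stale partial sums, until every rank has all of them.
    for s in range(n - 1):
        for r in range(n):
            c = (r + 1 - s) % n
            dst = (r + 1) % n
            lo = c * chunk
            data[dst][lo:lo + chunk] = data[r][lo:lo + chunk]
    return data
```

Each rank transfers 2 × (n − 1)/n of the vector in total, which is why the ring schedule is bandwidth-optimal and why the profiling and synthesis tools above focus on it.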