bytedance / xpu-perf
AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and versatility of software and hardware.
☆282 · Updated 3 months ago
Alternatives and similar repositories for xpu-perf
Users interested in xpu-perf are comparing it to the repositories listed below.
- ☆152 · Updated 11 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆118 · Updated 6 months ago
- A model compilation solution for various hardware. ☆457 · Updated 3 months ago
- DeepSeek-V3/R1 inference performance simulator. ☆169 · Updated 8 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆491 · Updated 8 months ago
- A lightweight design for computation-communication overlap. ☆194 · Updated 2 months ago
- ☆134 · Updated last week
- Fast and memory-efficient exact attention. ☆104 · Updated this week
- ☆140 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications. ☆442 · Updated this week
- PyTorch distributed training acceleration framework. ☆53 · Updated 4 months ago
- ☆130 · Updated 11 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe API, Achieve Peak⚡️ Performance. ☆135 · Updated 7 months ago
- ☆119 · Updated 8 months ago
- Yinghan's Code Sample. ☆359 · Updated 3 years ago
- ☆102 · Updated last year
- heterogeneity-aware-lowering-and-optimization ☆257 · Updated last year
- ☆154 · Updated last month
- ☆156 · Updated 11 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling. ☆65 · Updated last year
- A benchmark suite designed especially for deep learning operators. ☆42 · Updated 2 years ago
- An unofficial CUDA assembler, for all generations of SASS, hopefully :). ☆84 · Updated 2 years ago
- ☆60 · Updated last year
- Development repository for the Triton-Linalg conversion. ☆206 · Updated 10 months ago
- ☆192 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆43 · Updated 9 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Updated 2 years ago
- ☆328 · Updated last month
- Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU). ☆142 · Updated this week
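One of the entries above compares hardware platforms via the roofline model for LLM inference. For context, the roofline bound itself is a one-line formula; the sketch below is illustrative only, with device numbers (an A100-like ~312 TFLOP/s FP16 peak and ~2 TB/s HBM bandwidth) and arithmetic intensities chosen as assumptions, not taken from any listed repository:

```python
def roofline(peak_flops: float, peak_bw: float, intensity: float) -> float:
    """Attainable throughput (FLOP/s) under the roofline model.

    A kernel is limited either by the device's peak compute rate or by
    memory bandwidth times its arithmetic intensity (FLOPs per byte moved),
    whichever is smaller.
    """
    return min(peak_flops, peak_bw * intensity)


# Illustrative A100-like device (assumed numbers).
peak_flops = 312e12  # ~312 TFLOP/s FP16
peak_bw = 2.0e12     # ~2.0 TB/s HBM

# LLM decode is dominated by GEMV-like work: roughly 1 FLOP per byte in
# FP16, so it sits on the bandwidth-limited slope of the roofline.
decode_bound = roofline(peak_flops, peak_bw, 1.0)

# Large-batch prefill GEMMs can reach high intensity (assumed 300 here)
# and hit the compute ceiling instead.
prefill_bound = roofline(peak_flops, peak_bw, 300.0)

print(f"decode bound:  {decode_bound:.3e} FLOP/s")
print(f"prefill bound: {prefill_bound:.3e} FLOP/s")
```

This asymmetry (decode bandwidth-bound, prefill compute-bound) is the usual motivation for the prefill/decode disaggregation that several of the serving frameworks above implement.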