bytedance / ByteMLPerf
ByteMLPerf is an AI accelerator benchmark that evaluates AI accelerators from a practical production perspective, including the ease of use and versatility of software and hardware.
☆265 · Updated 2 months ago
Alternatives and similar repositories for ByteMLPerf
Users interested in ByteMLPerf are comparing it to the libraries listed below.
- ☆150 · Updated 9 months ago
- ☆129 · Updated 9 months ago
- ☆139 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆112 · Updated 5 months ago
- A model compilation solution for various hardware ☆451 · Updated 2 months ago
- ☆91 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆481 · Updated 6 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆97 · Updated 2 years ago
- A lightweight design for computation-communication overlap. ☆181 · Updated last week
- ☆141 · Updated last year
- ☆59 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆96 · Updated 2 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆425 · Updated this week
- Yinghan's Code Sample ☆353 · Updated 3 years ago
- ☆136 · Updated 10 months ago
- DeepSeek-V3/R1 inference performance simulator ☆170 · Updated 6 months ago
- ☆109 · Updated 6 months ago
- PyTorch distributed training acceleration framework ☆53 · Updated 2 months ago
- Examples of CUDA implementations by Cutlass CuTe ☆241 · Updated 3 months ago
- ☆100 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆115 · Updated last year
- ☆193 · Updated 2 years ago
- ☆154 · Updated 9 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆479 · Updated last year
- heterogeneity-aware-lowering-and-optimization ☆256 · Updated last year
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. ☆90 · Updated this week
- An Easy-to-understand TensorOp Matmul Tutorial ☆385 · Updated last week
- A simple, high-performance CUDA GEMM implementation. ☆409 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated last month
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆41 · Updated 7 months ago
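
As background for the roofline-model comparison listed above: the roofline model bounds a kernel's attainable throughput by the minimum of the device's peak compute and its memory bandwidth multiplied by the kernel's arithmetic intensity. The sketch below shows that calculation only; the hardware numbers are illustrative placeholders, not figures taken from any of the repositories listed here.

```python
# Minimal roofline-model sketch: attainable throughput is bounded by
# min(peak compute, memory bandwidth x arithmetic intensity).
# The hardware numbers below are illustrative placeholders, not measurements.

def attainable_tflops(peak_tflops: float, bandwidth_tb_per_s: float,
                      intensity_flop_per_byte: float) -> float:
    """Roofline bound for a kernel with the given arithmetic intensity."""
    return min(peak_tflops, bandwidth_tb_per_s * intensity_flop_per_byte)

# Example: a decode-phase GEMV in LLM inference is memory-bound
# (roughly 2 FLOPs per byte of fp16 weights read).
peak_tflops = 312.0        # hypothetical accelerator peak fp16 compute (TFLOPS)
bandwidth_tb_per_s = 2.0   # hypothetical HBM bandwidth (TB/s)
intensity = 2.0            # kernel arithmetic intensity (FLOPs per byte)

print(attainable_tflops(peak_tflops, bandwidth_tb_per_s, intensity))  # 4.0 TFLOPS, bandwidth-bound
```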