DeepLink-org / AIChipBenchmark
☆26 · Updated last month
Alternatives and similar repositories for AIChipBenchmark
Users interested in AIChipBenchmark are comparing it to the libraries listed below.
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆77 · Updated this week
- An unofficial CUDA assembler for all generations of SASS, hopefully :) ☆83 · Updated 2 years ago
- AI Accelerator Benchmark focuses on evaluating AI accelerators from a practical production perspective, including the ease of use and ver… ☆239 · Updated last month
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance.⚡️ ☆76 · Updated last week
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆36 · Updated 2 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer. ☆92 · Updated 2 weeks ago
- A benchmark suite designed especially for deep learning operators. ☆42 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 · Updated last year
- Tencent Distribution of TVM. ☆15 · Updated 2 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆88 · Updated 4 months ago
- Play with GEMM using TVM. ☆91 · Updated last year
- A lightweight design for computation-communication overlap. ☆113 · Updated last week
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library. ☆65 · Updated 9 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆54 · Updated 9 months ago
- A tutorial for CUDA & PyTorch. ☆140 · Updated 3 months ago
- A summary of awesome work on optimizing LLM inference. ☆73 · Updated last month
- A hands-on tutorial on the core principles of TVM. ☆61 · Updated 4 years ago
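
Several of the repositories above (the Roofline Model comparison and the LLM theoretical-performance tools in particular) revolve around the same back-of-the-envelope arithmetic: attainable throughput is the minimum of peak compute and arithmetic intensity times memory bandwidth. A minimal sketch, using hypothetical hardware numbers rather than any specific accelerator:

```python
# Minimal roofline-model sketch for LLM inference.
# attainable = min(peak_flops, arithmetic_intensity * peak_bandwidth)
# All hardware numbers below are hypothetical, for illustration only.

def attainable_tflops(ai_flops_per_byte: float,
                      peak_tflops: float,
                      peak_bw_tb_s: float) -> float:
    """Roofline: memory-bound below the ridge point, compute-bound above it."""
    return min(peak_tflops, ai_flops_per_byte * peak_bw_tb_s)

# Hypothetical accelerator: 300 TFLOP/s fp16 peak, 2 TB/s HBM bandwidth.
PEAK_TFLOPS, PEAK_BW = 300.0, 2.0
ridge = PEAK_TFLOPS / PEAK_BW  # arithmetic intensity (FLOPs/byte) at the ridge

# Decode-phase GEMV at batch 1 with fp16 weights: 2 FLOPs per 2-byte weight
# read, so ~1 FLOP/byte -> far below the ridge, heavily memory-bound.
decode_ai = 1.0
# Prefill GEMM with many tokens: high weight reuse -> compute-bound.
prefill_ai = 500.0

print(ridge)                                                # 150.0
print(attainable_tflops(decode_ai, PEAK_TFLOPS, PEAK_BW))   # 2.0
print(attainable_tflops(prefill_ai, PEAK_TFLOPS, PEAK_BW))  # 300.0
```

This is why decode latency tracks weight-read bandwidth (and improves with weight quantization) while prefill throughput tracks peak compute.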