DeepLink-org / AIChipBenchmark
☆25 · Updated this week
Alternatives and similar repositories for AIChipBenchmark:
Users interested in AIChipBenchmark are comparing it to the libraries listed below.
- ☆139 · Updated 10 months ago
- ☆144 · Updated 2 months ago
- ☆127 · Updated 2 months ago
- ☆58 · Updated 3 months ago
- ☆87 · Updated 6 months ago
- ☆35 · Updated 5 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆89 · Updated 2 weeks ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 2 weeks ago
- A benchmark suite designed especially for deep learning operators ☆42 · Updated 2 years ago
- An fp8 flash attention implementation for the Ada architecture using the cutlass repository ☆57 · Updated 7 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆226 · Updated this week
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆82 · Updated last year
- ☆82 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆93 · Updated last year
- NART (NART is not A RunTime), a deep learning inference framework. ☆38 · Updated 2 years ago
- LLM theoretical performance analysis tool supporting params, FLOPs, memory, and latency analysis. ☆79 · Updated 2 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 7 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆105 · Updated 6 months ago
- ☆19 · Updated 3 years ago
- ☆66 · Updated 4 months ago
- Penn CIS 5650 (GPU Programming and Architecture) Final Project ☆29 · Updated last year
- ☆112 · Updated 11 months ago
- ☆45 · Updated this week
- ☆132 · Updated 2 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆25 · Updated last month
- Transformer-related optimizations, including BERT and GPT ☆17 · Updated last year
- Play GEMM with TVM ☆89 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆129 · Updated last year
- Examples of CUDA implementations with CUTLASS CuTe ☆143 · Updated last month
- ☆29 · Updated 10 months ago
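One entry above compares hardware platforms via the Roofline model for LLM inference. A minimal sketch of that idea follows; the platform specs and the decode arithmetic-intensity figure are illustrative assumptions, not numbers taken from any repo listed here.

```python
# Roofline model: attainable performance is capped by the lower of the
# compute roof (peak FLOP/s) and the memory roof (bandwidth x intensity).
def attainable_tflops(peak_tflops: float, mem_bw_tbs: float,
                      intensity_flop_per_byte: float) -> float:
    # TB/s * FLOP/byte = TFLOP/s, so the units line up directly.
    return min(peak_tflops, mem_bw_tbs * intensity_flop_per_byte)

# Hypothetical platform specs: (peak TFLOP/s, memory bandwidth in TB/s).
platforms = {
    "accelerator_a": (312.0, 2.0),
    "accelerator_b": (120.0, 0.9),
}

# Single-token LLM decode streams every weight once per token, so its
# arithmetic intensity is on the order of 1 FLOP/byte: firmly memory-bound.
decode_intensity = 1.0

for name, (peak, bw) in platforms.items():
    perf = attainable_tflops(peak, bw, decode_intensity)
    print(f"{name}: {perf:.1f} TFLOP/s attainable in decode of {peak:.0f} peak")
```

At that intensity both hypothetical platforms land on the memory roof, which is why decode throughput tracks bandwidth rather than peak compute.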
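Another entry is a theoretical performance analysis tool for LLMs (params, FLOPs, memory, latency). A back-of-the-envelope sketch of that kind of estimate, using the standard approximations (params ≈ 12·L·d² ignoring embeddings, ~2 FLOPs per parameter per token); the layer and width figures below are illustrative, not tied to any specific model:

```python
def transformer_params(n_layers: int, d_model: int) -> int:
    # Per decoder layer: QKV + output projections (4*d^2) plus a 4x-wide
    # MLP (8*d^2); embeddings and norms are ignored in this approximation.
    return n_layers * 12 * d_model * d_model

def flops_per_token(params: int) -> int:
    # Forward pass costs roughly 2 FLOPs (one multiply, one add) per weight.
    return 2 * params

def weight_memory_gib(params: int, bytes_per_param: int = 2) -> float:
    # fp16/bf16 weights at 2 bytes per parameter.
    return params * bytes_per_param / 2**30

p = transformer_params(32, 4096)  # ~6.4e9 parameters, a 7B-class shape
print(f"params: {p / 1e9:.1f}B")
print(f"FLOPs/token: {flops_per_token(p) / 1e9:.1f} GFLOP")
print(f"fp16 weights: {weight_memory_gib(p):.1f} GiB")
```

Dividing the weight-memory figure by a platform's bandwidth gives a lower bound on per-token decode latency, which is the kind of derived metric such tools report.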