DeepLink-org / AIChipBenchmark
☆ 28 · Updated last month
Alternatives and similar repositories for AIChipBenchmark
Users who are interested in AIChipBenchmark are comparing it to the libraries listed below.
- ☆ 128 · Updated 7 months ago
- ☆ 139 · Updated last year
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆ 256 · Updated this week
- ☆ 59 · Updated 8 months ago
- ☆ 149 · Updated 7 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆ 104 · Updated 2 months ago
- ☆ 81 · Updated last week
- DeepSparkHub selects hundreds of application algorithms and models, covering various fields of AI and general-purpose computing, to suppo… ☆ 65 · Updated last month
- Deep Learning Framework Performance Profiling Toolkit ☆ 285 · Updated 3 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆ 130 · Updated last year
- FlagScale is a large model toolkit based on open-source projects. ☆ 336 · Updated this week
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆ 83 · Updated 2 years ago
- An LLM theoretical performance analysis tool supporting parameter-count, FLOPs, memory, and latency analysis (see the sketch after this list). ☆ 101 · Updated 3 weeks ago
- ☆ 102 · Updated 4 months ago
- A hands-on tutorial on the core principles of TVM. ☆ 62 · Updated 4 years ago
- ☆ 72 · Updated 8 months ago
- ☆ 69 · Updated 9 months ago
- Code reading for TVM. ☆ 76 · Updated 3 years ago
- Efficient operator implementations for the Cambricon Machine Learning Unit (MLU). ☆ 125 · Updated last week
- ☆ 137 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆ 39 · Updated 5 months ago
- A tutorial for CUDA & PyTorch. ☆ 150 · Updated 6 months ago
- heterogeneity-aware-lowering-and-optimization ☆ 255 · Updated last year
- GLake: optimizing GPU memory management and IO transmission. ☆ 471 · Updated 4 months ago
- Simple Dynamic Batching Inference ☆ 145 · Updated 3 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆ 61 · Updated last year
- NART (NART is not A RunTime), a deep learning inference framework. ☆ 37 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆ 474 · Updated last year
- ☆ 207 · Updated 8 months ago
- ☆ 145 · Updated 5 months ago
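
The theoretical performance analysis entry above invites a worked example. Below is a minimal, hypothetical Python sketch of the kind of estimate such a tool produces (parameter count, per-token FLOPs, weight memory, and a memory-bound decode latency). The function names and the 7B-class model and hardware figures are illustrative assumptions, not taken from any listed repository.

```python
# Back-of-envelope LLM analysis: parameter count, FLOPs per decoded token,
# weight memory, and a memory-bound latency estimate. All model and hardware
# numbers below are illustrative assumptions, not measurements.

def transformer_params(n_layers: int, d_model: int, d_ff: int, vocab: int) -> int:
    """Approximate parameter count of a decoder-only transformer."""
    attn = 4 * d_model * d_model   # Q, K, V, O projections
    mlp = 3 * d_model * d_ff       # gated MLP: gate, up, down projections
    embed = vocab * d_model        # token embedding table
    return n_layers * (attn + mlp) + embed

def decode_flops_per_token(params: int) -> float:
    """~2 FLOPs per weight per generated token (one multiply, one add)."""
    return 2.0 * params

def decode_latency_s(params: int, bytes_per_param: int, hbm_bw: float) -> float:
    """Single-token decode is typically memory-bound: time to stream weights."""
    return params * bytes_per_param / hbm_bw

if __name__ == "__main__":
    # A 7B-class configuration with FP16 weights on a GPU with ~2 TB/s of
    # HBM bandwidth (assumed figures).
    p = transformer_params(n_layers=32, d_model=4096, d_ff=11008, vocab=32000)
    print(f"params:               {p / 1e9:.2f} B")
    print(f"weight memory:        {p * 2 / 1e9:.1f} GB (FP16)")
    print(f"decode FLOPs/token:   {decode_flops_per_token(p) / 1e9:.1f} GFLOPs")
    print(f"decode latency/token: {decode_latency_s(p, 2, 2e12) * 1e3:.2f} ms")
```

Note the design choice in the latency estimate: single-token decode is modeled as bandwidth-limited, so the time depends on weight bytes streamed over HBM rather than on FLOPs, which is the usual first-order assumption in such analyses.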