DeepLink-org / AIChipBenchmark
☆30 · Updated 2 months ago
Alternatives and similar repositories for AIChipBenchmark
Users interested in AIChipBenchmark are comparing it to the libraries listed below.
- ☆141 · Updated last year
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆263 · Updated last month
- ☆128 · Updated 8 months ago
- ☆150 · Updated 8 months ago
- ☆88 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆108 · Updated 4 months ago
- ☆59 · Updated 9 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆131 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆112 · Updated last year
- ☆60 · Updated this week
- DeepLearning Framework Performance Profiling Toolkit ☆288 · Updated 3 years ago
- DeepSparkHub selects hundreds of application algorithms and models, covering various fields of AI and general-purpose computing, to suppo… ☆67 · Updated last week
- An unofficial CUDA assembler, for all generations of SASS, hopefully :) ☆84 · Updated 2 years ago
- ☆108 · Updated 5 months ago
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis. ☆106 · Updated 2 months ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆40 · Updated 6 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆479 · Updated 5 months ago
- ☆138 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- ☆70 · Updated 10 months ago
- A tutorial for CUDA & PyTorch ☆154 · Updated 7 months ago
- A llama model inference framework implemented in CUDA C++. ☆62 · Updated 10 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆353 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆477 · Updated last year
- ☆75 · Updated 9 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆62 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- FlagTree is a unified compiler for multiple AI chips, which is forked from triton-lang/triton. ☆84 · Updated this week
- Introductory materials for MXMACA. ☆20 · Updated last year
- NART (NART is not A RunTime), a deep learning inference framework. ☆37 · Updated 2 years ago
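Several repositories in this list (the Roofline Model comparison tool, the LLM theoretical performance analyzer) revolve around the same back-of-the-envelope arithmetic: attainable throughput is bounded by `min(peak compute, memory bandwidth × arithmetic intensity)`. A minimal sketch of that bound, using hypothetical A100-class hardware numbers (not taken from any repository above):

```python
def roofline_tflops(peak_tflops: float, mem_bw_gbs: float, intensity: float) -> float:
    """Attainable TFLOP/s under the Roofline Model.

    intensity: arithmetic intensity, FLOPs performed per byte moved from memory.
    """
    # Memory-bound ceiling: GB/s * FLOP/byte = GFLOP/s; divide by 1000 for TFLOP/s.
    memory_bound = mem_bw_gbs * intensity / 1000.0
    return min(peak_tflops, memory_bound)

# Hypothetical accelerator: ~312 TFLOP/s peak FP16, ~1555 GB/s HBM bandwidth.
peak, bw = 312.0, 1555.0

# Batch-1 LLM decode with FP16 weights is roughly 1 FLOP/byte: memory-bound.
decode = roofline_tflops(peak, bw, intensity=1.0)

# Large-batch prefill GEMMs can reach hundreds of FLOPs/byte: compute-bound.
prefill = roofline_tflops(peak, bw, intensity=400.0)

print(f"decode:  {decode:.2f} TFLOP/s")   # memory-bound, far below peak
print(f"prefill: {prefill:.2f} TFLOP/s")  # hits the compute roof
```

The gap between the two results is why prefill/decode disaggregation (as in the serving framework listed above) pays off: the two phases sit on opposite sides of the roofline ridge point.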