mlc-ai / llm-perf-bench
☆120 · Updated Apr 22, 2024
Alternatives and similar repositories for llm-perf-bench
Users that are interested in llm-perf-bench are comparing it to the libraries listed below
- ☆12 · Updated Sep 1, 2023
- ☆172 · Updated this week
- Study of CUTLASS ☆22 · Updated Nov 10, 2024
- ☆38 · Updated Jul 19, 2025
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Updated Sep 13, 2025
- ☆145 · Updated Jan 30, 2025
- ☆13 · Updated Mar 27, 2023
- Benchmark scripts for TVM ☆74 · Updated Mar 15, 2022
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Updated Sep 10, 2024
- Debug print operator for cudagraph debugging ☆14 · Updated Aug 2, 2024
- A basic Docker-based installation of TVM ☆11 · Updated Jun 23, 2022
- ☆15 · Updated Apr 15, 2022
- Repository for CPU kernel generation for LLM inference ☆28 · Updated Jul 13, 2023
- Serving multiple LoRA-finetuned LLMs as one ☆1,139 · Updated May 8, 2024
- ☆42 · Updated Sep 8, 2023
- ☆16 · Updated Mar 30, 2024
- ☆192 · Updated Mar 28, 2023
- This repository contains integer operators on GPUs for PyTorch. ☆237 · Updated Sep 29, 2023
- ☆250 · Updated Jul 27, 2025
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated Jul 21, 2023
- A home for the final text of all TVM RFCs. ☆109 · Updated Sep 24, 2024
- A demo and a series of documents for learning diffusion models. ☆42 · Updated Jun 29, 2023
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · Updated Aug 6, 2025
- Advanced ultra-low-bitrate compression techniques for the LLaMA family of LLMs ☆110 · Updated Jan 11, 2024
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Updated Mar 15, 2024
- ☆160 · Updated Sep 15, 2023
- A deep learning intermediate representation for multi-platform compilation optimization ☆10 · Updated Oct 28, 2024
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" ☆31 · Updated Apr 1, 2025
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights. ☆2,911 · Updated Sep 30, 2023
- ☆25 · Updated Jun 12, 2023
- TinyChatEngine: On-device LLM inference library ☆940 · Updated Jul 4, 2024
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆16 · Updated Nov 11, 2024
- ☆13 · Updated May 25, 2023
- Implementation of a hierarchical Mamba as described in the paper "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo…" ☆15 · Updated Nov 11, 2024
- ☆13 · Updated Dec 9, 2024
- Python package of rocm-smi-lib ☆24 · Updated Dec 15, 2025
- A list of awesome neuro-symbolic papers ☆51 · Updated Jul 25, 2022
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆192 · Updated Jan 28, 2025
- Experiments with inference on Llama ☆103 · Updated Jun 6, 2024