Alternatives and similar repositories for llm-perf-bench (☆120, updated Apr 22, 2024)
Users interested in llm-perf-bench are comparing it to the libraries listed below.
- A study of CUTLASS (☆22, updated Nov 10, 2024)
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer (☆96, updated Feb 20, 2026)
- Benchmark scripts for TVM (☆74, updated Mar 15, 2022)
- A standalone Flash Attention v2 kernel with no libtorch dependency (☆114, updated Sep 10, 2024)
- A debug print operator for CUDA graph debugging (☆14, updated Aug 2, 2024)
- A basic Docker-based installation of TVM (☆11, updated Jun 23, 2022)
- A repository for CPU kernel generation for LLM inference (☆28, updated Jul 13, 2023)
- Serving multiple LoRA-finetuned LLMs as one (☆1,144, updated May 8, 2024)
- Integer operators on GPUs for PyTorch (☆237, updated Sep 29, 2023)
- IntLLaMA: a fast and lightweight quantization solution for LLaMA (☆18, updated Jul 21, 2023)
- A home for the final text of all TVM RFCs (☆109, updated Sep 24, 2024)
- Demos and a series of documents for learning diffusion models (☆42, updated Jun 29, 2023)
- BitBLAS, a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment (☆753, updated Aug 6, 2025)
- Advanced ultra-low-bitrate compression techniques for the LLaMA family of LLMs (☆110, updated Jan 11, 2024)
- Optimized BERT transformer inference on NVIDIA GPUs (https://arxiv.org/abs/2210.03052) (☆478, updated Mar 15, 2024)
- A deep learning intermediate representation for multi-platform compilation optimization (☆10, updated Oct 28, 2024)
- Code for the paper "CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models" (☆31, updated Apr 1, 2025)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights (☆2,913, updated Sep 30, 2023)
- TinyChatEngine: an on-device LLM inference library (☆943, updated Jul 4, 2024)
- A Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" (☆16, updated Nov 11, 2024)
- A Python package for rocm-smi-lib (☆24, updated Dec 15, 2025)
- A list of awesome neural-symbolic papers (☆52, updated Jul 25, 2022)
- TiledCUDA, a highly efficient kernel …; the authors invite you to visit and follow their new repository at https://github.com/microsoft/TileFusion (☆194, updated Jan 28, 2025)
- Experiments with inference on Llama (☆103, updated Jun 6, 2024)
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆713, updated Aug 13, 2024)