L1aoXingyu / llm-infer-bench
☆12 · Updated 2 years ago
Alternatives and similar repositories for llm-infer-bench
Users interested in llm-infer-bench are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆30 · Updated last year
- ☆16 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated 2 years ago
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- TVMScript kernel for deformable attention ☆25 · Updated 4 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 6 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- An object detection codebase based on MegEngine. ☆28 · Updated 3 years ago
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆20 · Updated 11 months ago
- Quantized Attention on GPU ☆44 · Updated last year
- ☆13 · Updated 2 years ago
- ☆84 · Updated 8 months ago
- GPTQ inference TVM kernel ☆41 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 6 months ago
- ☆11 · Updated last year
- An external memory allocator example for PyTorch. ☆16 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated 7 months ago
- Training the LLaMA language model with MMEngine! It supports LoRA fine-tuning! ☆41 · Updated 2 years ago
- ☆27 · Updated 9 months ago
- AFPQ code implementation ☆23 · Updated 2 years ago
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling ☆50 · Updated 2 years ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated 5 months ago
- Source code for the paper "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆43 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Updated last year
- Study of CUTLASS ☆22 · Updated last year
- ☆132 · Updated 7 months ago