L1aoXingyu / llm-infer-bench
☆11 · Updated last year
Alternatives and similar repositories for llm-infer-bench
Users interested in llm-infer-bench are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism · ☆70 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large models · ☆57 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… · ☆27 · Updated last year
- TVMScript kernel for deformable attention · ☆25 · Updated 3 years ago
- IntLLaMA: A fast and light quantization solution for LLaMA · ☆18 · Updated last year
- ☆77 · Updated 2 months ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… · ☆46 · Updated last year
- ☆16 · Updated last year
- Distributed DataLoader for PyTorch based on Ray · ☆24 · Updated 3 years ago
- ☆22 · Updated 3 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆42 · Updated last week
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… · ☆39 · Updated last year
- ☆13 · Updated 2 years ago
- Quantized Attention on GPU · ☆44 · Updated 7 months ago
- Depict GPU memory footprint during DNN training of PyTorch · ☆11 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆16 · Updated last year
- An object detection codebase based on MegEngine. · ☆28 · Updated 2 years ago
- ☆11 · Updated 6 months ago
- GPTQ inference TVM kernel · ☆40 · Updated last year
- ☆31 · Updated last year
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". · ☆48 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts · ☆40 · Updated last year
- A toolkit for developers to simplify the transformation of nn.Module instances. It now corresponds to PyTorch's torch.fx. · ☆13 · Updated 2 years ago
- Training LLaMA language model with MMEngine! It supports LoRA fine-tuning! · ☆40 · Updated 2 years ago
- ☆74 · Updated last month
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. · ☆38 · Updated last month
- An external memory allocator example for PyTorch. · ☆14 · Updated 3 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆23 · Updated last year
- ☆96 · Updated 10 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs · ☆102 · Updated 3 months ago