L1aoXingyu / llm-infer-bench
☆11 · Updated last year
Alternatives and similar repositories for llm-infer-bench
Users interested in llm-infer-bench are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆72 · Updated last year
- TVMScript kernel for deformable attention ☆25 · Updated 3 years ago
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆42 · Updated last month
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- An object detection codebase based on MegEngine ☆28 · Updated 2 years ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆46 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆39 · Updated last year
- Flash Dynamic Mask Attention ☆74 · Updated this week
- Quantized Attention on GPU ☆44 · Updated 8 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Training the LLaMA language model with MMEngine! It supports LoRA fine-tuning! ☆40 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 3 weeks ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆48 · Updated last year
- An external memory allocator example for PyTorch ☆14 · Updated 3 years ago
- A simple calculation for LLM MFU (Model FLOPs Utilization); see the sketch after this list ☆42 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆40 · Updated last month
- AFPQ code implementation ☆22 · Updated last year
- Depict the GPU memory footprint during DNN training with PyTorch ☆11 · Updated 2 years ago
- A study of CUTLASS ☆22 · Updated 8 months ago
- OneFlow Serving ☆20 · Updated 3 months ago
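For the MFU entry above, here is a minimal sketch of what such a calculation typically looks like, using the standard approximation that one training step costs about 6 × N FLOPs per token (2 × N forward plus 4 × N backward). This is not the linked repository's actual code; the function name and the example numbers (7B parameters, 4,000 tokens/s, 312 TFLOP/s A100 BF16 peak) are illustrative assumptions.

```python
def estimate_mfu(n_params: float, tokens_per_second: float,
                 peak_flops_per_second: float) -> float:
    """Model FLOPs Utilization (MFU).

    Uses the common 6 * n_params FLOPs-per-token training estimate
    (roughly 2x for the forward pass, 4x for the backward pass).
    """
    achieved_flops_per_second = 6.0 * n_params * tokens_per_second
    return achieved_flops_per_second / peak_flops_per_second

# Hypothetical numbers: a 7B-parameter model training at 4,000 tokens/s
# on one A100 with a 312 TFLOP/s BF16 peak.
print(f"MFU: {estimate_mfu(7e9, 4_000, 312e12):.1%}")  # -> MFU: 53.8%
```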