L1aoXingyu / llm-infer-bench
☆11 · Updated last year
Alternatives and similar repositories for llm-infer-bench
Users interested in llm-infer-bench are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Summary of system papers/frameworks/code/tools on training or serving large models ☆56 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆27 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last week
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated 10 months ago
- TVMScript kernel for deformable attention ☆25 · Updated 3 years ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year
- [ATC'23] SmartMoE, a MoE implementation for PyTorch ☆62 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 · Updated last month
- GPTQ inference TVM kernel ☆38 · Updated last year
- An object detection codebase based on MegEngine. ☆28 · Updated 2 years ago
- Efficient Mixture of Experts for LLM Paper List ☆64 · Updated 4 months ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated 11 months ago
- A study of CUTLASS ☆21 · Updated 6 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆46 · Updated 5 months ago
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆19 · Updated 2 months ago