L1aoXingyu / llm-infer-bench
☆12 · Updated 2 years ago
Alternatives and similar repositories for llm-infer-bench
Users interested in llm-infer-bench are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Summary of system papers/frameworks/codes/tools on training or serving large model ☆57 · Updated 2 years ago
- An object detection codebase based on MegEngine. ☆28 · Updated 3 years ago
- ☆16 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- ☆27 · Updated 8 months ago
- Quantized Attention on GPU ☆44 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Updated last year
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆50 · Updated 2 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 5 months ago
- ☆13 · Updated 2 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆84 · Updated 8 months ago
- An external memory allocator example for PyTorch. ☆16 · Updated 4 months ago
- ☆11 · Updated 11 months ago
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated last year
- GPTQ inference TVM kernel ☆41 · Updated last year
- TVMScript kernel for deformable attention ☆25 · Updated 4 years ago
- ☆89 · Updated last month
- ☆51 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆20 · Updated 11 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 5 months ago
- ☆157 · Updated 2 years ago
- Training LLaMA language model with MMEngine! It supports LoRA fine-tuning! ☆41 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year
- AFPQ code implementation ☆24 · Updated 2 years ago
- study of cutlass ☆22 · Updated last year