SkyworkAI / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆16 · Updated last year
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year
- OneFlow Serving ☆20 · Updated 6 months ago
- ☆16 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- ☆101 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆45 · Updated 4 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated this week
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency ☆185 · Updated this week
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆43 · Updated 2 years ago
- ☆97 · Updated 7 months ago
- ☆33 · Updated 8 months ago
- ☆25 · Updated 2 years ago
- ☆78 · Updated 11 months ago
- Datasets, Transforms and Models specific to Computer Vision ☆90 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆58 · Updated 11 months ago
- Patches for huggingface transformers to save memory ☆30 · Updated 4 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Quantized Attention on GPU ☆44 · Updated 11 months ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 3 months ago
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- ☆18 · Updated last year
- ☆124 · Updated last year
- GPTQ inference TVM kernel ☆39 · Updated last year
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to onnx/onnx-runtime ☆180 · Updated 6 months ago
- GLM Series Edge Models ☆149 · Updated 4 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆46 · Updated 3 months ago