SkyworkAI / vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
☆16 Updated 11 months ago
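For context on how an engine like this is typically consumed: vLLM exposes an OpenAI-compatible HTTP API for serving. The sketch below builds and sends a completion request to such a server. The endpoint URL, default port, and model name are illustrative assumptions, not taken from this repository.

```python
import json
import urllib.request

# Assumed local endpoint: vLLM's OpenAI-compatible server
# listens on port 8000 by default once started.
VLLM_URL = "http://localhost:8000/v1/completions"


def build_completion_payload(model: str, prompt: str,
                             max_tokens: int = 64,
                             temperature: float = 0.7) -> dict:
    """Build a request body for the OpenAI-compatible /v1/completions route."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(model: str, prompt: str) -> str:
    """POST a completion request to a locally running vLLM server."""
    payload = build_completion_payload(model, prompt)
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# Usage (requires a running server), e.g.:
#   complete("Skywork/Skywork-13B-base", "The capital of France is")
```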
Alternatives and similar repositories for vllm:
Users interested in vllm are comparing it to the libraries listed below.
- OneFlow Serving ☆20 Updated 3 weeks ago
- Transformer-related optimization, including BERT and GPT ☆17 Updated last year
- ☆28 Updated 3 months ago
- ☆16 Updated last year
- ☆23 Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 Updated 10 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆22 Updated last year
- An easy way to run, test, benchmark, and tune OpenCL kernel files ☆23 Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 Updated last year
- Whisper in TensorRT-LLM ☆15 Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 Updated 10 months ago
- ☆84 Updated last month
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆37 Updated last year
- Tianchi NVIDIA TensorRT Hackathon 2023: Generative AI Model Optimization Contest, third-place preliminary-round solution ☆49 Updated last year
- ☢️ TensorRT 2023 final round: inference acceleration and optimization of a Llama model based on TensorRT-LLM ☆47 Updated last year
- OneFlow->ONNX ☆43 Updated 2 years ago
- LLM deployment project based on ONNX ☆36 Updated 7 months ago
- ☆11 Updated last year
- Awesome code, projects, books, etc. related to CUDA ☆16 Updated 3 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 Updated last year
- ☆124 Updated last year
- GPTQ inference TVM kernel ☆38 Updated last year
- Manages the vllm-nccl dependency ☆17 Updated 11 months ago
- Multiple GEMM operators constructed with CUTLASS to support LLM inference ☆18 Updated 7 months ago
- Comparison of LLM API performance metrics: an in-depth analysis of key metrics such as TTFT and TPS ☆17 Updated 7 months ago
- LLM deployment in practice: TensorRT-LLM, Triton Inference Server, vLLM ☆26 Updated last year
- IntLLaMA: a fast and lightweight quantization solution for LLaMA ☆18 Updated last year
- ☆11 Updated last year
- ☆72 Updated 5 months ago