A high-throughput and memory-efficient inference and serving engine for LLMs
☆17 · updated Jun 3, 2024
Alternatives and similar repositories for vllm
Users interested in vllm are comparing it to the libraries listed below.
- ☆34 · updated Feb 3, 2025
- A study of cutlass ☆22 · updated Nov 10, 2024
- ☆23 · updated Apr 25, 2023
- ☕️ A VS Code extension for Netron; supports *.pdmodel, *.nb, *.onnx, *.pb, *.h5, *.tflite, *.pth, *.pt, *.mnn, *.param, etc. ☆14 · updated Jun 4, 2023
- ☆11 · updated Dec 26, 2025
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · updated Feb 9, 2026
- Benchmark tests supporting the TiledCUDA library ☆18 · updated Nov 19, 2024
- ☆13 · updated Mar 27, 2023
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · updated Jun 12, 2024
- Automatically deploys a Neovim setup, in the style of chxuan/vimplus ☆12 · updated Apr 22, 2025
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · updated Jan 11, 2025
- ☆17 · updated Jan 1, 2024
- FSANet: 1 MB!! Head pose estimation with MNN, TNN, and ONNXRuntime C++ ☆17 · updated Feb 4, 2022
- YOLOX with NCNN/MNN/TNN/ONNXRuntime C++ ☆13 · updated Dec 18, 2021
- Multiple GEMM operators built with cutlass to support LLM inference ☆20 · updated Aug 3, 2025
- OneFlow Serving ☆21 · updated Apr 10, 2025
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆79 · updated Aug 12, 2024
- OneFlow->ONNX ☆43 · updated Apr 19, 2023
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆23 · updated Apr 25, 2023
- ☆62 · updated Feb 15, 2026
- https://start.oneflow.org/oneflow-yolo-doc ☆23 · updated Mar 14, 2023
- An experimental communicating attention kernel based on DeepEP ☆35 · updated Jul 29, 2025
- ☆30 · updated Jul 22, 2024
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · updated Feb 22, 2026
- URS Benchmark: Evaluating LLMs on User Reported Scenarios ☆30 · updated May 30, 2025
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · updated May 29, 2024
- ☆31 · updated Aug 30, 2022
- ☆32 · updated May 26, 2024
- vLLM performance dashboard ☆42 · updated Apr 26, 2024
- Multi-select table component for Vue + Element (supports pagination, selection echo, and search) ☆12 · updated Nov 28, 2018
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆106 · updated Jun 28, 2025
- A Feishu (Lark) bot ☆12 · updated Sep 22, 2023
- Normalization Matters in Weakly Supervised Object Localization (ICCV 2021) ☆11 · updated Oct 24, 2021
- Code for the experiments in the ACL 2020 paper "Estimating predictive uncertainty for rumour verification models" ☆11 · updated May 15, 2020
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · updated Oct 1, 2025
- This repository is a reimplementation of the paper (BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model: htt…) ☆11 · updated Nov 14, 2019
- GPTQ inference TVM kernel ☆40 · updated Apr 25, 2024
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · updated Feb 20, 2026
- ☆11 · updated Feb 25, 2025