vectorch-ai / ScaleLLM
A high-performance inference system for large language models, designed for production environments.
☆451 · Updated this week
Alternatives and similar repositories for ScaleLLM
Users interested in ScaleLLM are comparing it to the libraries listed below:
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆243 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆835 · Updated last month
- Efficient AI Inference & Serving ☆471 · Updated last year
- Serving multiple LoRA-finetuned LLMs as one ☆1,070 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆855 · Updated 10 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆209 · Updated 11 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆259 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,067 · Updated 2 weeks ago
- Comparison of Language Model Inference Engines ☆219 · Updated 6 months ago
- LLM Inference benchmark ☆421 · Updated 11 months ago
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆526 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆809 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,259 · Updated 4 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆714 · Updated 4 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated this week
- GPTQ inference Triton kernel ☆302 · Updated 2 years ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆305 · Updated last month
- The Triton TensorRT-LLM Backend ☆859 · Updated this week
- Accelerate inference without tears ☆319 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 9 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to onnx/onnx-runtime ☆173 · Updated 3 months ago
- Easy and Efficient Quantization for Transformers ☆198 · Updated 2 weeks ago
- Materials for learning SGLang ☆475 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆956 · Updated 7 months ago