vectorch-ai / ScaleLLM
A high-performance inference system for large language models, designed for production environments.
☆392 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for ScaleLLM
- A throughput-oriented, high-performance serving framework for LLMs ☆636 · Updated 2 months ago
- Efficient AI Inference & Serving ☆458 · Updated 10 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆236 · Updated 8 months ago
- LLM inference benchmark ☆350 · Updated 3 months ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (see the bandwidth sketch after this list) ☆624 · Updated 2 months ago
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆737 · Updated last week
- [NeurIPS'24 Spotlight] Speeds up long-context LLM inference by computing attention approximately with dynamic sparsity, which reduces in… ☆791 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆545 · Updated last month
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆177 · Updated 3 months ago
- Serves multiple LoRA-finetuned LLMs as one (see the adapter-batching sketch after this list) ☆984 · Updated 6 months ago
- The Triton TensorRT-LLM Backend ☆706 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆645 · Updated 3 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆206 · Updated this week
- Comparison of Language Model Inference Engines ☆190 · Updated 2 months ago
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,008 · Updated 10 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆826 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆443 · Updated last week
- A scalable and robust tree-based speculative decoding algorithm (see the draft-and-verify sketch after this list) ☆315 · Updated 3 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (see the cache-sizing sketch after this list) ☆305 · Updated 3 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆685 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 4 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆357 · Updated this week
- For releasing code related to compression methods for transformers, accompanying our publications ☆372 · Updated last month
- Materials for learning SGLang ☆96 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,149 · Updated last month
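
Why the FP16xINT4 kernel's ~4x ceiling holds at small batch sizes: token-by-token decoding is memory-bandwidth-bound, so latency tracks the bytes of weights read per step, and 4-bit weights are a quarter the size of FP16. A back-of-the-envelope sketch, using an assumed 7B-parameter model (the size is illustrative, not taken from the project above):

```python
# Hypothetical sizing: why 4-bit weights give a memory-bound decoder a ~4x ceiling.
FP16_BYTES, INT4_BYTES = 2.0, 0.5

def decode_step_gb(n_params: float, bytes_per_weight: float) -> float:
    """Approximate weight traffic per decoding step, in GB.

    At small batch sizes every weight is read once per step, so weight
    bytes dominate memory traffic and hence latency.
    """
    return n_params * bytes_per_weight / 1e9

params = 7e9  # assumed 7B-parameter model
fp16 = decode_step_gb(params, FP16_BYTES)  # ~14 GB read per step
int4 = decode_step_gb(params, INT4_BYTES)  # ~3.5 GB read per step
print(f"speedup ceiling ~= {fp16 / int4:.1f}x")  # ~= 4.0x
```

Past roughly 16-32 tokens per batch the matmuls turn compute-bound and the advantage fades, which matches the batch-size caveat in the description.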
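On serving multiple LoRA-finetuned LLMs as one: every adapter shares the base weights, and each request only adds a low-rank correction y = xW + x·A·B. A minimal NumPy sketch of that idea (the adapter store, names, and shapes are assumptions for illustration; the actual project relies on custom batched GPU kernels):

```python
import numpy as np

d_in, d_out, rank = 512, 512, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d_in, d_out)).astype(np.float32)  # shared base weight

# Hypothetical adapter store: id -> (A: d_in x r, B: r x d_out)
adapters = {
    "adapter-a": (rng.standard_normal((d_in, rank)).astype(np.float32),
                  rng.standard_normal((rank, d_out)).astype(np.float32)),
    "adapter-b": (rng.standard_normal((d_in, rank)).astype(np.float32),
                  rng.standard_normal((rank, d_out)).astype(np.float32)),
}

def batched_lora_forward(x, adapter_ids, scale=1.0):
    """Shared base matmul plus a per-request low-rank correction."""
    out = x @ W  # one expensive matmul for the whole batch
    for i, aid in enumerate(adapter_ids):
        A, B = adapters[aid]
        out[i] += scale * (x[i] @ A) @ B  # cheap: two rank-r matmuls per row
    return out

batch = rng.standard_normal((2, d_in)).astype(np.float32)
y = batched_lora_forward(batch, ["adapter-a", "adapter-b"])
```

The base matmul runs once for the whole batch while each request routes to its own adapter, which is why a single GPU can host many fine-tunes at once.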
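Several entries above (the tree-based speculative decoder, EAGLE, lookahead decoding) build on the draft-and-verify idea: a cheap drafter proposes several tokens and the large target model validates them. A simplified greedy chain variant, assuming hypothetical `draft_next`/`target_next` callables (a sketch of the family, not any one project's algorithm):

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """One draft-and-verify round; returns 1..k+1 accepted tokens."""
    # 1. The cheap draft model proposes k tokens autoregressively.
    ctx, draft = list(prefix), []
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)
    # 2. The target model checks each position (a single batched pass in
    #    practice) and we keep the longest agreeing prefix.
    accepted, ctx = [], list(prefix)
    for t in draft:
        t_star = target_next(ctx)
        if t_star != t:
            accepted.append(t_star)  # target's correction ends the round
            break
        accepted.append(t)
        ctx.append(t)
    else:
        accepted.append(target_next(ctx))  # bonus token when all drafts match
    return accepted

# Toy demo: the drafter agrees with the target except at every 5th position.
target = lambda ctx: len(ctx) * 7 % 11
drafter = lambda ctx: target(ctx) if len(ctx) % 5 else -1
print(speculative_step([3, 1, 4], drafter, target))  # several tokens per round
```

Accepting multiple tokens per target pass is what breaks the one-token-per-forward-pass bottleneck these projects attack.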
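Why KV-cache quantization is the lever for 10M-token contexts: the cache grows linearly with sequence length, storing 2 × layers × kv_heads × head_dim values per token (keys and values). A sizing sketch with assumed 7B-class shapes (illustrative, not the paper's configuration):

```python
def kv_cache_gb(layers, kv_heads, head_dim, seq_len, bytes_per_elem):
    # K and V each hold layers * kv_heads * head_dim values per token.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

layers, kv_heads, head_dim = 32, 32, 128  # assumed 7B-class shapes
for seq_len in (128_000, 1_000_000, 10_000_000):
    fp16 = kv_cache_gb(layers, kv_heads, head_dim, seq_len, 2.0)
    int4 = kv_cache_gb(layers, kv_heads, head_dim, seq_len, 0.5)
    print(f"{seq_len:>10,} tokens: {fp16:8.1f} GB fp16 -> {int4:8.1f} GB 4-bit")
```

At 10M tokens an fp16 cache runs to terabytes; 4-bit storage cuts that by 4x, and lower-bit variants further still, which is what makes extreme context lengths approachable.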