zhihu / ZhiLight
A highly optimized LLM inference acceleration engine for Llama and its variants.
☆906 · Updated 6 months ago
Alternatives and similar repositories for ZhiLight
Users interested in ZhiLight are comparing it to the libraries listed below.
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆238 · Updated last year
- TVM Documentation in Simplified Chinese / TVM 中文文档 ☆2,984 · Updated last month
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆358 · Updated 2 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆1,297 · Updated last year
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,168 · Updated this week
- ☆518 · Updated last week
- A higher-performance OpenAI LLM service than vLLM serve: a pure C++ high-performance OpenAI LLM service implemented with GPRS+TensorRT-LLM+… ☆161 · Updated last month
- Adds Sequence Parallelism to LLaMA-Factory ☆598 · Updated 3 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆995 · Updated this week
- ☆1,033 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆273 · Updated 5 months ago
- Deep Learning Deployment Framework: supports tf/torch/trt/trtllm/vllm and other NN frameworks. Supports dynamic batching and streaming mo… ☆168 · Updated 8 months ago
- Train your Agent model via our easy and efficient framework ☆1,688 · Updated last month
- DLRover: An Automatic Distributed Deep Learning System ☆1,620 · Updated this week
- A scalable, end-to-end training pipeline for general-purpose agents ☆363 · Updated 6 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ☆94 · Updated last year
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆271 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆496 · Updated 9 months ago
- ☆73 · Updated last year
- FlagScale is a large model toolkit based on open-source projects. ☆463 · Updated last week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,498 · Updated last week
- ☆203 · Updated 3 months ago
- ☆77 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆626 · Updated this week
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆279 · Updated 8 months ago
- Minimal-cost training of a 0.5B R1-Zero ☆799 · Updated 8 months ago
- UltraScale Playbook (Chinese edition) ☆112 · Updated 10 months ago
- [NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models ☆1,165 · Updated 3 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- ☆801 · Updated 2 weeks ago