zhihu / ZhiLight
A highly optimized LLM inference acceleration engine for Llama and its variants.
☆902 · Updated 3 weeks ago
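For orientation before the comparison list, here is a minimal sketch of querying a locally running ZhiLight instance, assuming it is serving an OpenAI-compatible endpoint; the base URL, API key, and model name below are placeholders rather than ZhiLight-specific values.

```python
# Minimal sketch: talk to a local ZhiLight server through an OpenAI-compatible API.
# Assumptions: the server is already running at localhost:8080 and has loaded some
# chat model; base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="llama-2-7b-chat",  # placeholder: whatever model the server loaded
    messages=[{"role": "user", "content": "Briefly explain what an inference engine does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```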
Alternatives and similar repositories for ZhiLight
Users interested in ZhiLight are comparing it to the libraries listed below.
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆229 · Updated 10 months ago
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆343 · Updated last week
- TVM documentation in Simplified Chinese / TVM 中文文档 ☆2,006 · Updated 3 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆1,219 · Updated 7 months ago
- A higher-performance OpenAI LLM service than vLLM serve: a pure C++ implementation built with GRPS+TensorRT-LLM+… ☆148 · Updated 2 months ago
- Train your Agent model via our easy and efficient framework ☆1,317 · Updated this week
- ☆474 · Updated last week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆823 · Updated last week
- A scalable, end-to-end training pipeline for general-purpose agents ☆349 · Updated last month
- ☆517 · Updated this week
- DLRover: An Automatic Distributed Deep Learning System ☆1,514 · Updated this week
- Adds sequence parallelism to LLaMA-Factory ☆538 · Updated this week
- ☆68 · Updated 9 months ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ☆92 · Updated 9 months ago
- Minimal-cost training of a 0.5B R1-Zero model ☆765 · Updated 2 months ago
- Deep Learning Deployment Framework: supports tf/torch/trt/trtllm/vllm and other NN frameworks. Supports dynamic batching and streaming mo… ☆165 · Updated 2 months ago
- ☆72 · Updated 8 months ago
- FlagScale is a large-model toolkit built on open-source projects. ☆333 · Updated this week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,172 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆263 · Updated this week
- UltraScale Playbook (Chinese edition) ☆48 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆471 · Updated 4 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆233 · Updated 4 months ago
- ☆590 · Updated 3 weeks ago
- Awesome LLMs on Device: A Comprehensive Survey ☆1,167 · Updated 6 months ago
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆635 · Updated this week
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆221 · Updated 3 months ago
- A self-learning tutorial for CUDA high-performance programming. ☆690 · Updated last month
- ☆52 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆654 · Updated 3 months ago