zhihu / ZhiLight
A highly optimized LLM inference acceleration engine for Llama and its variants.
☆902 Updated 3 months ago
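Engines in this space commonly expose an OpenAI-compatible HTTP endpoint for serving. The sketch below is a minimal, hedged example of querying such a server with the standard `openai` Python client; the port, endpoint path, and model name are illustrative assumptions, not confirmed ZhiLight defaults.

```python
# Minimal sketch: querying an OpenAI-compatible inference server.
# Assumptions (not confirmed ZhiLight defaults): the server listens on
# localhost:8080 and serves a Llama-family model named "llama-3-8b-instruct".
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local endpoint
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",          # placeholder model name
    messages=[{"role": "user", "content": "Summarize what an LLM inference engine does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, the same client code works unchanged against several of the servers listed below (e.g. vLLM-style or TensorRT-LLM-based services), with only `base_url` and `model` swapped out.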
Alternatives and similar repositories for ZhiLight
Users interested in ZhiLight are comparing it to the libraries listed below.
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations ☆236 Updated last year
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆352 Updated last week
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆1,262 Updated 9 months ago
- Higher performance OpenAI LLM service than vLLM serve: A pure C++ high-performance OpenAI LLM service implemented with GPRS+TensorRT-LLM+… ☆156 Updated 5 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆903 Updated this week
- ☆897 Updated this week
- DLRover: An Automatic Distributed Deep Learning System ☆1,571 Updated last week
- ☆508 Updated last month
- adds Sequence Parallelism into LLaMA-Factory ☆578 Updated last week
- A scalable, end-to-end training pipeline for general-purpose agents ☆360 Updated 3 months ago
- Train your Agent model via our easy and efficient framework ☆1,571 Updated last week
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction ☆93 Updated 11 months ago
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆257 Updated last month
- ☆70 Updated 11 months ago
- Deep Learning Deployment Framework: Supports tf/torch/trt/trtllm/vllm and other NN frameworks. Supports dynamic batching and streaming mo… ☆167 Updated 5 months ago
- Minimal-cost training of a 0.5B R1-Zero model ☆778 Updated 5 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆364 Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆266 Updated 2 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,153 Updated last month
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,333 Updated last week
- GLake: optimizing GPU memory management and IO transmission. ☆483 Updated 7 months ago
- UltraScale Playbook (Chinese edition) ☆82 Updated 7 months ago
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆703 Updated this week
- ☆75 Updated 11 months ago
- ☆194 Updated last month
- ☆701 Updated last month
- A powerful toolkit for compressing large models including LLM, VLM, and video generation models. ☆593 Updated 2 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆264 Updated 5 months ago
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se… ☆80 Updated this week
- Awesome LLMs on Device: A Comprehensive Survey ☆1,231 Updated 9 months ago