zhihu / TLLM_QMM
TLLM_QMM strips the quantized-kernel implementation out of NVIDIA's TensorRT-LLM, removes the NVInfer dependency, and exposes it as an easy-to-use PyTorch module. We modified the dequantization and weight preprocessing to align with popular quantization algorithms such as AWQ and GPTQ, and combined them with the new FP8 quantization.
☆16 · Updated last year
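For readers unfamiliar with weight-only quantized GEMM, the sketch below shows the computation such kernels implement: per-group dequantization of integer weights (AWQ/GPTQ-style scales and zero points) followed by a matmul. This is a plain PyTorch reference, not TLLM_QMM's actual API; the function name, the unpacked integer weight layout, and the scale/zero shapes are illustrative assumptions (real AWQ/GPTQ checkpoints pack 4-bit weights and use their own layouts), and the fused kernels avoid ever materializing the dequantized weight matrix.

```python
# Reference sketch only: this is NOT TLLM_QMM's API. It illustrates what a
# weight-only quantized GEMM (per-group scales and zero points) computes.
# Function name, shapes, and the unpacked integer layout are illustrative
# assumptions; real AWQ/GPTQ checkpoints pack 4-bit weights.
import torch

def dequant_gemm_reference(x, qweight, scales, zeros, group_size=128):
    """x: [M, K] activations; qweight: [K, N] integer weights (unpacked);
    scales/zeros: [K // group_size, N] per-group quantization parameters."""
    K, _ = qweight.shape
    # Expand per-group parameters to per-row, then dequantize: w = (q - z) * s
    s = scales.repeat_interleave(group_size, dim=0)[:K]
    z = zeros.repeat_interleave(group_size, dim=0)[:K]
    w = (qweight.to(x.dtype) - z) * s
    return x @ w  # a fused kernel does this without materializing w in memory

# Tiny smoke test (float32 so it runs on CPU; the real kernels target fp16/bf16/FP8)
M, K, N, G = 4, 256, 128, 128
x = torch.randn(M, K)
qweight = torch.randint(0, 16, (K, N))      # 4-bit value range [0, 15]
scales = torch.rand(K // G, N) * 0.01
zeros = torch.full((K // G, N), 8.0)        # midpoint used as zero point
y = dequant_gemm_reference(x, qweight, scales, zeros, group_size=G)
print(y.shape)  # torch.Size([4, 128])
```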
Alternatives and similar repositories for TLLM_QMM
Users that are interested in TLLM_QMM are comparing it to the libraries listed below
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- DeepRec Extension is an easy-to-use, stable and efficient large-scale distributed training system based on DeepRec. ☆12 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- ☆518 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Updated last year
- ☆130 · Updated last year
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous cluster ☆159 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated 2 weeks ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆289 · Updated 4 months ago
- KV cache store for distributed LLM inference ☆384 · Updated last month
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆186 · Updated 2 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆494 · Updated 9 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆272 · Updated 5 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆85 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆107 · Updated 3 weeks ago
- ☆219 · Updated 2 years ago
- ☆58 · Updated 5 years ago
- ☆51 · Updated 9 months ago
- ☆152 · Updated last year
- ☆47 · Updated last year
- A lightweight parameter server interface ☆87 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- ☆127 · Updated 4 years ago
- ☆141 · Updated last year
- ☆96 · Updated 9 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆995 · Updated this week
- ☆206 · Updated 8 months ago
- LLM training technologies developed by kwai ☆69 · Updated this week