microsoft / T-MAC
Low-bit LLM inference on CPU with lookup table
☆440 · Updated this week
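The header above describes T-MAC's lookup-table approach to low-bit inference: instead of dequantizing weights and multiplying, groups of low-bit weights are used as indices into precomputed tables of activation partial sums. A minimal NumPy sketch of that general idea, simplified to 1-bit {-1, +1} weights (the function name `lut_matvec`, the group size `g`, and all shapes are illustrative assumptions, not T-MAC's actual kernel):

```python
import numpy as np

def lut_matvec(W_bits, x, g=4):
    """Table-lookup matvec sketch (T-MAC-style idea, simplified).

    W_bits: (out_dim, in_dim) array of 1-bit weights in {0, 1},
            interpreted as {-1, +1}.
    x:      (in_dim,) float activation vector.
    For each group of g activations, precompute all 2**g signed sums;
    each g-bit weight group then becomes an index into that table,
    replacing multiplications with table lookups.
    """
    out_dim, in_dim = W_bits.shape
    assert in_dim % g == 0
    n_groups = in_dim // g

    # All 2**g sign patterns: bit b set -> +1, clear -> -1.
    signs = np.array([[1 if (idx >> b) & 1 else -1 for b in range(g)]
                      for idx in range(2 ** g)], dtype=x.dtype)     # (2**g, g)

    xg = x.reshape(n_groups, g)                                     # (n_groups, g)
    table = (signs @ xg.T).T                                        # (n_groups, 2**g)

    # Pack each weight group's g bits into an integer table index.
    Wg = W_bits.reshape(out_dim, n_groups, g)
    idx = (Wg << np.arange(g)).sum(axis=2)                          # (out_dim, n_groups)

    # Gather one partial sum per group, then accumulate per output row.
    return table[np.arange(n_groups), idx].sum(axis=1)
```

The point of the design is that the table cost is amortized: the 2**g-entry table is built once per activation vector, after which every output row needs only n_groups lookups and additions, regardless of weight precision.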
Related projects:
- FlashInfer: Kernel Library for LLM Serving ☆1,143 · Updated last week
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆232 · Updated 6 months ago
- To speed up long-context LLM inference, attention is computed via approximate and dynamic sparse calculation, which reduces inference latency by up t… ☆698 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆130 · Updated 3 weeks ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆399 · Updated 2 weeks ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆276 · Updated last week
- This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit… ☆227 · Updated this week
- Efficient AI Inference & Serving ☆452 · Updated 8 months ago
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆646 · Updated this week
- C++ implementation of Qwen-LM ☆531 · Updated 8 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆512 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆342 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆470 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆1,045 · Updated last month
- ☆348 · Updated 2 weeks ago
- LLM Inference benchmark ☆331 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆562 · Updated 2 weeks ago
- Official Implementation of EAGLE-1 and EAGLE-2 ☆749 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ☆342 · Updated 3 weeks ago
- llm-export can export LLM models to ONNX. ☆193 · Updated this week
- LLaMa/RWKV onnx models, quantization and test cases ☆345 · Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆226 · Updated this week
- ☆269 · Updated 5 months ago
- A high-performance inference system for large language models, designed for production environments. ☆370 · Updated last week
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆360 · Updated last month
- Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for t… ☆205 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆507 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆970 · Updated 8 months ago
- (ICML 2024) BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆175 · Updated 3 months ago
- ☆251 · Updated last week