microsoft / T-MAC
Low-bit LLM inference on CPU with lookup table
☆ 583 · Updated this week
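T-MAC's core trick is to replace the multiply-accumulate inner loop of low-bit GEMM with table lookups: partial sums of the activations against every possible weight bit pattern are precomputed once per input, then reused across all output rows. Below is a minimal NumPy sketch of that table-lookup idea for 1-bit weights; the function name and group size `g` are illustrative choices, and this toy does not reflect T-MAC's actual optimized CPU kernels.

```python
import numpy as np

def lut_matvec_1bit(weight_bits, activations, g=4):
    """Toy table-lookup mat-vec for 1-bit (0/1) weights.

    weight_bits: (out_features, in_features) array of 0/1 bits
    activations: (in_features,) float vector
    Returns the same result as weight_bits @ activations, with the
    per-element multiplies replaced by table lookups.
    """
    out_features, in_features = weight_bits.shape
    assert in_features % g == 0
    n_groups = in_features // g

    # Enumerate all 2^g bit patterns a weight group can take.
    patterns = np.array([[(p >> i) & 1 for i in range(g)]
                         for p in range(1 << g)], dtype=np.float64)   # (2^g, g)

    # Precompute, once per input, the partial dot product of each
    # activation group with every pattern; shared by every output row.
    act_groups = activations.reshape(n_groups, g)                     # (n_groups, g)
    lut = act_groups @ patterns.T                                     # (n_groups, 2^g)

    # Pack each group of g weight bits into a table index, then gather
    # and accumulate; no multiplications on the weight side.
    w_groups = weight_bits.reshape(out_features, n_groups, g)
    idx = (w_groups * (1 << np.arange(g))).sum(axis=-1)               # (out, n_groups)
    return lut[np.arange(n_groups), idx].sum(axis=-1)                 # (out,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.integers(0, 2, size=(8, 16))
    x = rng.standard_normal(16)
    assert np.allclose(lut_matvec_1bit(W, x), W @ x)
```

Multi-bit weights are handled by decomposing them into 1-bit planes that reuse the same table, and the real kernels implement the lookups with SIMD shuffle/table instructions, which is where the CPU speedups come from.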
Related projects
Alternatives and complementary repositories for T-MAC
- FlashInfer: Kernel Library for LLM Serving · ☆ 1,452 · Updated this week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving · ☆ 443 · Updated last week
- A throughput-oriented high-performance serving framework for LLMs · ☆ 636 · Updated 2 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… · ☆ 322 · Updated this week
- [NeurIPS'24 Spotlight] To speed up long-context LLM inference, attention is computed with approximate, dynamic sparsity, which reduces in… · ☆ 791 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. · ☆ 420 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. · ☆ 545 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆ 624 · Updated 2 months ago
- Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for t… · ☆ 248 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization · ☆ 348 · Updated 2 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models · ☆ 224 · Updated last month
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … · ☆ 137 · Updated 2 months ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… · ☆ 311 · Updated 2 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). · ☆ 236 · Updated 8 months ago
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations · ☆ 737 · Updated last week
- FlagGems is an operator library for large language models implemented in Triton Language. · ☆ 342 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. · ☆ 1,120 · Updated 3 months ago
- VPTQ, A Flexible and Extreme low-bit quantization algorithm · ☆ 523 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) · ☆ 826 · Updated this week
- For releasing code related to compression methods for transformers, accompanying our publications · ☆ 372 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆ 305 · Updated 3 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM · ☆ 685 · Updated this week
- Efficient AI Inference & Serving · ☆ 458 · Updated 10 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆ 701 · Updated last week
- LLM Inference benchmark · ☆ 350 · Updated 3 months ago
- (ICML 2024) BiLLM: Pushing the Limit of Post-Training Quantization for LLMs · ☆ 195 · Updated 5 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆ 278 · Updated 4 months ago
- Code for the NeurIPS'24 paper: QuaRot, an end-to-end 4-bit inference scheme for large language models. · ☆ 284 · Updated 3 months ago