Tencent / AngelSlim
A model compression toolkit engineered for usability, comprehensiveness, and efficiency.
☆244 Updated this week
Alternatives and similar repositories for AngelSlim
Users interested in AngelSlim are comparing it to the libraries listed below.
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆214 Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆272 Updated 5 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 Updated last month
- A quantization algorithm for LLMs ☆147 Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆250 Updated last year
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆153 Updated 4 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆56 Updated 4 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆110 Updated 9 months ago
- ☆443 Updated 5 months ago
- GLM Series Edge Models ☆156 Updated 6 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 Updated last year
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆652 Updated last month
- 青稞Talk ☆181 Updated last week
- A general 2–8 bit quantization toolbox supporting GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime ☆184 Updated 9 months ago
- ☆206 Updated 8 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆323 Updated last month
- ☆71 Updated this week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆256 Updated 5 months ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆95 Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆338 Updated 10 months ago
- KV cache compression for high-throughput LLM inference ☆148 Updated 11 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆204 Updated last month
- ☆65 Updated 3 months ago
- Efficient Mixture of Experts for LLM Paper List ☆154 Updated 3 months ago
- This repository is the official implementation of "Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE" ☆36 Updated 3 months ago
- ☆78 Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆519 Updated 11 months ago
- ☆153 Updated 10 months ago
- Simplify ONNX models larger than 2 GB ☆70 Updated last year