Microsoft Automatic Mixed Precision Library
☆634 · updated Dec 1, 2025
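MS-AMP extends automatic mixed precision to FP8 weights, gradients, and optimizer states. As a rough orientation only, the sketch below uses the Apex-style entry point described in the project's documentation (`msamp.initialize` with an `opt_level`); treat it as a minimal, hedged example rather than a verified recipe, since exact behavior depends on hardware and package version.

```python
# Minimal sketch of MS-AMP usage (assumes a CUDA GPU with FP8 support and the
# msamp package installed); based on the documented Apex-style API, details may
# vary by version.
import torch
import msamp

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Wrap the model and optimizer so weights, gradients, and optimizer states can
# be kept in low precision (O1/O2 optimization levels).
model, optimizer = msamp.initialize(model, optimizer, opt_level="O2")

for _ in range(10):
    x = torch.randn(32, 1024, device="cuda")
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```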
Alternatives and similar repositories for MS-AMP
Users interested in MS-AMP are comparing it to the libraries listed below.
- Examples for the MS-AMP package. ☆30 · updated Jul 17, 2025
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆226 · updated Aug 1, 2024
- Ring attention implementation with flash attention ☆996 · updated Sep 10, 2025
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆262 · updated Aug 9, 2025
- A PyTorch native platform for training generative AI models ☆5,162 · updated this week
- ☆169 · updated Mar 9, 2023
- PyTorch native quantization and sparsity for training and inference ☆2,730 · updated Mar 14, 2026
- Zero Bubble Pipeline Parallelism ☆451 · updated May 7, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆649 · updated Jan 15, 2026
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆222 · updated Dec 15, 2023
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,233 · updated Aug 14, 2025
- Minimalistic large language model 3D-parallelism training ☆2,617 · updated Feb 19, 2026
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆276 · updated Jul 16, 2025
- ☆157 · updated Jun 22, 2023
- Pipeline Parallelism for PyTorch ☆785 · updated Aug 21, 2024
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · updated Aug 19, 2024
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,621 · updated Jul 12, 2024
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,041 · updated Sep 4, 2024
- Ongoing research training transformer models at scale ☆15,744 · updated this week
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · updated Mar 3, 2026
- Transformer-related optimization, including BERT, GPT ☆6,397 · updated Mar 27, 2024
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,266 · updated Mar 27, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆752 · updated Aug 6, 2025
- Large Context Attention ☆769 · updated Oct 13, 2025
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · updated Nov 3, 2023
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · updated Aug 28, 2025
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆490 · updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,386 · updated Mar 11, 2026
- Tile primitives for speedy kernels ☆3,232 · updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,719 · updated Jun 25, 2024
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆976 · updated Mar 6, 2026
- Fast and memory-efficient exact attention ☆22,832 · updated this week
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆323 · updated Mar 4, 2025
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆713 · updated Aug 13, 2024
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,463 · updated Jul 17, 2025
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆818 · updated Mar 6, 2025
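Many of the entries above revolve around the FP8 formats (E4M3/E5M2) used on Hopper-class GPUs. As a library-agnostic illustration of what an FP8 cast does, plain PyTorch (2.1+) exposes these dtypes directly; the snippet below is illustrative only and is not the API of any repository listed here.

```python
# Illustrative only: round-trip a tensor through the E4M3 FP8 format using
# plain PyTorch (>= 2.1).
import torch

x = torch.randn(8, 8, dtype=torch.float32)
x_fp8 = x.to(torch.float8_e4m3fn)   # 4 exponent bits, 3 mantissa bits
x_back = x_fp8.to(torch.float32)

# The difference shows the rounding error introduced by the FP8 cast.
print("max abs error:", (x - x_back).abs().max().item())
```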