pytorch / ao
PyTorch native quantization and sparsity for training and inference
☆726 · Updated this week
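For context, the sketch below shows one common way a library like pytorch/ao (torchao) is used: post-training, weight-only quantization of an `nn.Module` in place. The `quantize_` and `int8_weight_only` entry points are assumptions about one torchao release; exact names and configs may differ between versions.

```python
# Minimal sketch: int8 weight-only post-training quantization with torchao.
# Assumption: `quantize_` / `int8_weight_only` reflect one torchao release;
# the exact entry points may differ in other versions.
import torch
import torch.nn as nn
from torchao.quantization import quantize_, int8_weight_only

# A small example model; any module containing nn.Linear layers works.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).eval()

# Replace Linear weights with int8 weight-only quantized versions, in place.
quantize_(model, int8_weight_only())

# Inference proceeds through the same module as before quantization.
with torch.inference_mode():
    out = model(torch.randn(2, 1024))
```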
Related projects:
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆452 · Updated last week
- A PyTorch quantization backend for optimum ☆758 · Updated this week
- Pipeline Parallelism for PyTorch ☆708 · Updated 3 weeks ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆659 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆560 · Updated 2 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,111 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆1,138 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,131 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python. ☆646 · Updated 3 weeks ago
- Puzzles for learning Triton ☆966 · Updated this week
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, sparsity, distillat… ☆434 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆451 · Updated last month
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆407 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆505 · Updated 3 weeks ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆558 · Updated 5 months ago
- Tile primitives for speedy kernels ☆1,489 · Updated this week
- ☆1,164 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆210 · Updated last month
- ☆478 · Updated 2 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆470 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆318 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,811 · Updated this week
- A multi-level tensor algebra superoptimizer ☆314 · Updated this week
- A native PyTorch library for large model training ☆1,544 · Updated this week
- Transformers with Arbitrarily Large Context ☆613 · Updated last month
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆399 · Updated 2 weeks ago
- ☆247 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆250 · Updated this week
- Helpful tools and examples for working with flex-attention ☆341 · Updated last month
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆994 · Updated 5 months ago