This repository contains the experimental PyTorch native float8 training UX (☆226, updated Aug 1, 2024).
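For context on what the repositories below have in common: float8 training stacks round fp32/bf16 tensors down to an 8-bit floating-point format such as E4M3 (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, max finite value 448) before the matmuls. A minimal pure-Python sketch of E4M3FN round-to-nearest; `quantize_e4m3` is a hypothetical helper for illustration, not the API of this repository:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest float8 E4M3FN value (simplified sketch).

    E4M3FN: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits;
    no infinities, so out-of-range magnitudes saturate to 448.
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 448.0)        # saturate: E4M3FN has no inf
    e = math.floor(math.log2(mag))  # binade of the input
    e = max(e, -6)                  # below 2**-6 we are in the subnormal range
    step = 2.0 ** (e - 3)           # 3 mantissa bits -> spacing 2**(e-3)
    q = round(mag / step) * step    # round to nearest representable value
    return sign * min(q, 448.0)

# 0.3 is not representable; its nearest E4M3 neighbors are
# 0.28125 and 0.3125, and 0.3125 is closer.
print(quantize_e4m3(0.3))      # 0.3125
print(quantize_e4m3(1000.0))   # 448.0 (saturated)
```

Real implementations (e.g. `torch.float8_e4m3fn` casts) also carry a per-tensor scale so the value range of the tensor maps onto the narrow E4M3 range before rounding; the sketch above omits scaling for brevity.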
Alternatives and similar repositories for float8_experimental
Users interested in float8_experimental are comparing it to the libraries listed below.
- Microsoft Automatic Mixed Precision Library (☆636, updated Dec 1, 2025)
- PyTorch native quantization and sparsity for training and inference (☆2,696, updated Feb 22, 2026)
- Applied AI experiments and examples for PyTorch (☆318, updated Aug 22, 2025)
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… (☆164, updated Jan 12, 2026)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,170, updated Feb 21, 2026)
- Fast low-bit matmul kernels in Triton (☆433, updated Feb 1, 2026)
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code (☆10, updated Aug 29, 2023)
- Pipeline Parallelism for PyTorch (☆785, updated Aug 21, 2024)
- Debug print operator for CUDA graph debugging (☆14, updated Aug 2, 2024)
- Ring attention implementation with flash attention (☆986, updated Sep 10, 2025)
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters (☆922, updated this week)
- Tile primitives for speedy kernels (☆3,183, updated this week)
- A PyTorch native platform for training generative AI models (☆5,098, updated this week)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆96, updated Feb 20, 2026)
- FP16×INT4 LLM inference kernel that achieves near-ideal ~4× speedups up to medium batch sizes of 16–32 tokens (☆1,018, updated Sep 4, 2024)
- A place to store reusable transformer components of my own creation or found on the interwebs (☆73, updated Feb 17, 2026)
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components (☆219, updated Feb 16, 2026)
- train with kittens! (☆63, updated Oct 25, 2024)
- Distributed Compiler based on Triton for Parallel Systems (☆1,361, updated Feb 13, 2026)
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) (☆478, updated Feb 3, 2026)
- TensorDict is a PyTorch-dedicated tensor container (☆1,009, updated this week)
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile (☆790, updated Oct 13, 2025)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster (☆1,075, updated Apr 17, 2024)
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… (☆15, updated Oct 16, 2023)
- Example of applying CUDA graphs to LLaMA-v2 (☆12, updated Aug 25, 2023)
- IntLLaMA: A fast and light quantization solution for LLaMA (☆18, updated Jul 21, 2023)
- A PyTorch quantization backend for Optimum (☆1,025, updated Nov 21, 2025)
- PyTorch emulation library for Microscaling (MX)-compatible data formats (☆343, updated Jun 18, 2025)
- ByteDance PyTorch Distributed for hyperscale training of LLMs and RL (☆938, updated Nov 27, 2025)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton (☆595, updated Aug 12, 2025)
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training (☆259, updated Aug 9, 2025)
- Official PyTorch implementation of Self-emerging Token Labeling (☆35, updated Mar 27, 2024)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python (☆6,184, updated Aug 22, 2025)