This repository contains the experimental PyTorch native float8 training UX
☆226 · Updated Aug 1, 2024
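Float8 training recipes like the one this repo explores typically use a "scaled cast": pick a per-tensor scale from the tensor's absolute max so its range fills the float8 e4m3 format (max finite value 448), multiply, round to the 3-mantissa-bit grid, then divide the scale back out. Below is a minimal pure-Python sketch of that idea; it is an illustrative emulation under stated assumptions (saturating rounding, no subnormals), not float8_experimental's actual API or kernels.

```python
import math

E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def amax_scale(xs):
    """Dynamic scaling: choose a scale so the tensor's abs-max maps to E4M3_MAX."""
    amax = max(abs(x) for x in xs)
    return E4M3_MAX / amax if amax > 0 else 1.0

def quantize_e4m3(x):
    """Round x to the nearest e4m3-representable value (saturating; subnormals ignored)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = min(abs(x), E4M3_MAX)       # saturate instead of producing inf
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= m < 1
    m = round(m * 16) / 16          # keep 4 significant bits: 1 implicit + 3 mantissa
    return sign * math.ldexp(m, e)

xs = [0.002, -0.017, 0.031, -0.004]
s = amax_scale(xs)
q = [quantize_e4m3(v * s) / s for v in xs]  # cast to fp8 and back at full range
```

In practice, recipes distinguish dynamic scaling (recompute amax on every cast, as above) from delayed scaling (derive the scale from a running history of recent amax values to avoid an extra pass over the tensor).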
Alternatives and similar repositories for float8_experimental
Users interested in float8_experimental are comparing it to the libraries listed below.
- Microsoft Automatic Mixed Precision Library ☆634 · Updated Dec 1, 2025
- PyTorch native quantization and sparsity for training and inference ☆2,730 · Updated Mar 14, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- Applied AI experiments and examples for PyTorch ☆319 · Updated Aug 22, 2025
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Updated Jan 12, 2026
- Fast low-bit matmul kernels in Triton ☆438 · Updated Feb 1, 2026
- Pipeline Parallelism for PyTorch ☆785 · Updated Aug 21, 2024
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Updated Aug 29, 2023
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆95 · Updated Feb 20, 2026
- Tile primitives for speedy kernels ☆3,232 · Updated this week
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆12 · Updated Jul 13, 2023
- Ring attention implementation with flash attention ☆996 · Updated Sep 10, 2025
- Distributed compiler based on Triton for parallel systems ☆1,386 · Updated Mar 11, 2026
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters ☆932 · Updated this week
- Debug print operator for cudagraph debugging ☆14 · Updated Aug 2, 2024
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16–32 tokens ☆1,041 · Updated Sep 4, 2024
- A place to store reusable transformer components of my own creation or found on the interwebs ☆74 · Updated Mar 11, 2026
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster ☆1,077 · Updated Apr 17, 2024
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile ☆797 · Updated Oct 13, 2025
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated Jul 21, 2023
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated Oct 16, 2023
- A library to analyze PyTorch traces ☆474 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆487 · Updated this week
- A PyTorch quantization backend for optimum ☆1,032 · Updated Nov 21, 2025
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python ☆6,187 · Updated Aug 22, 2025
- TensorDict is a PyTorch-dedicated tensor container ☆1,015 · Updated this week
- Examples for the MS-AMP package ☆30 · Updated Jul 17, 2025
- Byted PyTorch Distributed for hyperscale training of LLMs and RL ☆1,000 · Updated Mar 3, 2026
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆335 · Updated Jul 2, 2024
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆347 · Updated Jun 18, 2025
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated Feb 17, 2025
- This repository contains integer operators on GPUs for PyTorch ☆237 · Updated Sep 29, 2023