meta-pytorch / float8_experimental
This repository contains the experimental PyTorch native float8 training UX
☆227 · Updated last year
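For context, the training UX this repo provides is a module swap: `torch.nn.Linear` layers are replaced with float8-aware linears before running an otherwise standard training loop. Below is a minimal sketch of that workflow; the helper names (`swap_linear_with_float8_linear`, `Float8Linear`) follow the README-era API and are assumptions here, since the exact entry points changed over time and the code has since been folded into `torchao.float8`.

```python
import torch
import torch.nn as nn

# Assumed import paths from the README-era float8_experimental API; the
# project has since moved into torchao.float8, where the equivalent entry
# point is a convert-to-float8-training helper.
from float8_experimental.float8_linear import Float8Linear
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear

# Toy model whose nn.Linear layers are candidates for float8 training.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).cuda()

# Swap every nn.Linear for a Float8Linear so the forward/backward matmuls
# run in float8 with scaling, while optimizer state stays in high precision.
swap_linear_with_float8_linear(model, Float8Linear)

# torch.compile is typically layered on top to fuse the scaling/casting ops.
model = torch.compile(model)

# Training then proceeds as a normal PyTorch loop (forward, loss, backward, step).
```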
Alternatives and similar repositories for float8_experimental
Users interested in float8_experimental are comparing it to the repositories listed below.
- Applied AI experiments and examples for PyTorch · ☆309 · Updated 3 months ago
- Fast low-bit matmul kernels in Triton · ☆402 · Updated 3 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ☆216 · Updated 2 weeks ago
- Triton-based implementation of Sparse Mixture of Experts. · ☆253 · Updated 2 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ☆271 · Updated 2 weeks ago
- ☆257 · Updated last week
- ☆159 · Updated 2 years ago
- ring-attention experiments · ☆160 · Updated last year
- Cataloging released Triton kernels. · ☆277 · Updated 3 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆302 · Updated this week
- A library for unit scaling in PyTorch · ☆132 · Updated 5 months ago
- Collection of kernels written in the Triton language · ☆173 · Updated 8 months ago
- Extensible collectives library in Triton · ☆91 · Updated 8 months ago
- ☆338 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆584 · Updated 4 months ago
- ☆113 · Updated last year
- ☆121 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. · ☆132 · Updated 6 months ago
- Implementation of a Transformer, but completely in Triton · ☆277 · Updated 3 years ago
- A Quirky Assortment of CuTe Kernels · ☆681 · Updated 3 weeks ago
- A bunch of kernels that might make stuff slower 😉 · ☆65 · Updated last week
- Load compute kernels from the Hub · ☆348 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). · ☆272 · Updated 4 months ago
- A library to analyze PyTorch traces. · ☆443 · Updated 3 weeks ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. · ☆658 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆264 · Updated last month
- ☆177 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆214 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ☆548 · Updated 6 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… · ☆161 · Updated 2 months ago