lucidrains / triton-transformer
Implementation of a Transformer, but completely in Triton
☆270 · Updated 3 years ago
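To give a sense of what "completely in Triton" means, below is a minimal, hedged sketch of a row-wise softmax kernel written with OpenAI's Triton JIT. This is illustrative only and is not code from the triton-transformer repository; the kernel assumes a contiguous (n_rows, n_cols) CUDA tensor.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # each program instance handles one row of the (contiguous) input matrix
    row = tl.program_id(0)
    offsets = tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_cols
    x = tl.load(in_ptr + row * n_cols + offsets, mask=mask, other=-float('inf'))
    x = x - tl.max(x, axis=0)          # subtract row max for numerical stability
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + offsets, y, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    # block size must cover a full row and be a power of two
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](out, x, n_cols, BLOCK_SIZE=BLOCK_SIZE)
    return out

x = torch.randn(128, 512, device='cuda')
print(torch.allclose(softmax(x), torch.softmax(x, dim=-1), atol=1e-4))
```

A full Transformer built this way fuses attention, layernorm, and feedforward operations into kernels of this kind rather than composing them from separate PyTorch ops.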
Alternatives and similar repositories for triton-transformer
Users interested in triton-transformer are comparing it to the libraries listed below.
- Implementation of Flash Attention in Jax ☆213 · Updated last year
- Torch Distributed Experimental ☆116 · Updated 11 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆158 · Updated 3 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- ☆251 · Updated 11 months ago
- ☆112 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- A library for unit scaling in PyTorch ☆125 · Updated 7 months ago
- ☆320 · Updated 2 weeks ago
- ☆358 · Updated last year
- ☆157 · Updated last year
- ☆186 · Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆205 · Updated this week
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- Pipeline Parallelism for PyTorch ☆769 · Updated 10 months ago
- PyTorch RFCs (experimental) ☆133 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆561 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆255 · Updated this week
- Block-sparse primitives for PyTorch ☆157 · Updated 4 years ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆405 · Updated 3 weeks ago
- ☆147 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆299 · Updated last month
- A library to analyze PyTorch traces. ☆391 · Updated this week
- Applied AI experiments and examples for PyTorch ☆281 · Updated last month
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆235 · Updated 2 years ago
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- Library for 8-bit optimizers and quantization routines. ☆716 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year