lucidrains / triton-transformer
Implementation of a Transformer, but completely in Triton
☆274 · Updated 3 years ago
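For context on what writing a model "completely in Triton" involves, below is a minimal Triton kernel in the same tile-based style the repository's Transformer kernels are built from. This is an illustrative sketch, not code from the repo; `add_kernel` and `add` are made-up names.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide tile of the input.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final tile
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element tile
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

The same pattern of program IDs, masked loads, and masked stores scales up to the fused matmul, softmax, and layernorm kernels a full Transformer needs.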
Alternatives and similar repositories for triton-transformer
Users interested in triton-transformer are comparing it to the libraries listed below
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated 2 months ago
- ☆118 · Updated last year
- Torch Distributed Experimental ☆117 · Updated last year
- ☆330 · Updated this week
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- ☆251 · Updated last year
- ☆159 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆575 · Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆209 · Updated last week
- jax-triton contains integrations between JAX and OpenAI Triton ☆416 · Updated last week
- Pipeline Parallelism for PyTorch ☆779 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- ☆188 · Updated last week
- Applied AI experiments and examples for PyTorch ☆295 · Updated 3 weeks ago
- PyTorch RFCs (experimental) ☆135 · Updated 3 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- ☆168 · Updated last year
- ☆149 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- Triton-based implementation of Sparse Mixture of Experts. ☆238 · Updated 2 weeks ago
- ☆361 · Updated last year
- A library to analyze PyTorch traces. ☆406 · Updated 3 weeks ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆537 · Updated 3 months ago
- ☆176 · Updated last year
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" (see the sketch after this list) ☆379 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated 2 weeks ago
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- ☆110 · Updated last year
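Several entries above (the O(sqrt(n)) attention library, the "Self-attention Does Not Need O(n²) Memory" implementation, and the Flash Attention ports) share one core trick: iterate over key/value chunks with an online softmax so the full attention matrix is never materialized. A minimal single-head PyTorch sketch of that recurrence, assuming `q` of shape `(n, d)` and `k`, `v` of shape `(m, d)`; `chunked_attention` and `chunk_size` are illustrative names, not any of these libraries' APIs.

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    # Processes keys/values in chunks, keeping a running max and normalizer
    # per query row, so the (n, m) score matrix never exists all at once.
    scale = q.shape[-1] ** -0.5
    n = q.shape[0]
    acc = torch.zeros_like(q)  # running weighted sum of values
    row_max = torch.full((n, 1), float('-inf'), device=q.device, dtype=q.dtype)
    row_sum = torch.zeros((n, 1), device=q.device, dtype=q.dtype)

    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        scores = (q @ k_c.T) * scale  # (n, chunk) block of logits
        block_max = scores.amax(dim=-1, keepdim=True)
        new_max = torch.maximum(row_max, block_max)
        correction = torch.exp(row_max - new_max)  # rescale earlier partial sums
        p = torch.exp(scores - new_max)
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        acc = acc * correction + p @ v_c
        row_max = new_max

    return acc / row_sum
```

Peak memory for the score block is O(n · chunk_size) instead of O(n · m); Flash Attention fuses this same recurrence into a single GPU kernel so the intermediate blocks never leave on-chip memory.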