lucidrains / triton-transformer
Implementation of a Transformer, but completely in Triton
☆273 · Updated 3 years ago
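"Completely in Triton" here means the model's kernels are written with Triton's `@triton.jit` JIT decorator rather than composed from stock PyTorch ops. For context, below is a minimal sketch of what a Triton kernel looks like: a hypothetical vector add, not code from this repository, assuming `triton` is installed and a CUDA-capable GPU is available.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Illustrative wrapper (hypothetical, not this repo's API):
    # launch a 1D grid with one program per block of elements.
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

A full transformer in this style replaces attention, feedforward, and normalization with kernels of this shape, fusing loads, compute, and stores into single GPU passes.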
Alternatives and similar repositories for triton-transformer
Users interested in triton-transformer are comparing it to the libraries listed below.
- Implementation of Flash Attention in Jax ☆215 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- ☆323 · Updated last month
- ☆114 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆158 · Updated last month
- Torch Distributed Experimental ☆117 · Updated last year
- ☆187 · Updated this week
- A library for unit scaling in PyTorch ☆128 · Updated 3 weeks ago
- ☆158 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆206 · Updated last week
- ☆251 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆565 · Updated this week
- ☆107 · Updated 11 months ago
- ☆361 · Updated last year
- Applied AI experiments and examples for PyTorch ☆289 · Updated 2 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆411 · Updated last month
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆258 · Updated last week
- Triton-based implementation of Sparse Mixture of Experts. ☆230 · Updated 8 months ago
- PyTorch RFCs (experimental) ☆133 · Updated 2 months ago
- ☆147 · Updated 2 years ago
- ☆227 · Updated last week
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- ☆171 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- Fast low-bit matmul kernels in Triton ☆338 · Updated last week
- ☆162 · Updated last year
- JAX bindings for Flash Attention v2 ☆90 · Updated last week
- Load compute kernels from the Hub ☆220 · Updated this week