lucidrains / triton-transformer
Implementation of a Transformer, but completely in Triton
☆275 · Updated 3 years ago
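For context on what "completely in Triton" means in practice, below is a minimal, self-contained kernel in the standard Triton tutorial style (a fused element-wise add), illustrative of the kind of building blocks such a project is composed of. This is a sketch, not code from this repository:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one BLOCK_SIZE-wide slice of the tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against reading past the tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch one kernel instance per BLOCK_SIZE elements.
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```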
Alternatives and similar repositories for triton-transformer
Users interested in triton-transformer are comparing it to the libraries listed below.
- Implementation of Flash Attention in Jax ☆218 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- Torch Distributed Experimental ☆117 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated 2 weeks ago
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆213 · Updated last week
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆216 · Updated 2 years ago
- PyTorch RFCs (experimental) ☆135 · Updated 4 months ago
- Pipeline Parallelism for PyTorch ☆779 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆576 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆268 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆296 · Updated last month
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆183 · Updated 2 years ago
- Block-sparse primitives for PyTorch ☆160 · Updated 4 years ago
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- A library to analyze PyTorch traces. ☆414 · Updated last week
- jax-triton contains integrations between JAX and OpenAI Triton ☆426 · Updated last month
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 3 months ago
- Research and development for optimizing transformers ☆130 · Updated 4 years ago
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆381 · Updated 2 years ago
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago