lucidrains / triton-transformer
Implementation of a Transformer, but completely in Triton
☆273 · Updated 3 years ago
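For context, "completely in Triton" means the transformer's kernels are written as Triton JIT functions rather than dispatched to cuBLAS/cuDNN through PyTorch ops. Below is a minimal, hypothetical sketch of that style: a generic fused row-softmax kernel. It is not code from this repository; the function names and layout assumptions are illustrative only.

```python
# Illustrative sketch only — not from lucidrains/triton-transformer.
# Assumes a contiguous, row-major 2D CUDA tensor.
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one row of the input matrix.
    row = tl.program_id(0)
    offsets = tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_cols
    x = tl.load(in_ptr + row * n_cols + offsets, mask=mask, other=-float("inf"))
    # Numerically stable softmax: subtract the row max before exponentiating.
    x = x - tl.max(x, axis=0)
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + offsets, y, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    # BLOCK_SIZE must be a power of two covering the whole row.
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](out, x, n_cols, BLOCK_SIZE=BLOCK_SIZE)
    return out
```

Calling `softmax(x)` on a CUDA tensor launches one program per row; a full transformer-in-Triton extends the same pattern to matmul, attention, and normalization kernels.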
Alternatives and similar repositories for triton-transformer
Users interested in triton-transformer are comparing it to the libraries listed below.
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- ☆118 · Updated last year
- Torch Distributed Experimental ☆117 · Updated last year
- A library for unit scaling in PyTorch ☆129 · Updated last month
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆158 · Updated 2 months ago
- ☆188 · Updated 3 weeks ago
- ☆159 · Updated last year
- ☆324 · Updated 3 weeks ago
- ☆251 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- PyTorch RFCs (experimental) ☆133 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆208 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆568 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆260 · Updated last month
- Applied AI experiments and examples for PyTorch ☆290 · Updated 2 months ago
- ☆361 · Updated last year
- jax-triton contains integrations between JAX and OpenAI Triton ☆413 · Updated 2 months ago
- ☆110 · Updated 11 months ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆180 · Updated last month
- ☆148 · Updated 2 years ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆233 · Updated 8 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆215 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆300 · Updated 2 months ago
- Block-sparse primitives for PyTorch ☆158 · Updated 4 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆236 · Updated 2 years ago
- ☆232 · Updated this week
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆774 · Updated last year