lucidrains / rotary-embedding-torch
Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch
☆ 565 · Updated last month
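For orientation, here is a minimal, self-contained sketch of what rotary embeddings do, following the RoFormer formulation: each query/key vector is rotated in 2D feature subspaces by position-dependent angles, so the attention dot product between two positions depends only on their relative offset. This is an illustrative reimplementation under that assumption, not this library's API; the helper names (`rotate_half`, `apply_rotary`) are hypothetical.

```python
# Minimal RoPE sketch (illustrative; helper names are made up, not this repo's API).
import torch

def rotate_half(x):
    # Pair feature i with feature i + dim/2 and rotate each pair by 90 degrees:
    # (x1, x2) -> (-x2, x1). RoFormer's interleaved pairing is an equivalent layout.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary(x, base=10000):
    # x: (..., seq_len, dim), dim even. Angle for pair i at position t: t * base^(-2i/dim).
    seq_len, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))  # (dim/2,)
    t = torch.arange(seq_len).float()
    freqs = torch.einsum('i,j->ij', t, inv_freq)                        # (seq_len, dim/2)
    emb = torch.cat((freqs, freqs), dim=-1)                             # (seq_len, dim)
    # Standard rotation identity: x * cos(theta) + rotate_half(x) * sin(theta).
    return x * emb.cos() + rotate_half(x) * emb.sin()

# Rotate queries and keys before the attention dot product; values stay untouched.
q = torch.randn(1, 8, 1024, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 1024, 64)
q, k = apply_rotary(q), apply_rotary(k)
```

Because only relative offsets survive the dot product, no positional information needs to be added to the token embeddings themselves.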
Related projects
Alternatives and complementary repositories for rotary-embedding-torch
- An implementation of local windowed attention for language modeling ☆ 383 · Updated 2 months ago
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆ 359 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆ 291 · Updated 4 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022); a bias sketch follows this list ☆ 506 · Updated last year
- Transformer based on an attention variant with linear complexity with respect to sequence length ☆ 695 · Updated 6 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆ 637 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆ 511 · Updated 2 weeks ago
- Helpful tools and examples for working with flex-attention ☆ 460 · Updated 2 weeks ago
- Sequence modeling with Mega ☆ 297 · Updated last year
- Annotated version of the Mamba paper ☆ 455 · Updated 8 months ago
- Implementation of Block Recurrent Transformer, in PyTorch ☆ 214 · Updated 2 months ago
- Implementation of Linformer for PyTorch ☆ 255 · Updated 10 months ago
- Implementation of Recurrent Memory Transformer (NeurIPS 2022 paper), in PyTorch ☆ 393 · Updated 8 months ago
- Neighborhood Attention Extension. Bringing attention to a neighborhood near you! ☆ 363 · Updated last week
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆ 242 · Updated 6 months ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆ 347 · Updated last year
- A PyTorch implementation of Perceiver, Perceiver IO, and Perceiver AR, with PyTorch Lightning scripts for distributed training ☆ 437 · Updated 10 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆ 474 · Updated 2 weeks ago
- Implementation of fused cosine-similarity attention in the same style as Flash Attention ☆ 207 · Updated last year
- Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch ☆ 224 · Updated 2 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆ 277 · Updated last month
- Long Range Arena for Benchmarking Efficient Transformers ☆ 727 · Updated 10 months ago
- Implementation of Bit Diffusion, Hinton's group's attempt at discrete denoising diffusion, in PyTorch ☆ 332 · Updated last year
- Efficient implementations of state-of-the-art linear attention models, in PyTorch and Triton ☆ 1,320 · Updated this week
- Implementation of the proposed minGRU in PyTorch ☆ 228 · Updated 2 weeks ago
- An implementation of 1D, 2D, and 3D positional encoding in PyTorch and TensorFlow ☆ 545 · Updated 2 weeks ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆ 974 · Updated 6 months ago
- Rotary Transformer ☆ 811 · Updated 2 years ago
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ☆ 392 · Updated 8 months ago
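As promised in the ALiBi entry above, here is a minimal sketch of its attention bias, assuming the ICLR 2022 formulation: each head adds a penalty to the attention logits that grows linearly with query-key distance, with geometrically decaying per-head slopes. The function names are illustrative, not the linked repo's API.

```python
# Minimal ALiBi bias sketch (illustrative; names are made up, not the linked repo's API).
import torch

def alibi_slopes(n_heads):
    # Slope schedule from the ALiBi paper (exact for power-of-two head counts):
    # head h gets slope 2^(-8h / n_heads), h = 1..n_heads.
    start = 2.0 ** (-8.0 / n_heads)
    return torch.tensor([start ** (h + 1) for h in range(n_heads)])

def alibi_bias(n_heads, seq_len):
    pos = torch.arange(seq_len)
    rel = pos[None, :] - pos[:, None]                    # rel[i, j] = j - i (<= 0 for past keys)
    return alibi_slopes(n_heads)[:, None, None] * rel    # (heads, seq_len, seq_len)

# Added to the attention logits before softmax, alongside the causal mask:
# logits = q @ k.transpose(-2, -1) / head_dim**0.5 + alibi_bias(heads, seq_len)
```

Like rotary embeddings, ALiBi encodes only relative position, which is what lets both methods extrapolate beyond the training sequence length.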