lucidrains / rotary-embedding-torch
Implementation of Rotary Embeddings, from the RoFormer paper, in Pytorch
☆736 · Updated 3 weeks ago
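For readers evaluating the library, here is a minimal usage sketch in the style of its README. The `RotaryEmbedding` class and `rotate_queries_or_keys` method follow the documented interface, but treat this as a sketch rather than a definitive reference, since exact signatures may differ between versions:

```python
import torch
from rotary_embedding_torch import RotaryEmbedding

# rotary embeddings for a head dimension of 64; per the README,
# the rotary dim is typically set to half the head dimension
rotary_emb = RotaryEmbedding(dim = 32)

# mock queries and keys: (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)

# rotate queries and keys before the attention dot product;
# positions are inferred from the sequence dimension
q = rotary_emb.rotate_queries_or_keys(q)
k = rotary_emb.rotate_queries_or_keys(k)
```

Rotating queries and keys by position-dependent angles before the attention dot product injects relative position information without adding learned parameters, which is the core idea of the RoFormer paper.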
Alternatives and similar repositories for rotary-embedding-torch
Users interested in rotary-embedding-torch are comparing it to the libraries listed below:
- An implementation of local windowed attention for language modeling ☆472 · Updated last month
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆794 · Updated last year
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated 2 years ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆601 · Updated 8 months ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆792 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆359 · Updated last year
- Helpful tools and examples for working with flex-attention ☆938 · Updated last week
- Implementation of Linformer for Pytorch ☆295 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆539 · Updated last year
- Annotated version of the Mamba paper ☆487 · Updated last year
- [ICLR 2025 Spotlight 🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆569 · Updated 6 months ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆369 · Updated last year
- Implementation of Block Recurrent Transformer, in Pytorch ☆220 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆536 · Updated 3 months ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆797 · Updated 2 months ago
- A PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR, with PyTorch Lightning scripts for distributed training ☆487 · Updated last year
- Sequence modeling with Mega ☆297 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆499 · Updated 2 months ago
- Implementation of Recurrent Memory Transformer (NeurIPS 2022) in Pytorch ☆413 · Updated 7 months ago
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in Pytorch ☆2,153 · Updated 8 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆359 · Updated last week
- Pytorch library for fast transformer implementations ☆1,727 · Updated 2 years ago
- Muon is an optimizer for hidden layers in neural networks ☆1,547 · Updated last month
- Long Range Arena for Benchmarking Efficient Transformers ☆762 · Updated last year
- Fast Multi-dimensional Sparse Attention ☆587 · Updated last week
- Implementation of the sparse attention pattern proposed by the DeepSeek team in their "Native Sparse Attention" paper ☆725 · Updated last week
- Optimizer, LR scheduler, and loss function collections in PyTorch