Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch
☆806, updated Jan 30, 2026
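rotary-embedding-torch packages the rotary position embedding (RoPE) scheme from the RoFormer paper, in which query and key vectors are rotated by position-dependent angles before the attention dot product, so that the resulting score depends only on the relative offset between positions. Below is a minimal hand-rolled sketch of that idea in PyTorch; the function name `rotary_embedding` and its signature are illustrative assumptions for this page, not the library's actual API.

```python
# Minimal sketch of rotary position embeddings (RoPE), assuming the standard
# pairwise-rotation formulation from RoFormer. Illustrative only; this is not
# the rotary-embedding-torch API.
import torch

def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate feature pairs of x (shape (..., seq_len, dim)) by position-dependent angles."""
    seq_len, dim = x.shape[-2], x.shape[-1]
    assert dim % 2 == 0, "feature dimension must be even"
    # angular frequency per feature pair: theta_i = base^(-2i / dim)
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=x.dtype, device=x.device) / dim))
    # angle per (position, pair): shape (seq_len, dim // 2)
    angles = torch.arange(seq_len, dtype=x.dtype, device=x.device)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    # apply the 2D rotation to each adjacent (x1, x2) pair
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Queries and keys are rotated before the attention dot product,
# so q · k depends only on relative position.
q = rotary_embedding(torch.randn(1, 8, 128, 64))  # (batch, heads, seq, dim)
k = rotary_embedding(torch.randn(1, 8, 128, 64))
```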
Alternatives and similar repositories for rotary-embedding-torch
Users that are interested in rotary-embedding-torch are comparing it to the libraries listed below.
- An implementation of local windowed attention for language modeling (☆498, updated Jul 16, 2025)
- A concise but complete full-attention transformer with a set of promising experimental features from various papers (☆5,826, updated this week)
- Vector (and Scalar) Quantization, in Pytorch (☆3,896, updated Mar 30, 2026)
- Axial Positional Embedding for Pytorch (☆84, updated Feb 25, 2025)
- Implementation of Nyström Self-attention, from the paper Nyströmformer (☆145, updated Mar 24, 2025)
- Implementation of Agent Attention in Pytorch (☆93, updated Jul 10, 2024)
- Implementation of GateLoop Transformer in Pytorch and Jax (☆92, updated Jun 18, 2024)
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms, purportedly better than Adam(W), in Pytorch (☆2,182, updated Nov 27, 2024)
- Rotary Transformer (☆1,104, updated Mar 21, 2022)
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch (☆548, updated May 16, 2025)
- Fast and memory-efficient exact attention (☆23,344, updated this week)
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT, "Rotary Position Embedding for Vision Transformer" (☆457, updated Oct 29, 2025)
- Implementation of fused cosine similarity attention in the same style as Flash Attention (☆220, updated Feb 13, 2023)
- A Transformer made of Rotation-equivariant Attention using Vector Neurons (☆101, updated Aug 1, 2023)
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts (☆122, updated Oct 17, 2024)
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ (☆57, updated Jan 5, 2023)
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (☆391, updated Jul 18, 2023)
- 🚀 Efficient implementations for emerging model architectures (☆4,878, updated this week)
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch (☆345, updated Apr 2, 2025)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) (☆9,456, updated Apr 9, 2026)
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" (☆8,503, updated May 31, 2024)
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network (☆226, updated Jun 2, 2024)
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model (☆645, updated Dec 19, 2025)
- Implementation of the proposed minGRU in Pytorch (☆322, updated Dec 10, 2025)
- Structured state space sequence models (☆2,883, updated Jul 17, 2024)
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing (☆49, updated Jan 27, 2022)
- Implementation of a single layer of the MMDiT, proposed in Stable Diffusion 3, in Pytorch (☆515, updated Jan 18, 2026)
- Implementation of Block Recurrent Transformer - Pytorch (☆224, updated Aug 20, 2024)
- Implementation of Flash Attention in Jax (☆228, updated Mar 1, 2024)
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (☆49, updated Apr 6, 2022)
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch (☆231, updated Sep 6, 2024)
- An implementation of Performer, a linear attention-based transformer, in Pytorch (☆1,177, updated Feb 2, 2022)
- Implementation of the convolutional module from the Conformer paper, for use in Transformers (☆433, updated May 17, 2023)
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k (☆47, updated Jul 16, 2023)
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT (☆227, updated Mar 25, 2026)
- Hackable and optimized Transformers building blocks, supporting a composable construction (☆10,417, updated Mar 30, 2026)
- Code for the ALiBi method for transformer language models (ICLR 2022) (☆555, updated Oct 30, 2023)
- Helpful tools and examples for working with flex-attention (☆1,174, updated this week)
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch (☆1,200, updated Aug 22, 2023)