Implementation of Rotary Embeddings, from the RoFormer paper, in Pytorch
☆811 · Updated Jan 30, 2026 (3 months ago)
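Rotary embeddings rotate each query/key channel pair by a position-dependent angle, so that dot products between rotated queries and keys depend only on relative offsets. Below is a minimal sketch of that idea in Pytorch, using one common pairing convention; the function names are illustrative and are not this library's API.

```python
import torch

def rotary_angles(dim, seq_len, theta=10000.0):
    # One rotation frequency per channel pair, as in the RoFormer paper.
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float()
    return torch.einsum('i,j->ij', positions, inv_freq)  # (seq_len, dim // 2)

def apply_rotary(x, angles):
    # x: (batch, heads, seq_len, dim). Rotate consecutive channel pairs by the
    # position-dependent angle; attention scores between rotated queries and
    # keys then depend only on the distance between positions.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    rotated = torch.stack((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)

q = torch.randn(1, 8, 128, 64)       # (batch, heads, seq, head_dim)
angles = rotary_angles(64, 128)      # illustrative helper, not the library's API
q_rotated = apply_rotary(q, angles)  # same shape as q, positions now encoded
```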
Alternatives and similar repositories for rotary-embedding-torch
Users interested in rotary-embedding-torch are comparing it to the libraries listed below.
- An implementation of local windowed attention for language modeling ☆499 · Updated Jul 16, 2025 (9 months ago)
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,852 · Updated this week
- Vector (and Scalar) Quantization, in Pytorch ☆3,920 · Updated Apr 17, 2026 (3 weeks ago)
- Axial Positional Embedding for Pytorch ☆84 · Updated Feb 25, 2025 (last year)
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · Updated Mar 24, 2025 (last year)
- Implementation of Agent Attention in Pytorch ☆93 · Updated Jul 10, 2024 (last year)
- Implementation of GateLoop Transformer in Pytorch and Jax ☆92 · Updated Jun 18, 2024 (last year)
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch ☆2,185 · Updated Nov 27, 2024 (last year)
- Rotary Transformer ☆1,107 · Updated Mar 21, 2022 (4 years ago)
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆548 · Updated May 16, 2025 (11 months ago)
- Fast and memory-efficient exact attention ☆23,628 · Updated May 3, 2026 (last week)
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆220 · Updated Feb 13, 2023 (3 years ago)
- A Transformer made of Rotation-equivariant Attention using Vector Neurons ☆102 · Updated Aug 1, 2023 (2 years ago)
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆122 · Updated Oct 17, 2024 (last year)
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆57 · Updated Jan 5, 2023 (3 years ago)
- Implementation of a memory efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆391 · Updated Jul 18, 2023 (2 years ago)
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆345 · Updated Apr 2, 2025 (last year)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,476 · Updated Apr 19, 2026 (2 weeks ago)
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · Updated May 1, 2026 (last week)
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,545 · Updated May 31, 2024 (last year)
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network ☆226 · Updated Jun 2, 2024 (last year)
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model (a generic sketch of the update rule appears after this list) ☆650 · Updated Dec 19, 2025 (4 months ago)
- Implementation of the proposed minGRU in Pytorch ☆323 · Updated Dec 10, 2025 (4 months ago)
- Structured state space sequence models ☆2,893 · Updated Jul 17, 2024 (last year)
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Updated Jan 27, 2022 (4 years ago)
- Implementation of a single layer of the MMDiT, proposed in Stable Diffusion 3, in Pytorch ☆523 · Updated Jan 18, 2026 (3 months ago)
- Implementation of Block Recurrent Transformer - Pytorch ☆225 · Updated Aug 20, 2024 (last year)
- Implementation of Flash Attention in Jax ☆228 · Updated Mar 1, 2024 (2 years ago)
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated Apr 6, 2022 (4 years ago)
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆231 · Updated Sep 6, 2024 (last year)
- An implementation of Performer, a linear attention-based transformer, in Pytorch ☆1,177 · Updated Feb 2, 2022 (4 years ago)
- Implementation of the convolutional module from the Conformer paper, for use in Transformers ☆435 · Updated May 17, 2023 (2 years ago)
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k ☆47 · Updated Jul 16, 2023 (2 years ago)
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆227 · Updated Mar 25, 2026 (last month)
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆10,442 · Updated Apr 21, 2026 (2 weeks ago)
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆556 · Updated Oct 30, 2023 (2 years ago)
- Helpful tools and examples for working with flex-attention ☆1,182 · Updated Apr 13, 2026 (3 weeks ago)
- Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch ☆1,202 · Updated Aug 22, 2023 (2 years ago)
- Implementation of a Transformer, but completely in Triton ☆278 · Updated Apr 5, 2022 (4 years ago)
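The EMA entry in the list above boils down to a single update rule, ema = decay * ema + (1 - decay) * current, applied to every parameter after each optimizer step. The sketch below illustrates that rule generically; the names are hypothetical and this is not that repository's API.

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # Shift each shadow parameter toward the live parameter:
    # ema = decay * ema + (1 - decay) * current
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.lerp_(p, 1.0 - decay)

model = torch.nn.Linear(10, 10)
ema_model = copy.deepcopy(model)  # frozen shadow copy, typically used for evaluation
# ... after each optimizer step:
ema_update(ema_model, model)
```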