jaketae/alibi
PyTorch implementation of "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation" (https://arxiv.org/abs/2108.12409)
☆25 · Updated 2 years ago
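The core of ALiBi is a static, head-specific linear bias added to the attention logits in place of positional embeddings: head h penalizes the logit for key j and query i by slope_h times their distance. A minimal sketch of that bias (assuming a power-of-two head count and a causal setting; the function names are illustrative, not this repository's API):

```python
import torch

def get_alibi_slopes(num_heads: int) -> torch.Tensor:
    # Head-specific slopes form a geometric sequence; for a power-of-two
    # head count n, the paper uses 2^(-8/n), 2^(-16/n), ..., 2^(-8).
    start = 2 ** (-8 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Relative position j - i of each key j w.r.t. each query i
    # (non-positive for past keys in the causal setting).
    pos = torch.arange(seq_len)
    distances = (pos[None, :] - pos[:, None]).float()      # (L, L)
    slopes = get_alibi_slopes(num_heads)                   # (H,)
    # Linear penalty, added to the attention logits before the softmax;
    # more distant keys receive a more negative bias.
    return slopes[:, None, None] * distances[None, :, :]   # (H, L, L)
```

With 8 heads the slopes are 0.5, 0.25, ..., 2^-8, so earlier heads down-weight distant keys more aggressively while later heads attend almost uniformly.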
Related projects:
- Implementation of a Transformer using ReLA (Rectified Linear Attention), from https://arxiv.org/abs/2104.07012 (☆49, updated 2 years ago)
- An implementation of simple diffusion in PyTorch (and JAX) (☆34, updated last year)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting along the sequence dimension for mixing (☆47, updated 2 years ago)
- Residual Quantization with Implicit Neural Codebooks (☆44, updated last month)
- Another attempt at a long-context / efficient transformer (☆37, updated 2 years ago)
- A convolution-free, transformer-only version of the CycleGAN framework (☆32, updated 2 years ago)
- JAX implementation of ViT-VQGAN (☆77, updated last year)
- A simple implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" (https://arxiv.org/abs/2312.00752) (☆19, updated 7 months ago)
- Implementation of some personal helper functions for einops, my favorite tensor-manipulation library ❤️ (☆52, updated last year)
- Implementation of Insertion-Deletion Denoising Diffusion Probabilistic Models (☆29, updated 2 years ago)
- Implementation of the Hourglass Transformer, from Google and OpenAI, in PyTorch (☆74, updated 2 years ago)
- Code for the ICLR 2023 paper "Stable Target Field for Reduced Variance Score Estimation in Diffusion Models" (☆66, updated last year)
- Implementation of Retrieval-Augmented Denoising Diffusion Probabilistic Models in PyTorch (☆64, updated 2 years ago)
- Implementation of "compositional attention" from MILA, a multi-head attention variant reframed as a two-step attention process wi… (☆50, updated 2 years ago)
- Implementation of the Remixer block from the Remixer paper, in PyTorch (☆36, updated 2 years ago)
- Implementation of MetaFormer, but in an autoregressive manner (☆22, updated 2 years ago)
- Code for the paper "PermuteFormer" (☆43, updated 2 years ago)
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch (☆94, updated last year)
- Some personal experiments around routing tokens to different autoregressive attention layers, akin to mixture-of-experts (☆101, updated last year)
- Implementation of Multistream Transformers in PyTorch (☆54, updated 3 years ago)
- Experimental scripts for researching data-adaptive learning-rate scheduling (☆23, updated 11 months ago)
- Implementation of a Light Recurrent Unit in PyTorch (☆43, updated 3 weeks ago)
- Implementation of NWT, audio-to-video generation, in PyTorch (☆87, updated 2 years ago)
- Keras implementation of Finite Scalar Quantization (☆58, updated 10 months ago)
- Accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha…) (☆58, updated 3 years ago)
- [NeurIPS 2022] "Your Transformer May Not Be as Powerful as You Expect" (official implementation) (☆30, updated last year)