lucidrains / self-reasoning-tokens-pytorch
Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto
☆55 · Updated 11 months ago
Alternatives and similar repositories for self-reasoning-tokens-pytorch
Users interested in self-reasoning-tokens-pytorch are comparing it to the libraries listed below.
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆36 · Updated 2 months ago
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆36 · Updated 7 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆103 · Updated 5 months ago
- Implementation of Infini-Transformer in Pytorch ☆110 · Updated 4 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch ☆25 · Updated 3 months ago
- Explorations into improving ViTArc with Slot Attention ☆41 · Updated 6 months ago
- Implementation of a multimodal diffusion transformer in Pytorch ☆102 · Updated 10 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 8 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch ☆59 · Updated 2 weeks ago
- Implementation of Agent Attention in Pytorch ☆89 · Updated 10 months ago
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆88 · Updated 6 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 6 months ago
- Official code implementation for the work Preference Alignment with Flow Matching (NeurIPS 2024) ☆49 · Updated 6 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆46 · Updated 7 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆99 · Updated this week
- Griffin MQA + Hawk Linear RNN Hybrid ☆86 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 10 months ago
- (no description) ☆33 · Updated 8 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated last year
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated last year
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆50 · Updated 2 months ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆50 · Updated 9 months ago
- FID computation in Jax/Flax. ☆27 · Updated 9 months ago
- Focused on fast experimentation and simplicity ☆72 · Updated 4 months ago
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆82 · Updated 3 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- My attempts at applying Soundstream design on learned tokenization of text and then applying hierarchical attention to text generation ☆86 · Updated 7 months ago