lucidrains / sinkhorn-router-pytorch
Self-contained PyTorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise
☆34 · Updated 8 months ago
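For orientation, the technique this repo is named after: rather than assigning each token to its highest-logit expert (which tends to overload a few experts), a Sinkhorn router balances the token-to-expert affinity matrix with a few rounds of alternating normalization before taking the assignment. Below is a minimal PyTorch sketch of that general idea, assuming top-1 routing; the names (`sinkhorn`, `num_iters`) and shapes are illustrative, not this repo's actual API.

```python
import torch

@torch.no_grad()
def sinkhorn(logits: torch.Tensor, num_iters: int = 8) -> torch.Tensor:
    # logits: (num_tokens, num_experts) token-to-expert affinities.
    # Alternate row/column normalization in log space (numerically
    # stable Sinkhorn iteration) so no expert is starved of tokens.
    log_alpha = logits
    for _ in range(num_iters):
        log_alpha = log_alpha - log_alpha.logsumexp(dim=-1, keepdim=True)
        log_alpha = log_alpha - log_alpha.logsumexp(dim=-2, keepdim=True)
    return log_alpha.exp()

# hypothetical usage for top-1 mixture-of-experts routing
tokens = torch.randn(1024, 512)            # (num_tokens, dim)
gate_proj = torch.nn.Linear(512, 8, bias=False)
logits = gate_proj(tokens)                 # (num_tokens, num_experts)
assignment = sinkhorn(logits)              # balanced assignment matrix
expert_index = assignment.argmax(dim=-1)   # which expert each token visits
# gate values come from the raw logits so gradients reach the gate
gates = logits.softmax(dim=-1).gather(-1, expert_index.unsqueeze(-1))
```

Running the Sinkhorn pass under `no_grad` and taking the gate value from the plain softmax is a common design choice: the iterations only decide a balanced expert index, while gradients still flow to the gating projection.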
Alternatives and similar repositories for sinkhorn-router-pytorch
Users interested in sinkhorn-router-pytorch are comparing it to the libraries listed below.
- Explorations into improving ViTArc with Slot Attention ☆41 · Updated 6 months ago
- Implementation of Infini-Transformer in Pytorch ☆110 · Updated 4 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 6 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆46 · Updated 7 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆56 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆99 · Updated 4 months ago
- Exploration into the Firefly algorithm in Pytorch ☆38 · Updated 3 months ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆36 · Updated 7 months ago
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al. with a few convenient wrappers for regression, in Pytorch (see the sketch after this list) ☆59 · Updated 2 weeks ago
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning. ☆78 · Updated 3 weeks ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated 8 months ago
- Pytorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆50 · Updated 9 months ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch (see the sketch after this list) ☆103 · Updated 5 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆51 · Updated last month
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆68 · Updated 3 weeks ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆88 · Updated 10 months ago
- ☆27 · Updated last year
- ☆37 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆99 · Updated this week
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from Deepmind ☆50 · Updated 3 months ago
- ☆31 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆39 · Updated 7 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 7 months ago
- JAX Scalify: end-to-end scaled arithmetic ☆16 · Updated 6 months ago
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆82 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns ☆43 · Updated 7 months ago
- Experimental scripts for researching data adaptive learning rate scheduling. ☆23 · Updated last year
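As flagged in the HL-Gauss item above, the idea from Imani et al. is to recast regression as classification: each scalar target y is softened into the probability mass a Gaussian N(y, σ²) places in each value bin, the network trains with cross-entropy against that histogram, and predictions are read out as the expected bin center. A minimal sketch under assumed shapes; `hl_gauss_targets` is a hypothetical helper, not that repo's API.

```python
import torch
import torch.nn.functional as F

def hl_gauss_targets(y: torch.Tensor, bin_edges: torch.Tensor, sigma: float):
    # y: (batch,) regression targets; bin_edges: (num_bins + 1,) sorted edges.
    # Mass that N(y, sigma^2) assigns to each bin, via CDF differences
    # (edge bins lose the tail mass falling outside the supported range).
    cdf = torch.distributions.Normal(y.unsqueeze(-1), sigma).cdf(bin_edges)
    return cdf[..., 1:] - cdf[..., :-1]                # (batch, num_bins)

# hypothetical usage
bin_edges = torch.linspace(-5.0, 5.0, 65)              # 64 bins
centers = (bin_edges[:-1] + bin_edges[1:]) / 2
logits = torch.randn(32, 64, requires_grad=True)       # network output
y = torch.randn(32)
loss = F.cross_entropy(logits, hl_gauss_targets(y, bin_edges, sigma=0.25))
prediction = logits.softmax(dim=-1) @ centers          # expected-value readout
```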
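And for the Adam-atan2 item: the proposal replaces Adam's ε-guarded division m̂ / (√v̂ + ε) with atan2(m̂, √v̂), which behaves like the division when the ratio is small but is bounded and invariant to rescaling both arguments, so ε disappears as a hyperparameter. A minimal single-tensor sketch assuming standard Adam bookkeeping; a real optimizer would also handle weight decay, parameter groups, and state management.

```python
import torch

@torch.no_grad()
def adam_atan2_step(param, grad, exp_avg, exp_avg_sq,
                    lr=1e-3, beta1=0.9, beta2=0.99, step=1):
    # standard Adam first/second moment updates
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # bias correction
    m_hat = exp_avg / (1 - beta1 ** step)
    v_hat = exp_avg_sq / (1 - beta2 ** step)
    # atan2(x, y) ~ x / y for small ratios, stays bounded, and needs
    # no epsilon even when v_hat is exactly zero
    param.add_(torch.atan2(m_hat, v_hat.sqrt()), alpha=-lr)
```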