TheMody / No-learning-rates-needed-Introducing-SALSA-Stable-Armijo-Line-Search-Adaptation
SaLSa Optimizer implementation (No learning rates needed)
☆28 · Updated last month
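For context, the idea behind line-search optimizers like SALSA is that the step size is chosen automatically at each iteration by a backtracking (Armijo) sufficient-decrease test instead of a hand-tuned learning rate. Below is a minimal, illustrative sketch of plain Armijo backtracking on a toy quadratic; the names (`f`, `grad_f`, `armijo_step`) and constants are assumptions for illustration only, not the repo's API, and SALSA's stabilized variant for stochastic deep-learning losses differs in its details.

```python
# Minimal sketch of Armijo backtracking line search on a toy quadratic.
# NOTE: names and defaults here are illustrative assumptions, not SALSA's API.

def f(x):
    # toy objective: f(x) = (x - 3)^2, minimized at x = 3
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)

def armijo_step(x, eta=1.0, c=1e-4, shrink=0.5, max_backtracks=50):
    """Shrink the trial step until the sufficient-decrease condition
    f(x - eta*g) <= f(x) - c*eta*g**2 holds, so no learning rate is tuned by hand."""
    g = grad_f(x)
    fx = f(x)
    for _ in range(max_backtracks):
        if f(x - eta * g) <= fx - c * eta * g * g:
            break
        eta *= shrink
    return x - eta * g, eta

x = 0.0
for _ in range(10):
    x, eta = armijo_step(x)
print(f"x = {x:.4f} (optimum 3.0), last accepted step size = {eta:g}")
```

Each call starts from a generous trial step and halves it until the loss decrease is large enough relative to the gradient norm, which is why no fixed learning rate needs to be specified.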
Alternatives and similar repositories for No-learning-rates-needed-Introducing-SALSA-Stable-Armijo-Line-Search-Adaptation:
Users interested in this repository are comparing it to the libraries listed below.
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆98 · Updated 3 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆83 · Updated last month
- ☆79 · Updated 11 months ago
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆77 · Updated last month
- Implementation of a Light Recurrent Unit in Pytorch ☆47 · Updated 5 months ago
- A State-Space Model with Rational Transfer Function Representation. ☆78 · Updated 10 months ago
- Train a SmolLM-style llm on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated this week
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆120 · Updated 7 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆115 · Updated 9 months ago
- Focused on fast experimentation and simplicity ☆69 · Updated 2 months ago
- A HuggingFace compatible Small Language Model trainer. ☆74 · Updated last month
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆76 · Updated this week
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks". This rep… ☆54 · Updated 4 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆64 · Updated 10 months ago
- ☆31 · Updated 10 months ago
- Pytorch (Lightning) implementation of the Mamba model ☆25 · Updated 11 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆62 · Updated 7 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated 11 months ago
- ☆58 · Updated 4 months ago
- PyTorch implementation of models from the Zamba2 series. ☆177 · Updated 2 months ago
- ☆91 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 9 months ago
- This is the official repo for Gradient Agreement Filtering (GAF). ☆23 · Updated last month
- ☆42 · Updated last month
- ☆169 · Updated 3 months ago
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆58 · Updated 4 months ago
- NLP with Rust for Python 🦀🐍 ☆61 · Updated 9 months ago