TheMody / No-learning-rates-needed-Introducing-SALSA-Stable-Armijo-Line-Search-Adaptation
SALSA optimizer implementation (no learning rates needed)
☆31 · Updated 5 months ago
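For context: SALSA (Stable Armijo Line Search Adaptation) removes hand-tuned learning rates by choosing each update's step size with an Armijo-style line search. Below is a minimal sketch of classic Armijo backtracking in PyTorch, the mechanism SALSA builds on; the function name `armijo_step`, its defaults, and its structure are illustrative assumptions, not the API of this repository.

```python
import torch

def armijo_step(params, closure, eta0=1.0, c=0.1, beta=0.5, max_backtracks=10):
    """One gradient step whose step size eta is found by backtracking until
    the Armijo condition holds: f(theta - eta*g) <= f(theta) - c*eta*||g||^2.
    Illustrative sketch only -- not the SALSA repository's actual API."""
    loss = closure()                                   # forward pass with autograd
    grads = torch.autograd.grad(loss, params)
    grad_sq = sum((g * g).sum() for g in grads)        # ||g||^2

    eta = eta0
    with torch.no_grad():
        for _ in range(max_backtracks):
            for p, g in zip(params, grads):            # trial update
                p.sub_(eta * g)
            if closure() <= loss - c * eta * grad_sq:  # sufficient decrease: accept
                return loss
            for p, g in zip(params, grads):            # reject: undo, shrink eta
                p.add_(eta * g)
            eta *= beta
    return loss                                        # no step accepted this round

# Hypothetical usage on a least-squares toy problem:
w = torch.randn(10, requires_grad=True)
x, y = torch.randn(32, 10), torch.randn(32)
for _ in range(100):
    armijo_step([w], lambda: ((x @ w - y) ** 2).mean())
```

SALSA itself goes beyond this sketch (notably by stabilizing the searched step size across iterations), so treat the above as the underlying idea rather than the paper's method.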
Alternatives and similar repositories for No-learning-rates-needed-Introducing-SALSA-Stable-Armijo-Line-Search-Adaptation
Users interested in No-learning-rates-needed-Introducing-SALSA-Stable-Armijo-Line-Search-Adaptation are comparing it to the repositories listed below.
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 10 months ago
- Getting crystal-like representations with harmonic loss ☆192 · Updated 7 months ago
- ☆81 · Updated last year
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆165 · Updated 9 months ago
- ☆102 · Updated 3 months ago
- Training small GPT-2-style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older ☆186 · Updated last year
- PyTorch implementation of models from the Zamba2 series ☆185 · Updated 9 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆96 · Updated 3 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆72 · Updated last week
- ICLR 2025 official implementation of "I-Con: A Unifying Framework for Representation Learning" ☆117 · Updated 4 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆202 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers ☆66 · Updated last year
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆116 · Updated last week
- A HuggingFace-compatible small language model trainer ☆76 · Updated 9 months ago
- A state-space model with a rational transfer function representation ☆82 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last year
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated last month
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch ☆66 · Updated 2 months ago
- Implementation of the Spline-Based Transformer proposed by Disney Research ☆104 · Updated 11 months ago
- A repository for log-time feedforward networks ☆222 · Updated last year
- Code for the paper "Don't Pay Attention" ☆50 · Updated last month
- ☆150 · Updated last year
- ☆220 · Updated 10 months ago
- Code from our practical deep dive into using Mamba for information extraction ☆55 · Updated last year
- Miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆123 · Updated last year
- ☆70 · Updated last year
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆91 · Updated 4 months ago
- ☆58 · Updated last year