Z-T-WANG / LaProp-Optimizer
Codes accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient"
☆28 · Updated 4 years ago
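The paper's core idea is to decouple momentum from adaptivity: the gradient is divided by the adaptive (RMS-style) denominator *before* it enters the momentum accumulator, rather than after, as in Adam. A minimal NumPy sketch of that update, with illustrative parameter names and defaults (not the repository's actual API):

```python
import numpy as np

def laprop_step(param, grad, m, v, t, lr=1e-3, mu=0.9, nu=0.999, eps=1e-8):
    """One LaProp step (minimal sketch, step count t starts at 1).

    Unlike Adam, the raw gradient is normalized by the adaptive
    denominator first, and momentum is accumulated on the result.
    """
    v = nu * v + (1 - nu) * grad**2           # second-moment estimate
    denom = np.sqrt(v / (1 - nu**t)) + eps    # bias-corrected RMS denominator
    m = mu * m + (1 - mu) * grad / denom      # momentum of the normalized gradient
    param = param - lr * m / (1 - mu**t)      # bias-corrected update
    return param, m, v
```

On a toy quadratic `f(x) = x**2` this converges toward the minimum, since each step has roughly constant magnitude `lr` regardless of gradient scale.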
Alternatives and similar repositories for LaProp-Optimizer:
Users interested in LaProp-Optimizer are comparing it to the libraries listed below.
- Implementation of Gradient Agreement Filtering, from Chaubard et al. at Stanford, adapted for single-machine microbatches, in PyTorch ☆24 · Updated 3 months ago
- Code for the paper "Function-Space Learning Rates" ☆19 · Updated 2 weeks ago
- Implementation of Spectral State Space Models ☆16 · Updated last year
- A simple hypernetwork implementation in JAX using Haiku ☆23 · Updated 2 years ago
- Flexible meta-learning in JAX ☆13 · Updated last year
- AdaCat ☆49 · Updated 2 years ago
- Code for minimum-entropy coupling ☆31 · Updated 10 months ago
- JAX implementation of "Fine-Tuning Language Models with Just Forward Passes" ☆19 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator ☆31 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated last month
- High-quality implementations of imitation and inverse reinforcement learning algorithms ☆14 · Updated last month
- Generative cellular automaton-like learning environments for RL ☆19 · Updated 3 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- FID computation in JAX/Flax ☆27 · Updated 9 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆19 · Updated last month
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- The official code of "Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers" ☆19 · Updated 9 months ago