LucasPrietoAl / grokking-at-the-edge-of-numerical-stability
☆ 56 · Updated last week
Alternatives and similar repositories for grokking-at-the-edge-of-numerical-stability:
Users interested in grokking-at-the-edge-of-numerical-stability are comparing it to the repositories listed below.
- ☆ 69 · Updated 5 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆ 95 · Updated 3 weeks ago
- Supporting PyTorch FSDP for optimizers ☆ 75 · Updated last month
- ☆ 75 · Updated 6 months ago
- ☆ 53 · Updated last year
- ☆ 78 · Updated 9 months ago
- ☆ 62 · Updated 3 months ago
- Experiments in training a new and improved T5 ☆ 77 · Updated 9 months ago
- Focused on fast experimentation and simplicity ☆ 64 · Updated 3 weeks ago
- Collection of autoregressive model implementations ☆ 76 · Updated last week
- Token Omission Via Attention ☆ 122 · Updated 3 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆ 36 · Updated 3 months ago
- ☆ 41 · Updated last year
- Train, tune, and run inference with the Bamba model ☆ 76 · Updated this week
- ☆ 49 · Updated 4 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆ 121 · Updated 9 months ago
- ☆ 43 · Updated 2 months ago
- GoldFinch and other hybrid transformer components ☆ 42 · Updated 5 months ago
- ☆ 65 · Updated 6 months ago
- WIP ☆ 92 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆ 90 · Updated last month
- ☆ 67 · Updated 5 months ago
- A general framework for inference-time scaling and steering of diffusion models with arbitrary rewards. ☆ 40 · Updated this week
- Code for TrackTheMind ☆ 67 · Updated last month
- A JAX-like function transformation engine, but micro: microjax ☆ 30 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆ 145 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆ 76 · Updated 7 months ago
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆ 77 · Updated 6 months ago
- ☆ 49 · Updated 10 months ago
- ☆ 33 · Updated 4 months ago