evanatyourservice / kron_torch
An implementation of the PSGD Kron second-order optimizer for PyTorch
☆97 · Updated 5 months ago
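kron_torch is intended as a drop-in replacement for a standard PyTorch optimizer. Below is a minimal sketch of wiring it into a training loop, assuming the package exposes a `Kron` class with the usual `torch.optim.Optimizer` interface; the class name, import path, and `lr` argument here are assumptions modeled on typical PyTorch optimizers, so check the repo's README for the exact signature.

```python
# Minimal sketch: swapping a Kron-style preconditioned optimizer into a
# standard PyTorch training loop. `Kron` and its `lr` argument are assumed,
# following the torch.optim.Optimizer convention; see the repo for the real API.
import torch
import torch.nn as nn

from kron_torch import Kron  # assumed import path

model = nn.Linear(128, 10)
optimizer = Kron(model.parameters(), lr=3e-4)  # assumed constructor
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()  # applies the Kronecker-factored preconditioner to the gradients
```

Since the optimizer follows the standard interface, it composes with the usual PyTorch machinery (LR schedulers, gradient clipping, checkpointing) without changes to the rest of the loop.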
Alternatives and similar repositories for kron_torch
Users interested in kron_torch are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- Efficient optimizers ☆279 · Updated last week
- Getting crystal-like representations with harmonic loss ☆194 · Updated 8 months ago
- 🧱 Modula software package ☆316 · Updated 4 months ago
- ☆230 · Updated last year
- ☆70 · Updated last year
- WIP ☆93 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- ☆122 · Updated 6 months ago
- 📄 Small Batch Size Training for Language Models ☆69 · Updated 2 months ago
- Supporting code for the blog post on modular manifolds ☆108 · Updated 3 months ago
- ☆82 · Updated last year
- Focused on fast experimentation and simplicity ☆76 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated 3 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 2 months ago
- MoE training for Me and You and maybe other people ☆298 · Updated 2 weeks ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 months ago
- ☆92 · Updated last year
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆141 · Updated last month
- Research implementation of Native Sparse Attention (arXiv 2502.11089) ☆63 · Updated 10 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 5 months ago
- ☆156 · Updated 2 months ago
- ☆212 · Updated last year
- ☆107 · Updated 5 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆336 · Updated last month
- Dion optimizer algorithm ☆411 · Updated last week