evanatyourservice / kron_torch
An implementation of the PSGD Kron second-order optimizer for PyTorch
☆98 · Updated 5 months ago
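Since it is a PyTorch optimizer, it should drop into a standard training loop. Below is a minimal sketch, assuming the package exposes a `Kron` class with the usual `torch.optim.Optimizer` interface; the import path and the `lr` value are assumptions, so check the repo's README for the actual names and defaults.

```python
# Minimal sketch of using a PSGD Kron-style optimizer in a PyTorch loop.
# `Kron` and its constructor args are assumed; check kron_torch's README.
import torch
from kron_torch import Kron  # assumed import path

model = torch.nn.Linear(16, 4)
optimizer = Kron(model.parameters(), lr=3e-4)  # placeholder learning rate

x, y = torch.randn(32, 16), torch.randn(32, 4)
for step in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()  # preconditioned (second-order) update
```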
Alternatives and similar repositories for kron_torch
Users interested in kron_torch are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- Getting crystal-like representations with harmonic loss ☆195 · Updated 9 months ago
- 🧱 Modula software package ☆322 · Updated 5 months ago
- Efficient optimizers ☆281 · Updated last month
- WIP ☆93 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- ☆123 · Updated 7 months ago
- ☆237 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆79 · Updated 3 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- ☆214 · Updated last year
- ☆70 · Updated last year
- Official implementation of the paper: "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆141 · Updated 2 months ago
- Focused on fast experimentation and simplicity ☆79 · Updated last year
- Modular, scalable library to train ML models ☆187 · Updated last week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Implementation of Diffusion Transformer (DiT) in JAX ☆305 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Scalable and Performant Data Loading ☆363 · Updated this week
- ☆92 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- ☆27 · Updated 3 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆185 · Updated 6 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Updated 3 months ago
- For optimization algorithm research and development. ☆556 · Updated last week
- ☆314 · Updated last year
- Dion optimizer algorithm ☆419 · Updated this week
- ☆82 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago