riverstone496 / awesome-second-order-optimization
⭐28 · Updated 2 months ago
Alternatives and similar repositories for awesome-second-order-optimization
Users interested in awesome-second-order-optimization are comparing it to the libraries listed below.
- Small Batch Size Training for Language Models ⭐68 · Updated 2 months ago
- Supporting code for the blog post on modular manifolds. ⭐104 · Updated 2 months ago
- ⭐121 · Updated 6 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ⭐97 · Updated 4 months ago
- ⭐227 · Updated last year
- 🧱 Modula software package ⭐315 · Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ⭐84 · Updated last year
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ⭐124 · Updated 8 months ago
- Dion optimizer algorithm ⭐403 · Updated this week
- WIP ⭐93 · Updated last year
- ⭐69 · Updated last week
- ⭐62 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ⭐85 · Updated 3 months ago
- Flash Attention Triton kernel with support for second-order derivatives ⭐117 · Updated last month
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ⭐330 · Updated 3 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ⭐86 · Updated last year
- ⭐53 · Updated last year
- Deep Networks Grok All the Time and Here is Why ⭐38 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ⭐69 · Updated last year
- ⭐91 · Updated last year
- Code for the paper "Function-Space Learning Rates" ⭐23 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐174 · Updated 5 months ago
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ⭐113 · Updated last month
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ⭐80 · Updated 6 months ago
- Official Jax Implementation of MD4 Masked Diffusion Models ⭐147 · Updated 9 months ago
- ⭐34 · Updated last year
- Normalized Transformer (nGPT) ⭐193 · Updated last year
- A comprehensive JAX/NNX library for diffusion and flow matching generative algorithms, featuring DiT (Diffusion Transformer) and its vari… ⭐119 · Updated last month
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ⭐132 · Updated last year
- A basic pure PyTorch implementation of Flash Attention ⭐16 · Updated last year
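
For orientation, the common idea behind the second-order optimizers collected above is to precondition the gradient with curvature information rather than apply it raw. Below is a minimal, self-contained sketch of a damped Newton step on a toy quadratic; it is illustrative only and is not taken from any repository listed here.

```python
# Minimal sketch (not from any listed repo) of a damped Newton step:
# precondition the gradient with (approximate) inverse-Hessian curvature.
import torch

def loss_fn(w):
    # Toy ill-conditioned quadratic standing in for a model's loss surface.
    A = torch.diag(torch.tensor([1.0, 100.0]))
    return 0.5 * w @ A @ w

w = torch.tensor([1.0, 1.0], requires_grad=True)
damping = 1e-3  # keeps the linear solve well-posed when H is near-singular

for step in range(5):
    grad = torch.autograd.grad(loss_fn(w), w)[0]
    H = torch.autograd.functional.hessian(loss_fn, w)
    # Solve (H + damping * I) d = g instead of inverting H explicitly.
    direction = torch.linalg.solve(H + damping * torch.eye(2), grad)
    with torch.no_grad():
        w -= direction
    print(f"step {step}: loss = {loss_fn(w).item():.6f}")
```

Exact Hessian solves are only feasible on toy problems; the repositories above generally replace them with cheaper structured approximations or preconditioners (e.g., Kronecker-factored ones), which is what makes the idea practical at model scale.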