cloneofsimo / minSAE
☆30 · Updated 10 months ago
Alternatives and similar repositories for minSAE
Users interested in minSAE are comparing it to the libraries listed below.
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD only; don't use it for Adam ☆85 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- WIP ☆93 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆39 · Updated this week
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and nano-gpt speedrun ☆56 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 9 months ago
- ☆89 · Updated last year
- ☆40 · Updated 3 weeks ago
- ☆19 · Updated 4 months ago
- ☆67 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- ☆41 · Updated 5 months ago
- ☆215 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan ☆188 · Updated last year
- Research implementation of Native Sparse Attention (arXiv 2502.11089) ☆61 · Updated 7 months ago
- ☆53 · Updated last year
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- These papers provide unique, insightful concepts that will broaden your perspective on neural networks and deep learning ☆48 · Updated 2 years ago
- PyTorch implementation of the PEER block from the paper Mixture of A Million Experts, by Xu Owen He at DeepMind ☆128 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 weeks ago
- Efficient optimizers ☆265 · Updated last week
- ☆53 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆70 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆62 · Updated last week
- ☆34 · Updated last year
- Mixture of A Million Experts ☆48 · Updated last year
- ☆120 · Updated 3 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆84 · Updated 3 weeks ago