cloneofsimo / insightful-nn-papers
These papers provide unique, insightful concepts that will broaden your perspective on neural networks and deep learning.
☆48 · Updated 2 years ago
Alternatives and similar repositories for insightful-nn-papers
Users interested in insightful-nn-papers are comparing it to the repositories listed below.
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam ☆85 · Updated last year
- ☆53 · Updated last year
- WIP ☆94 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆87 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- Flexibly track outputs and grad-outputs of torch.nn.Module. ☆13 · Updated last year
- Sparse Autoencoders for Stable Diffusion XL models. ☆70 · Updated last month
- ☆34 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Implementation of Discrete Key / Value Bottleneck, in PyTorch ☆88 · Updated 2 years ago
- ☆23 · Updated last year
- ☆30 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- Mixture of A Million Experts ☆47 · Updated last year
- ☆91 · Updated 2 years ago
- Reproduction of the DDPO paper (RLHF for diffusion) ☆90 · Updated last year
- ☆104 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆157 · Updated 2 months ago
- ☆53 · Updated 9 months ago
- ☆106 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Consistency models trained on CIFAR-10, in JAX. ☆150 · Updated 2 years ago
- ☆166 · Updated 2 years ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 6 months ago
- Model Stock: All we need is just a few fine-tuned models ☆122 · Updated last month
- Focused on fast experimentation and simplicity ☆75 · Updated 8 months ago
- ☆39 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago