shreyansh26 / An-Empirical-Model-of-Large-Batch-Training
An approximate implementation of the OpenAI paper "An Empirical Model of Large-Batch Training", applied to MNIST
☆10 · Updated 2 years ago
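For context on what the repository approximates: the paper's central quantity is the simple gradient noise scale, B_simple = tr(Σ) / |G|², which it estimates from gradient norms measured at two batch sizes and uses to predict the critical batch size. Below is a minimal sketch of that estimator, assuming PyTorch; the function name and the two-batch-size measurement setup are illustrative, not this repository's actual API.

```python
import torch

def simple_noise_scale(grads_small, grads_big, b_small, b_big):
    """Estimate B_simple = tr(Sigma) / |G|^2 from gradient estimates
    measured at two batch sizes b_small < b_big (unbiased estimators,
    following Appendix A.1 of the paper)."""
    # Squared norms of the two (noisy) gradient estimates.
    g2_small = sum(torch.sum(g ** 2) for g in grads_small)
    g2_big = sum(torch.sum(g ** 2) for g in grads_big)
    # Unbiased estimate of the true gradient's squared norm |G|^2.
    g2 = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
    # Unbiased estimate of tr(Sigma), the trace of the per-example gradient covariance.
    trace_sigma = (g2_small - g2_big) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g2
```

In practice the two gradient lists would come from two backward passes at different batch sizes (or from per-worker vs. all-reduced gradients), the estimate is averaged over many training steps because a single measurement is very noisy, and the resulting noise scale is read as a rough predictor of the critical batch size.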
Alternatives and similar repositories for An-Empirical-Model-of-Large-Batch-Training
Users interested in An-Empirical-Model-of-Large-Batch-Training are comparing it to the repositories listed below.
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆57 · Updated last month
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- Experiments on the impact of depth in transformers and SSMs. ☆31 · Updated 7 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆44 · Updated last month
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 9 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆54 · Updated last year
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 8 months ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- Code for the paper "Function-Space Learning Rates" ☆20 · Updated 3 weeks ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆28 · Updated 2 months ago
- Combining SOAP and MUON ☆16 · Updated 4 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- Stick-breaking attention ☆57 · Updated last week
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆69 · Updated this week
- Here we will test various linear attention designs. ☆59 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 weeks ago