epfml / schedules-and-scaling
Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations"
☆81 · Updated 9 months ago
Alternatives and similar repositories for schedules-and-scaling
Users interested in schedules-and-scaling are comparing it to the libraries listed below.
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- ☆85 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ☆78 · Updated last year
- Stick-breaking attention ☆59 · Updated last month
- ☆90 · Updated last year
- ☆53 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆84 · Updated last month
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆34 · Updated 7 months ago
- nanoGPT-like codebase for LLM training ☆102 · Updated 3 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 9 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 8 months ago
- ☆28 · Updated 6 months ago
- ☆33 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 5 months ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆60 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 11 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated 3 weeks ago
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- ☆20 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- Long Context Extension and Generalization in LLMs ☆58 · Updated 11 months ago
- ☆101 · Updated 10 months ago
- ☆56 · Updated 10 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆90 · Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- ☆49 · Updated last year
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated last month
- ☆45 · Updated last year