JeanKaddour / LAWA
Latest Weight Averaging (NeurIPS HITY 2022)
☆32 · Updated 2 years ago
Alternatives and similar repositories for LAWA
Users interested in LAWA are comparing it to the libraries listed below.
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆52 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆70 · Updated last year
- Recycling diverse models ☆46 · Updated 3 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- ☆52 · Updated last month
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆37 · Updated 3 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆32 · Updated 3 months ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns… ☆16 · Updated 7 months ago
- ☆45 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆47 · Updated last year
- Blog post ☆17 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- A centralized place for deep thinking code and experiments ☆90 · Updated 2 years ago
- ☆91 · Updated last year
- ☆62 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ☆114 · Updated 2 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- ☆30 · Updated 2 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆52 · Updated last year
- ☆83 · Updated 2 years ago