JeanKaddour / LAWA
Latest Weight Averaging (NeurIPS HITY 2022)
☆32 · Updated 2 years ago
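For context, latest weight averaging (LAWA) keeps a buffer of the k most recent training checkpoints and evaluates the uniform average of those weights rather than the final iterate. Below is a minimal sketch of that idea in PyTorch; the class and helper names are illustrative and not the repository's actual API.

```python
# Minimal sketch of latest-weight-averaging-style checkpoint averaging.
# Assumes a standard PyTorch nn.Module; names here are hypothetical,
# not the LAWA repository's interface.
from collections import deque
import copy
import torch

def average_state_dicts(state_dicts):
    """Uniformly average floating-point entries of state dicts with identical keys."""
    avg = copy.deepcopy(state_dicts[-1])
    for key in avg:
        if torch.is_floating_point(avg[key]):
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        # Non-float entries (e.g. BatchNorm step counters) keep the latest value.
    return avg

class LatestWeightAverager:
    """Keeps the k most recent checkpoints and averages them on demand."""
    def __init__(self, k=5):
        self.buffer = deque(maxlen=k)  # oldest checkpoint is dropped automatically

    def update(self, model):
        # Store a detached CPU copy so the buffer does not pin GPU memory.
        self.buffer.append(
            {name: t.detach().cpu().clone() for name, t in model.state_dict().items()}
        )

    def averaged_state_dict(self):
        return average_state_dicts(list(self.buffer))
```

A typical usage pattern would be to call `averager.update(model)` every epoch (or every N steps), then load `averager.averaged_state_dict()` into a copy of the model before evaluation.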
Alternatives and similar repositories for LAWA
Users interested in LAWA are comparing it to the repositories listed below.
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆70 · Updated last year
- ☆52 · Updated last month
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- ☆52 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆32 · Updated 4 months ago
- Recycling diverse models ☆46 · Updated 3 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆37 · Updated 3 years ago
- ☆91 · Updated last year
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆52 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 · Updated 2 years ago
- ☆35 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- Unofficial Implementation of Selective Attention Transformer ☆20 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated 2 months ago
- Personal implementation of ASIF by Antonio Norelli ☆26 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 7 months ago
- ☆62 · Updated last year
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns… ☆16 · Updated 7 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 10 months ago
- ☆83 · Updated 2 years ago