HanseulJo / position-coupling
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure (NeurIPS 2024) + Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count (ICLR 2025)
☆11 Updated 2 weeks ago
Alternatives and similar repositories for position-coupling
Users interested in position-coupling are comparing it to the libraries listed below.
- ☆20 Updated this week
- ☆31 Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 Updated 2 years ago
- Recycling diverse models ☆46 Updated 2 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 Updated 6 months ago
- ☆78 Updated 3 years ago
- ☆71 Updated 11 months ago
- ☆13 Updated 4 months ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 Updated 2 years ago
- Test-time-training on nearest neighbors for large language models ☆46 Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆31 Updated 2 years ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆19 Updated 11 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 Updated 3 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆105 Updated 2 years ago
- Efficient Scaling laws and collaborative pretraining. ☆18 Updated last month
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆29 Updated last month
- Code Repository for the NeurIPS 2022 paper: "Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights". ☆17 Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆24 Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 Updated 9 months ago
- Code for the paper "Distinguishing the Knowable from the Unknowable with Language Models" ☆10 Updated last year
- Self-Supervised Alignment with Mutual Information ☆21 Updated last year
- ☆45 Updated 2 years ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 Updated 7 months ago
- ☆34 Updated last year
- ☆35 Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 Updated 9 months ago
- ☆19 Updated 7 months ago
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data. ☆10 Updated last year