HanseulJo / position-coupling
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure (NeurIPS 2024) + Arithmetic Transformers Can Length-Generalize in Both Operand Length and Count (ICLR 2025)
☆11 · Updated 2 months ago
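For context, the core idea behind position coupling is to assign a shared position ID to tokens that play the same role in the task structure (for example, digits of the same significance in an addition), instead of strictly increasing IDs. Below is a minimal, illustrative sketch of that idea for integer addition; the function name and the exact ID scheme are assumptions for illustration (the papers additionally randomize the starting offset during training), not the repository's actual API.

```python
# Hypothetical sketch of the position-coupling idea for integer addition:
# digits with the same significance (units, tens, ...) across both operands
# and the answer share one position ID, so attention can align them at any
# operand length. The official repo's scheme may differ in details.

def coupled_position_ids(a: str, b: str, answer: str) -> list[int]:
    """Assign position IDs to the digit tokens of f"{a}+{b}={answer}".

    Numbers are right-aligned so that the units digits of a, b, and the
    answer all receive the same ID; '+' and '=' get a shared padding ID 0.
    """
    width = max(len(a), len(b), len(answer))  # width of the longest number

    def ids_for(num: str) -> list[int]:
        # Right-align: ID `width` marks the units digit, decreasing leftward.
        return list(range(width - len(num) + 1, width + 1))

    return ids_for(a) + [0] + ids_for(b) + [0] + ids_for(answer)

# Example: "653+49=702" -> the units digits 3, 9, 2 all get ID 3,
# the tens digits 5, 4, 0 all get ID 2, and so on.
print(coupled_position_ids("653", "49", "702"))
# [1, 2, 3, 0, 2, 3, 0, 1, 2, 3]
```

Because the IDs encode significance rather than absolute position, the same scheme extends unchanged to operands longer than any seen in training, which is the length-generalization mechanism the paper titles refer to.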
Alternatives and similar repositories for position-coupling
Users interested in position-coupling are comparing it to the repositories listed below.
- ☆20 · Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆81 · Updated 2 years ago
- Code and plots for "Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs" · ☆10 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… · ☆20 · Updated last year
- ☆32 · Updated last year
- ☆80 · Updated 3 years ago
- ☆52 · Updated last month
- Unofficial implementation of the Selective Attention Transformer · ☆20 · Updated last year
- Fluent dreaming for language models · ☆12 · Updated last year
- Self-Supervised Alignment with Mutual Information · ☆20 · Updated last year
- ☆35 · Updated last year
- ☆33 · Updated 2 years ago
- Bayesian Low-Rank Adaptation for Large Language Models · ☆36 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated 2 years ago
- Recycling diverse models · ☆46 · Updated 3 years ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" · ☆23 · Updated 8 months ago
- ☆45 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆31 · Updated 3 months ago
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆97 · Updated 2 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" · ☆108 · Updated 2 years ago
- ☆51 · Updated last year
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 · ☆58 · Updated last year
- Code for the NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" · ☆17 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆56 · Updated 2 years ago
- Lightweight Adapting for Black-Box Large Language Models · ☆24 · Updated last year
- Interpreting the latent-space representations of attention-head outputs for LLMs · ☆36 · Updated last year
- ☆13 · Updated 6 months ago
- ☆38 · Updated last year
- Deep Learning & Information Bottleneck · ☆63 · Updated 2 years ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… · ☆40 · Updated 2 years ago