allenbai01 / transformers-as-statisticians
☆34 · Updated 2 years ago
Alternatives and similar repositories for transformers-as-statisticians
Users interested in transformers-as-statisticians are comparing it to the repositories listed below.
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆17 · Updated 10 months ago
- ☆70 · Updated 10 months ago
- ☆18 · Updated last year
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆23 · Updated 2 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆108 · Updated last year
- ☆240 · Updated last year
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 4 years ago
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆30 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Efficient empirical NTKs in PyTorch ☆22 · Updated 3 years ago
- Learning Safety Constraints for Large Language Models (ICML 2025) ☆23 · Updated 2 months ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆36 · Updated last year
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data ☆10 · Updated last year
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated last year
- Code for "Accelerated Linearized Laplace Approximation for Bayesian Deep Learning" (ELLA, NeurIPS 2022) ☆16 · Updated 2 years ago
- ☆31 · Updated 6 months ago
- Official implementation of Rewarded Soups ☆60 · Updated 2 years ago
- Code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆39 · Updated 2 years ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated 5 months ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆63 · Updated 6 months ago
- Parallelizing non-linear sequential models over the sequence length ☆54 · Updated 3 months ago
- ☆33 · Updated 11 months ago
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆36 · Updated 2 weeks ago
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆51 · Updated last year
- ☆20 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆45 · Updated last year
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆35 · Updated 6 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year