allenbai01 / transformers-as-statisticians
☆32 · Updated 2 years ago
Alternatives and similar repositories for transformers-as-statisticians
Users interested in transformers-as-statisticians are comparing it to the repositories listed below.
- ☆70 · Updated 7 months ago
- ☆18 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆17 · Updated 7 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆17 · Updated 8 months ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆21 · Updated 2 years ago
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆29 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated last year
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data ☆10 · Updated last year
- Official implementation of Rewarded Soups ☆58 · Updated last year
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆48 · Updated last year
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆108 · Updated last year
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 4 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated last year
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆59 · Updated 4 months ago
- ☆23 · Updated 9 months ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- ☆233 · Updated last year
- ☆32 · Updated 8 months ago
- The code for our NeurIPS 2021 paper "Kernelized Heterogeneous Risk Minimization" ☆13 · Updated 3 years ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆34 · Updated 2 weeks ago
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- ☆31 · Updated last year
- Bayesian Low-Rank Adaptation for Large Language Models ☆34 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆25 · Updated 8 months ago
- ☆87 · Updated last year
- ☆44 · Updated 2 years ago