allenbai01 / transformers-as-statisticians
☆31 · Updated last year
Alternatives and similar repositories for transformers-as-statisticians:
Users interested in transformers-as-statisticians are comparing it to the repositories listed below.
- Official repository for the paper "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode…" ☆16 · Updated 5 months ago
- ☆67 · Updated 4 months ago
- ☆17 · Updated last year
- Code for "Accelerated Linearized Laplace Approximation for Bayesian Deep Learning" (ELLA, NeurIPS 2022) ☆16 · Updated 2 years ago
- "What Makes a Reward Model a Good Teacher? An Optimization Perspective" ☆23 · Updated 2 weeks ago
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆48 · Updated 10 months ago
- "Towards Understanding Sharpness-Aware Minimization" (ICML 2022) ☆35 · Updated 2 years ago
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆66 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Kernelized Heterogeneous Risk Minimization" ☆12 · Updated 3 years ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" (https://arxiv.org/abs/2406.11233…) ☆17 · Updated 7 months ago
- A Python package providing a benchmark with various specified distribution-shift patterns ☆57 · Updated last year
- "A Modern Look at the Relationship between Sharpness and Generalization" (ICML 2023) ☆43 · Updated last year
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆105 · Updated last year
- Code for the paper "Pretraining Task Diversity and the Emergence of Non-Bayesian In-Context Learning for Regression" ☆21 · Updated last year
- Provably (and non-vacuously) bounding test error of deep neural networks under distribution shift with unlabeled test data ☆10 · Updated last year
- Official implementation of Rewarded Soups ☆58 · Updated last year
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆29 · Updated last year
- ☆18 · Updated 9 months ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆57 · Updated last month
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆54 · Updated 3 weeks ago
- "Gradient Estimation with Discrete Stein Operators" (NeurIPS 2022) ☆17 · Updated last year
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated last year
- ☆15 · Updated 5 months ago
- ☆40 · Updated last year
- ☆24 · Updated last year
- "Revisiting Efficient Training Algorithms for Transformer-based Language Models" (NeurIPS 2023) ☆80 · Updated last year
- Preprint: "Asymmetry in Low-Rank Adapters of Foundation Models" ☆35 · Updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆24 · Updated 10 months ago
- Lightweight adapting for black-box large language models ☆22 · Updated last year
- Bayesian low-rank adaptation for large language models ☆23 · Updated 11 months ago