allenbai01 / transformers-as-statisticians
☆29 · Updated last year
Alternatives and similar repositories for transformers-as-statisticians:
Users interested in transformers-as-statisticians are comparing it to the repositories listed below
- ☆65 · Updated 3 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆15 · Updated 4 months ago
- ☆15 · Updated 11 months ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆23 · Updated 9 months ago
- Code for Accelerated Linearized Laplace Approximation for Bayesian Deep Learning (ELLA, NeurIPS 2022) ☆16 · Updated 2 years ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- Deep Learning & Information Bottleneck ☆58 · Updated last year
- Distributional and Outlier Robust Optimization (ICML 2021) ☆26 · Updated 3 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆20 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆65 · Updated last year
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆47 · Updated 9 months ago
- Efficient empirical NTKs in PyTorch ☆18 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- Bayesian low-rank adaptation for large language models ☆22 · Updated 10 months ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆29 · Updated 9 months ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated this week
- ☆37 · Updated last year
- PyTorch code for experiments on Linear Transformers ☆20 · Updated last year
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆29 · Updated last year
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆53 · Updated 2 weeks ago
- Gradient Estimation with Discrete Stein Operators (NeurIPS 2022) ☆17 · Updated last year
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆55 · Updated 2 years ago
- Official implementation of Rewarded soups ☆55 · Updated last year
- Official implementation of Transformer Neural Processes ☆71 · Updated 2 years ago
- ☆18 · Updated 8 months ago
- Code for "The Expressive Power of Low-Rank Adaptation" ☆20 · Updated 11 months ago
- ☆28 · Updated 8 months ago