chengxiang / LinearTransformer
PyTorch code for experiments on Linear Transformers
☆ 13 · Updated 10 months ago
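The repository's own code is not included in this listing, so as orientation, here is a minimal sketch of linear (kernelized) attention in the sense of Katharopoulos et al., "Transformers are RNNs" — an assumption about what "Linear Transformers" refers to here; the function names and the NumPy framing are illustrative, not taken from the repo:

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a positive feature map, so attention weights stay non-negative
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Attention in O(n * d^2) instead of O(n^2 * d):
    phi(Q) @ (phi(K).T @ V), normalized row-wise."""
    phi_q = feature_map(Q)            # (n, d)
    phi_k = feature_map(K)            # (n, d)
    kv = phi_k.T @ V                  # (d, d_v): keys/values summarized once
    z = phi_q @ phi_k.sum(axis=0)     # (n,): per-query normalizer
    return (phi_q @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.normal(size=(3, n, d))  # toy queries, keys, values
out = linear_attention(Q, K, V)
print(out.shape)  # prints (8, 4)
```

Because the softmax is replaced by a factorized kernel, the key/value summary `kv` can be computed once and reused for every query, which is what makes the cost linear in sequence length.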
Related projects
Alternatives and complementary repositories for LinearTransformer
- ☆ 59 · Updated 3 years ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆ 22 · Updated this week
- Efficient empirical NTKs in PyTorch ☆ 16 · Updated 2 years ago
- Neural Tangent Kernel Papers ☆ 92 · Updated 8 months ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆ 27 · Updated 4 months ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆ 35 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆ 49 · Updated last year
- [NeurIPS 2021] A Geometric Analysis of Neural Collapse with Unconstrained Features ☆ 53 · Updated 2 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆ 84 · Updated last year
- Source code of "What can linearized neural networks actually say about generalization?" ☆ 18 · Updated 3 years ago
- Bayesian low-rank adaptation for large language models ☆ 23 · Updated 6 months ago
- ☆ 32 · Updated 9 months ago
- Official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆ 44 · Updated 5 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆ 42 · Updated last year
- Feature learning in deep classifiers through Intermediate Neural Collapse (ICML 2023): accompanying code ☆ 13 · Updated last year
- ☆ 26 · Updated last year
- ☆ 197 · Updated 6 months ago
- Deep Learning & Information Bottleneck ☆ 50 · Updated last year
- Papers, books, tutorials, and resources on Riemannian optimization ☆ 15 · Updated this week
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆ 79 · Updated last year
- Simple CIFAR10 ResNet example with JAX ☆ 21 · Updated 3 years ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆ 20 · Updated last year
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆ 40 · Updated 6 months ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆ 25 · Updated last year
- ☆ 32 · Updated last year
- Distilling Model Failures as Directions in Latent Space ☆ 45 · Updated last year
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆ 100 · Updated 3 months ago
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆ 61 · Updated last year
- ☆ 59 · Updated 2 years ago
- Conformal Language Modeling ☆ 22 · Updated 10 months ago