chengxiang / LinearTransformer
PyTorch code for experiments on Linear Transformers
☆20 · Updated last year
Alternatives and similar repositories for LinearTransformer:
Users interested in LinearTransformer are comparing it to the repositories listed below.
- ☆65 · Updated 3 months ago
- ☆29 · Updated last year
- Efficient empirical NTKs in PyTorch ☆18 · Updated 2 years ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆23 · Updated 9 months ago
- Neural Tangent Kernel Papers ☆106 · Updated 2 months ago
- Deep Learning & Information Bottleneck ☆58 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 8 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆15 · Updated 4 months ago
- (ICML 2023) Feature learning in deep classifiers through Intermediate Neural Collapse: Accompanying code ☆13 · Updated last year
- ☆18 · Updated 4 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆35 · Updated 2 years ago
- ☆16 · Updated 6 months ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆34 · Updated last year
- This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023) ☆47 · Updated 9 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 3 years ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 4 months ago
- ☆19 · Updated last year
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆20 · Updated last year
- Bayesian low-rank adaptation for large language models ☆22 · Updated 10 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆97 · Updated last year
- Universal Neurons in GPT2 Language Models ☆27 · Updated 9 months ago
- Benchmark for Natural Temporal Distribution Shift (NeurIPS 2022) ☆65 · Updated last year
- ☆15 · Updated 11 months ago
- ☆60 · Updated 3 years ago
- ☆28 · Updated 8 months ago
- Parallelizing non-linear sequential models over the sequence length ☆51 · Updated 2 months ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- ☆17 · Updated last year