chengxiang / LinearTransformer
PyTorch code for experiments on Linear Transformers
☆24 · Updated last year
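For context on what this repository family studies, here is a minimal sketch of the linear-attention computation. It is illustrative only, not code from this repository: the function name, the feature map `elu(x) + 1`, and the non-causal formulation are all assumptions made for the example.

```python
# Illustrative sketch of linear attention -- NOT code from chengxiang/LinearTransformer.
# Linear attention replaces softmax(Q K^T) V with phi(Q) (phi(K)^T V),
# so the cost scales linearly rather than quadratically in sequence length.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, seq_len, dim).
    # phi(x) = elu(x) + 1 is one common positive feature map (an assumption here).
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    kv = torch.einsum("bsd,bse->bde", k, v)          # sum over positions: phi(k_s) v_s^T
    z = torch.einsum("bsd,bd->bs", q, k.sum(dim=1))  # per-query normalizer
    return torch.einsum("bsd,bde->bse", q, kv) / (z.unsqueeze(-1) + eps)

# Example usage: output has shape (2, 16, 8)
out = linear_attention(torch.randn(2, 16, 8), torch.randn(2, 16, 8), torch.randn(2, 16, 8))
```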
Alternatives and similar repositories for LinearTransformer
Users who are interested in LinearTransformer are comparing it to the libraries listed below
- ☆73 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data · ☆62 · Updated 2 years ago
- Neural Tangent Kernel Papers · ☆120 · Updated 11 months ago
- Official PyTorch implementation of NeuralSVD (ICML 2024) · ☆20 · Updated last year
- ☆241 · Updated last year
- ☆34 · Updated 2 years ago
- Welcome to the 'In Context Learning Theory' Reading Group · ☆30 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… · ☆20 · Updated last year
- Efficient empirical NTKs in PyTorch · ☆22 · Updated 3 years ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" · ☆108 · Updated 2 years ago
- ☆112 · Updated 10 months ago
- ☆51 · Updated last week
- Source code of "What can linearized neural networks actually say about generalization?" · ☆20 · Updated 4 years ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" · ☆63 · Updated 9 months ago
- Bayesian Low-Rank Adaptation for Large Language Models · ☆36 · Updated last year
- ☆20 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) · ☆13 · Updated last year
- Code and plots for "Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs" · ☆10 · Updated last year
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature · ☆175 · Updated 6 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… · ☆40 · Updated 2 years ago
- Sparse Autoencoder Training Library · ☆56 · Updated 7 months ago
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" · ☆33 · Updated last year
- Unofficial implementation of the Selective Attention Transformer · ☆20 · Updated last year
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule · ☆63 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch · ☆78 · Updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] · ☆43 · Updated 2 years ago
- nanoGPT-like codebase for LLM training · ☆113 · Updated last month
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) · ☆81 · Updated 2 years ago
- Codebase for Mechanistic Mode Connectivity · ☆14 · Updated 2 years ago