zyushun / hessian-spectrum
Code for the paper: Why Transformers Need Adam: A Hessian Perspective
☆59 · Updated 3 months ago
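For context, the repository studies the eigenvalue spectrum of the training-loss Hessian. Below is a minimal sketch, not the repository's actual code, of how the top Hessian eigenvalue can be estimated in PyTorch without materializing the Hessian, using Hessian-vector products and power iteration; the helpers `hvp` and `top_eigenvalue` are hypothetical names introduced here for illustration.

```python
import torch

def hvp(loss, params, vec):
    # First backward pass; create_graph=True keeps the graph so the
    # gradient itself can be differentiated a second time.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # <grad, vec>; differentiating this scalar w.r.t. params yields H @ vec.
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params, retain_graph=True)

def top_eigenvalue(loss, params, iters=20):
    # Power iteration with a random unit-norm probe vector shaped like params.
    vec = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((v ** 2).sum() for v in vec))
    vec = [v / norm for v in vec]
    eig = 0.0
    for _ in range(iters):
        hv = hvp(loss, params, vec)
        # Rayleigh quotient <vec, H vec> for the current unit probe vector.
        eig = sum((h * v).sum() for h, v in zip(hv, vec)).item()
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        vec = [h / norm for h in hv]
    return eig
```

With a freshly computed `loss` and `params = [p for p in model.parameters() if p.requires_grad]`, `top_eigenvalue(loss, params)` returns an estimate of the largest-magnitude Hessian eigenvalue. Papers on Hessian spectra typically estimate the full spectral density with stochastic Lanczos quadrature, which this sketch does not implement.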
Alternatives and similar repositories for hessian-spectrum
Users interested in hessian-spectrum are comparing it to the repositories listed below.
- ☆68 · Updated 6 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆13 · Updated 11 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆31 · Updated 7 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆105 · Updated 11 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆108 · Updated last month
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 7 months ago
- Stick-breaking attention ☆57 · Updated last week
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆35 · Updated 2 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆55 · Updated 5 months ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆22 · Updated 4 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 9 months ago
- ☆85 · Updated last year
- ☆53 · Updated last year
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ☆68 · Updated 10 months ago
- ☆33 · Updated 3 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆16 · Updated 7 months ago
- ☆32 · Updated last year
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆59 · Updated 8 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆163 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆88 · Updated 8 months ago
- ☆76 · Updated 4 months ago
- The implementation for MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆55 · Updated 8 months ago
- Code for "Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining" ☆18 · Updated 2 months ago