zyushun / hessian-spectrum
Code for the paper: Why Transformers Need Adam: A Hessian Perspective
☆63 · Updated 9 months ago
Alternatives and similar repositories for hessian-spectrum
Users interested in hessian-spectrum are comparing it to the repositories listed below.
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆38 · Updated last year
- ☆73 · Updated last year
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆78 · Updated 11 months ago
- Stick-breaking attention ☆62 · Updated 5 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- ☆33 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆187 · Updated last year
- Official PyTorch implementation of DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs (ICML 2025 Oral) ☆54 · Updated 6 months ago
- Physics of Language Models, Part 4 ☆280 · Updated 2 weeks ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆74 · Updated last year
- [ICML '24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆120 · Updated 5 months ago
- ☆50 · Updated last week
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆25 · Updated 10 months ago
- ☆110 · Updated this week
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆20 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML '24 Oral) ☆13 · Updated last year
- nanoGPT-like codebase for LLM training ☆113 · Updated last month
- ☆53 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆148 · Updated 5 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆37 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated last week
- ☆107 · Updated last year
- ☆101 · Updated 10 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆102 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆84 · Updated 5 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆69 · Updated last year
- ☆241 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year