zyushun / hessian-spectrum
Code for the paper: Why Transformers Need Adam: A Hessian Perspective
☆64 · Updated 7 months ago
Alternatives and similar repositories for hessian-spectrum
Users interested in hessian-spectrum are comparing it to the libraries listed below.
- Stick-breaking attention ☆61 · Updated 3 months ago
- ☆103 · Updated last year
- Physics of Language Models, Part 4 ☆250 · Updated 2 months ago
- ☆71 · Updated 10 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆35 · Updated 11 months ago
- ☆53 · Updated last year
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆131 · Updated last month
- Code accompanying the paper "Massive Activations in Large Language Models" ☆184 · Updated last year
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆78 · Updated 9 months ago
- ☆41 · Updated this week
- ☆240 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆69 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆103 · Updated 2 weeks ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆68 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆131 · Updated 3 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆197 · Updated 4 months ago
- ☆93 · Updated 8 months ago
- nanoGPT-like codebase for LLM training ☆109 · Updated 5 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated 2 years ago
- 🔥 A minimal training framework for scaling FLA models ☆266 · Updated last month
- ☆34 · Updated 7 months ago
- Some preliminary explorations of Mamba's context scaling. ☆216 · Updated last year
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 8 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆81 · Updated 3 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆18 · Updated 11 months ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆111 · Updated 3 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- ☆50 · Updated last year