zyushun / hessian-spectrum
Code for the paper "Why Transformers Need Adam: A Hessian Perspective"
☆60 · Updated 4 months ago
Alternatives and similar repositories for hessian-spectrum
Users interested in hessian-spectrum are comparing it to the libraries listed below.
- ☆234 · Updated last year
- Stick-breaking attention ☆58 · Updated last month
- Code accompanying the paper "Massive Activations in Large Language Models" ☆173 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆70 · Updated last year
- ☆70 · Updated 7 months ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆109 · Updated last month
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 5 months ago
- 🔥 A minimal training framework for scaling FLA models ☆220 · Updated last month
- ☆147 · Updated 2 years ago
- ☆53 · Updated last year
- Physics of Language Models, Part 4 ☆204 · Updated last week
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆118 · Updated last month
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆81 · Updated 9 months ago
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆68 · Updated 5 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆92 · Updated last week
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆56 · Updated 10 months ago
- ☆88 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆32 · Updated 9 months ago
- ☆50 · Updated last year
- Some preliminary explorations of Mamba's context scaling ☆216 · Updated last year
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆66 · Updated 4 months ago
- The code for creating the iGSM datasets in the papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces…" ☆61 · Updated 6 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆103 · Updated 3 weeks ago
- nanoGPT-like codebase for LLM training ☆102 · Updated 2 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆114 · Updated 4 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆33 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆70 · Updated 3 weeks ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 10 months ago
- ☆106 · Updated last year