zyushun / hessian-spectrum
Code for the paper: Why Transformers Need Adam: A Hessian Perspective
☆62 · Updated 6 months ago
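The paper analyzes the Hessian spectrum of Transformer losses, which is typically estimated without ever materializing the Hessian, via Hessian-vector products. Below is a minimal PyTorch sketch of the simplest such estimator, power iteration for the top eigenvalue (the function names and iteration count are illustrative assumptions, not the repository's actual code; the full spectral analyses in work like this usually use Lanczos-type methods built on the same HVP primitive):

```python
# Minimal sketch (assumption: not the repository's actual implementation).
# Estimates the top Hessian eigenvalue of a scalar loss w.r.t. model
# parameters using Hessian-vector products and power iteration.
import torch

def hvp(loss, params, vec):
    """Hessian-vector product H @ vec, computed with double backprop."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat @ vec, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def top_hessian_eigenvalue(loss, params, iters=20):
    n = sum(p.numel() for p in params)
    v = torch.randn(n, device=params[0].device)
    v = v / v.norm()
    eig = 0.0
    for _ in range(iters):
        hv = hvp(loss, params, v)
        eig = torch.dot(v, hv).item()   # Rayleigh quotient estimate
        v = hv / (hv.norm() + 1e-12)    # renormalize for the next iteration
    return eig
```

With a model and batch in hand, one would call something like `top_hessian_eigenvalue(criterion(model(x), y), [p for p in model.parameters() if p.requires_grad])` (a hypothetical usage, for illustration only).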
Alternatives and similar repositories for hessian-spectrum
Users interested in hessian-spectrum are comparing it to the libraries listed below.
- Stick-breaking attention ☆60 · Updated 2 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆179 · Updated last year
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆75 · Updated 8 months ago
- ☆99 · Updated last year
- Physics of Language Models, Part 4 ☆242 · Updated last month
- ☆84 · Updated 6 months ago
- Some preliminary explorations of Mamba's context scaling. ☆217 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- ☆70 · Updated 9 months ago
- ☆53 · Updated last year
- ☆238 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆99 · Updated last month
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆33 · Updated 10 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆124 · Updated 2 months ago
- ☆35 · Updated 6 months ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 6 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆80 · Updated 2 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆125 · Updated last month
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆111 · Updated 2 months ago
- Code for "Reasoning to Learn from Latent Thoughts"☆118Updated 5 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (see the sketch after this list). ☆70 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 · Updated 11 months ago
- Inference speed benchmark for "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆72 · Updated last year
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆115 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆73 · Updated 7 months ago
- 🔥 A minimal training framework for scaling FLA models ☆239 · Updated this week
- The official implementation for "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆49 · Updated 4 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆91 · Updated 8 months ago
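For the fused linear + cross-entropy entry above, the memory-saving idea is to never materialize the full `[N, vocab]` logits matrix. Here is a plain-PyTorch sketch of that chunking idea; the actual repository fuses it into a single Triton kernel, and the function name and chunk size below are illustrative assumptions:

```python
# Minimal sketch (assumption: not the repository's actual kernel). Computes
# mean cross-entropy of a linear head by chunking over rows, so only a
# [chunk, vocab] slice of logits exists at a time.
import torch
import torch.nn.functional as F

def chunked_linear_cross_entropy(hidden, weight, targets, chunk_size=1024):
    """hidden: [N, d], weight: [vocab, d], targets: [N] (long) -> mean CE."""
    total, n = hidden.new_zeros(()), hidden.shape[0]
    for start in range(0, n, chunk_size):
        h = hidden[start:start + chunk_size]   # [c, d]
        logits = h @ weight.T                  # [c, vocab], one chunk only
        total = total + F.cross_entropy(
            logits, targets[start:start + chunk_size], reduction="sum")
    return total / n
```

Note that autograd still saves each chunk's logits for the backward pass; a fused Triton kernel can instead compute the logits gradient inside the kernel, which this sketch does not attempt.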