lmsdss / LayerNorm-Scaling
Official PyTorch implementation of "The Curse of Depth in Large Language Models" by Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu
☆ 56 · Updated last month
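For context: the paper observes that in Pre-LN transformers the variance of layer outputs grows with depth, leaving deep layers close to identity mappings, and counteracts this by multiplying the output of the ℓ-th layer's LayerNorm by 1/√ℓ. Below is a minimal PyTorch sketch of that idea; the `ScaledLayerNorm` class name and its constructor arguments are illustrative, not the repository's actual API.

```python
import math

import torch
import torch.nn as nn


class ScaledLayerNorm(nn.Module):
    """LayerNorm whose output is scaled by 1 / sqrt(layer_index).

    Sketch of the LayerNorm Scaling idea from "The Curse of Depth in
    Large Language Models"; names and arguments here are illustrative,
    not the repository's actual API.
    """

    def __init__(self, hidden_size: int, layer_index: int, eps: float = 1e-5):
        super().__init__()
        if layer_index < 1:
            raise ValueError("layer_index is 1-based")
        self.norm = nn.LayerNorm(hidden_size, eps=eps)
        # Deeper layers get a smaller scale, so output variance
        # stops growing with depth.
        self.scale = 1.0 / math.sqrt(layer_index)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x) * self.scale


# Usage: stands in for the pre-attention/pre-MLP LayerNorm of layer 12.
ln = ScaledLayerNorm(hidden_size=768, layer_index=12)
x = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
y = ln(x)  # same shape, scaled by 1/sqrt(12)
```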
Alternatives and similar repositories for LayerNorm-Scaling
Users interested in LayerNorm-Scaling are comparing it to the repositories listed below
- ☆ 85 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆ 83 · Updated 10 months ago
- ☆ 104 · Updated 11 months ago
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆ 31 · Updated 2 months ago
- ☆ 69 · Updated last year
- ☆ 35 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆ 93 · Updated 3 months ago
- Some preliminary explorations of Mamba's context scaling. ☆ 217 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆ 100 · Updated last month
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆ 53 · Updated 9 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆ 35 · Updated 6 months ago
- Language models scale reliably with over-training and on downstream tasks ☆ 99 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆ 179 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆ 179 · Updated 3 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆ 34 · Updated 3 weeks ago
- Stick-breaking attention ☆ 60 · Updated 2 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆ 80 · Updated 2 months ago
- Muon fsdp 2 ☆ 43 · Updated last month
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆ 130 · Updated last week
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…) ☆ 116 · Updated last year
- Official PyTorch implementation and models for paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆ 92 · Updated 3 weeks ago
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆ 54 · Updated 4 months ago
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆ 30 · Updated 4 months ago
- ☆ 32 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆ 48 · Updated 5 months ago
- ☆ 34 · Updated 8 months ago
- ☆ 19 · Updated 8 months ago
- Mixture of A Million Experts ☆ 47 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆ 124 · Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆ 81 · Updated 2 years ago