lmsdss / LayerNorm-Scaling
Official PyTorch Implementation of "The Curse of Depth in Large Language Models" by Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu
☆44 · Updated last month
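The paper's proposed fix, LayerNorm Scaling, scales the output of each Pre-LN layer's normalization by the inverse square root of its depth, damping the variance growth the paper attributes to Pre-LN. A minimal sketch of that idea (the class name and constructor signature are illustrative, not the repo's actual API):

```python
import torch
import torch.nn as nn

class ScaledLayerNorm(nn.Module):
    """Sketch of LayerNorm Scaling: LayerNorm output scaled by 1/sqrt(l),
    where l is the 1-based depth of the Transformer layer.
    Names here are illustrative; see the repo for the official code."""

    def __init__(self, hidden_size: int, layer_index: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        # Deeper layers get a smaller scale, counteracting variance blow-up.
        self.scale = 1.0 / (layer_index ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(x) * self.scale
```

In a Pre-LN block this module would replace the plain `nn.LayerNorm`, with `layer_index` running from 1 at the first layer to the model depth at the last.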
Alternatives and similar repositories for LayerNorm-Scaling
Users interested in LayerNorm-Scaling are comparing it to the repositories listed below
- ☆79 · Updated 10 months ago
- ☆17 · Updated 5 months ago
- Official PyTorch Implementation for "Vision-Language Models Create Cross-Modal Task Representations", ICML 2025 ☆27 · Updated last month
- The official implementation for "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆44 · Updated last month
- ☆33 · Updated 3 months ago
- Stick-breaking attention ☆57 · Updated last week
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆32 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- ☆32 · Updated 5 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆56 · Updated 3 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆29 · Updated 7 months ago
- ☆32 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 3 months ago
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆51 · Updated 6 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- ☆20 · Updated 11 months ago
- Codebase for "Instruction Following without Instruction Tuning" ☆34 · Updated 9 months ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆28 · Updated 2 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆105 · Updated 2 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆108 · Updated last month
- ☆51 · Updated 3 months ago
- ☆48 · Updated last year
- Mixture of A Million Experts ☆46 · Updated 10 months ago
- ☆18 · Updated 7 months ago
- DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling ☆32 · Updated 11 months ago
- ☆55 · Updated 11 months ago
- [ICML 2024] "Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity"; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 2 months ago
- [NAACL 2025] "A Closer Look into Mixture-of-Experts in Large Language Models" ☆52 · Updated 4 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆74 · Updated 7 months ago