zqOuO / GWT
☆13 · Updated 7 months ago
Alternatives and similar repositories for GWT
Users interested in GWT are comparing it to the libraries listed below.
- ☆34 · Updated 5 months ago
- Work in progress. ☆72 · Updated 2 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆91 · Updated 2 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- ☆11 · Updated 5 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆33 · Updated 10 months ago
- This repository contains code for the MicroAdam paper. ☆19 · Updated 8 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 4 months ago
- Unofficial Implementation of Selective Attention Transformer ☆17 · Updated 10 months ago
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆17 · Updated 10 months ago
- ☆37 · Updated 2 weeks ago
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT Model Pre-training ☆26 · Updated 2 months ago
- ☆81 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆174 · Updated 2 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated 2 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆19 · Updated last month
- ☆14 · Updated 11 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- ☆53 · Updated last year
- ☆40 · Updated 4 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆85 · Updated last year
- ☆56 · Updated 10 months ago
- ☆23 · Updated last month
- ☆123 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆70 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated last week
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆19 · Updated 2 months ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆37 · Updated last month