kyleliang919 / Super_Muon
☆47 · Updated last week
Alternatives and similar repositories for Super_Muon:
Users interested in Super_Muon are comparing it to the libraries listed below.
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812)☆26 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT☆82 · Updated 4 months ago
- ☆74 · Updated 7 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276☆27 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore☆26 · Updated 6 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere)☆91 · Updated 3 weeks ago
- Research implementation of Native Sparse Attention (2502.11089)☆53 · Updated last month
- GoldFinch and other hybrid transformer components☆45 · Updated 8 months ago
- A repository for research on medium-sized language models.☆76 · Updated 10 months ago
- A working implementation of DeepSeek MLA☆38 · Updated 2 months ago
- Work in progress.☆50 · Updated last week
- DPO, but faster 🚀☆40 · Updated 3 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion"☆69 · Updated last week
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated☆31 · Updated 7 months ago
- Here we will test various linear attention designs.☆60 · Updated 11 months ago
- Collection of autoregressive model implementations☆83 · Updated last month
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun☆44 · Updated 2 weeks ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,…☆44 · Updated 8 months ago
- EvaByte: Efficient Byte-level Language Models at Scale☆85 · Updated last week
- ☆50 · Updated 5 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models"☆59 · Updated 5 months ago
- My fork of Allen AI's OLMo for educational purposes.☆30 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs.☆103 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters☆125 · Updated 3 months ago
- Triton implementation of the HyperAttention algorithm☆47 · Updated last year
- Efficient Triton implementation of Native Sparse Attention.☆127 · Updated this week
- Code implementation, evaluations, documentation, links, and resources for the Min P paper☆28 · Updated 2 weeks ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM☆54 · Updated 11 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods.☆30 · Updated 3 months ago
- ☆32 · Updated 9 months ago