Haiyang-W / TokenFormer
[ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
★576 · Updated 9 months ago
Alternatives and similar repositories for TokenFormer
Users interested in TokenFormer are comparing it to the libraries listed below.
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★426 · Updated last week
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ★779 · Updated 3 months ago
- H-Net: Hierarchical Network with Dynamic Chunking ★778 · Updated last month
- [ICLR 2025 Oral] Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ★884 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425) ★422 · Updated 3 weeks ago
- ★302 · Updated 6 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ★440 · Updated 6 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ★367 · Updated 2 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ★291 · Updated 5 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ★369 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ★377 · Updated last month
- Helpful tools and examples for working with flex-attention ★1,053 · Updated this week
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★1,273 · Updated last year
- ★545 · Updated last month
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ★334 · Updated 7 months ago
- Muon is an optimizer for hidden layers in neural networks ★1,983 · Updated 4 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ★543 · Updated 5 months ago
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ★878 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ★231 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★926 · Updated 2 weeks ago
- [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model ★555 · Updated last month
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ★327 · Updated 5 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ★843 · Updated 3 weeks ago
- ★628 · Updated 7 months ago
- Annotated version of the Mamba paper ★490 · Updated last year
- Official PyTorch implementation for ICLR 2025 paper "Scaling up Masked Diffusion Models on Text" ★334 · Updated 10 months ago
- ★201 · Updated 11 months ago
- Some preliminary explorations of Mamba's context scaling. ★216 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ★355 · Updated 11 months ago (see the sketch after this list)
- Normalized Transformer (nGPT) ★192 · Updated 11 months ago (see the hypersphere sketch below)
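The memory-layers entry above is concrete enough to sketch: each token's hidden state scores a large table of trainable keys, only the top-k matching value slots are read back, so parameter count grows with the table while per-token FLOPs stay nearly flat. A minimal PyTorch sketch of that idea follows; the class name `MemoryLayer` and every hyperparameter are illustrative assumptions, not that repository's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Sketch of a trainable key-value memory: parameters scale with
    num_keys, but each token only reads from its top-k slots."""

    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * dim ** -0.5)
        self.values = nn.Parameter(torch.randn(num_keys, dim) * dim ** -0.5)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -- score every token against all keys
        scores = x @ self.keys.T                  # (batch, seq, num_keys)
        w, idx = scores.topk(self.topk, dim=-1)   # sparse slot selection
        w = F.softmax(w, dim=-1)                  # weights over chosen slots
        v = self.values[idx]                      # (batch, seq, topk, dim)
        return (w.unsqueeze(-1) * v).sum(dim=-2)  # weighted value readout
```

Real implementations (product-key memories in particular) factor the key table into two half-dimension codebooks so selection costs roughly O(√N) rather than the O(N) dense scoring used here for clarity.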
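Likewise, the nGPT entries ("learning entirely on the hypersphere") rest on one constraint: weights and representations are kept unit-norm, so matrix products become cosine similarities and each optimizer step is followed by a retraction back onto the sphere. A minimal sketch of that constraint, assuming nothing about either repository's interfaces:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def l2norm(t: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # project onto the unit hypersphere
    return F.normalize(t, p=2, dim=dim)

class HypersphereLinear(nn.Module):
    """Sketch of a linear layer whose weight rows live on the unit
    sphere; each output entry is a cosine between input and a row."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.weight = nn.Parameter(l2norm(torch.randn(dim_out, dim_in)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # normalize both operands so the product's entries are cosines
        return l2norm(x) @ l2norm(self.weight).T

    @torch.no_grad()
    def renormalize(self) -> None:
        # call after each optimizer step to retract weights onto the sphere
        self.weight.copy_(l2norm(self.weight))
```

Calling `renormalize()` after every `optimizer.step()` is what keeps training "on the hypersphere"; nGPT additionally normalizes hidden states after residual updates and adds learned per-dimension scaling, which this sketch omits.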