Haiyang-W / TokenFormer
[ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
★ 547 · Updated 2 months ago
Alternatives and similar repositories for TokenFormer:
Users interested in TokenFormer are comparing it to the libraries listed below.
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★ 405 · Updated 8 months ago
- Muon optimizer: +>30% sample efficiency with <3% wallclock overhead ★ 575 · Updated 3 weeks ago
- Implementation of the sparse attention pattern proposed by the Deepseek team in their "Native Sparse Attention" paper ★ 588 · Updated 3 weeks ago
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) ★ 359 · Updated this week
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ★ 1,170 · Updated 9 months ago
- Helpful tools and examples for working with flex-attention ★ 720 · Updated this week
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models ★ 519 · Updated this week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ★ 404 · Updated this week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ★ 279 · Updated 3 weeks ago
- ★ 262 · Updated last month
- Annotated version of the Mamba paper ★ 481 · Updated last year
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ★ 803 · Updated 2 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★ 860 · Updated last month
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ★ 212 · Updated last week
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ★ 288 · Updated 11 months ago
- Pytorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from MetaAI ★ 1,052 · Updated 3 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ★ 327 · Updated 10 months ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ★ 315 · Updated 3 months ago
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ★ 282 · Updated 2 weeks ago
- When it comes to optimizers, it's always better to be safe than sorry ★ 217 · Updated 2 weeks ago
- ★ 516 · Updated this week
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ★ 166 · Updated last week
- This repo contains the code for 1D tokenizer and generator ★ 821 · Updated 3 weeks ago
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) ★ 552 · Updated last year
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ★ 665 · Updated 4 months ago
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ★ 240 · Updated 3 months ago
- Normalized Transformer (nGPT) ★ 167 · Updated 4 months ago
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ★ 510 · Updated 5 months ago
- Implementation of Autoregressive Diffusion in Pytorch ★ 370 · Updated 5 months ago
- ★ 289 · Updated 4 months ago