lucidrains / PEER-pytorch
PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind.
☆123 · Updated 8 months ago
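For orientation, here is a minimal usage sketch of a PEER layer used as a drop-in feedforward replacement. The import path and constructor arguments (`dim`, `heads`, `num_experts`, `num_experts_per_head`, `dim_key`) are assumptions based on the hyperparameters described in the paper, not a confirmed API; check the repository README for the actual interface.

```python
import torch
from PEER_pytorch import PEER  # assumed import path; see the repo's README

# Hypothetical constructor arguments mirroring the paper's setup:
# a large pool of tiny experts, with top-k experts retrieved per head
# via product-key lookup.
peer = PEER(
    dim=512,                    # model (residual stream) width
    heads=8,                    # number of retrieval heads
    num_experts=1_000_000,      # the paper scales to a million experts
    num_experts_per_head=16,    # experts retrieved per head
    dim_key=128,                # product-key sub-key dimension
)

x = torch.randn(2, 1024, 512)   # (batch, sequence, dim)
out = peer(x) + x               # PEER as a residual feedforward block

assert out.shape == x.shape
```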
Alternatives and similar repositories for PEER-pytorch
Users interested in PEER-pytorch are comparing it to the libraries listed below.
- Mixture of A Million Experts · ☆44 · Updated 9 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch · ☆166 · Updated 4 months ago
- Griffin MQA + Hawk Linear RNN Hybrid · ☆86 · Updated last year
- Implementation of Infini-Transformer in PyTorch · ☆110 · Updated 4 months ago
- Some preliminary explorations of Mamba's context scaling · ☆213 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆97 · Updated 7 months ago
- Understand and test language model architectures on synthetic tasks · ☆195 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs · ☆120 · Updated this week
- Token Omission Via Attention · ☆126 · Updated 7 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆232 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆154 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 · ☆115 · Updated 4 months ago
- Some personal experiments around routing tokens to different autoregressive attention branches, akin to mixture-of-experts · ☆118 · Updated 6 months ago
- Supporting PyTorch FSDP for optimizers · ☆80 · Updated 5 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" · ☆99 · Updated last month
- Normalized Transformer (nGPT) · ☆176 · Updated 5 months ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE) · ☆144 · Updated 4 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun · ☆49 · Updated 2 months ago
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning · ☆74 · Updated 3 weeks ago
- [ICLR 2025] Official PyTorch implementation of "Gated Delta Networks: Improving Mamba2 with Delta Rule" · ☆161 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆231 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆126 · Updated 5 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" · ☆216 · Updated last week
- PyTorch implementation of models from the Zamba2 series · ☆180 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ☆92 · Updated 3 weeks ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) · ☆53 · Updated 2 months ago
- RWKV-7: Surpassing GPT · ☆84 · Updated 5 months ago