lucidrains / PEER-pytorch
PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind
★119 · Updated 6 months ago
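For orientation, here is a minimal sketch of the product-key expert retrieval idea PEER is built on, written from the paper's description rather than from this repo's actual API; the class name, arguments, and single-neuron expert shapes below are illustrative assumptions, not lucidrains' implementation.

```python
# Minimal sketch of PEER-style product-key expert retrieval.
# Assumptions (not from the repo): class/arg names, single-head routing,
# and single-neuron experts (one down- and one up-projection vector each).
import math
import torch
import torch.nn.functional as F
from torch import nn

class PEERSketch(nn.Module):
    def __init__(self, dim, num_experts=1024, topk=16):
        super().__init__()
        assert math.isqrt(num_experts) ** 2 == num_experts, "num_experts must be a perfect square"
        self.n_sub = math.isqrt(num_experts)   # sqrt(N) sub-keys per half
        self.topk = topk
        self.to_query = nn.Linear(dim, dim)
        # two sub-key tables; the score of expert (i, j) is s1[i] + s2[j]
        self.keys1 = nn.Parameter(torch.randn(self.n_sub, dim // 2))
        self.keys2 = nn.Parameter(torch.randn(self.n_sub, dim // 2))
        # each expert is a single neuron: a down- and an up-projection vector
        self.w_down = nn.Embedding(num_experts, dim)
        self.w_up = nn.Embedding(num_experts, dim)

    def forward(self, x):                       # x: (batch, seq, dim)
        q = self.to_query(x)
        q1, q2 = q.chunk(2, dim=-1)             # split query into two halves
        s1 = q1 @ self.keys1.t()                # (b, n, sqrt(N))
        s2 = q2 @ self.keys2.t()
        # prune each half to its top-k, then score only the k*k product keys
        v1, i1 = s1.topk(self.topk, dim=-1)
        v2, i2 = s2.topk(self.topk, dim=-1)
        scores = v1.unsqueeze(-1) + v2.unsqueeze(-2)          # (b, n, k, k)
        scores, flat = scores.flatten(-2).topk(self.topk, dim=-1)
        # recover flat expert ids: expert_id = i1 * sqrt(N) + i2
        idx = i1.gather(-1, flat // self.topk) * self.n_sub \
            + i2.gather(-1, flat % self.topk)
        # run the k retrieved single-neuron experts, mix by softmaxed score
        h = torch.einsum('bnd,bnkd->bnk', x, self.w_down(idx))
        h = F.gelu(h) * scores.softmax(dim=-1)
        return torch.einsum('bnk,bnkd->bnd', h, self.w_up(idx))
```

Usage would look like `out = PEERSketch(dim=512)(torch.randn(2, 8, 512))`. The point of the product keys is that two `sqrt(N)`-sized sub-key scorings replace one dense scoring over all `N` experts, so retrieval cost grows with `sqrt(N)` rather than `N`, which is what makes a million experts tractable.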
Alternatives and similar repositories for PEER-pytorch:
Users interested in PEER-pytorch are comparing it to the libraries listed below
- Mixture of A Million Experts ★41 · Updated 7 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ★158 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ★222 · Updated 2 weeks ago
- Understand and test language model architectures on synthetic tasks. ★183 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ★96 · Updated 5 months ago
- Implementation of Infini-Transformer in Pytorch ★109 · Updated last month
- Some preliminary explorations of Mamba's context scaling. ★213 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ★149 · Updated 2 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ★85 · Updated 10 months ago
- ★181 · Updated this week
- ★72 · Updated 6 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ★93 · Updated 6 months ago
- Supporting PyTorch FSDP for optimizers ★77 · Updated 2 months ago
- Normalized Transformer (nGPT) ★156 · Updated 3 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ★98 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ★96 · Updated 11 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ★196 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ★218 · Updated last month
- ★75 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ★117 · Updated 3 months ago
- Token Omission Via Attention ★124 · Updated 4 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ★122 · Updated 10 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ★137 · Updated last week
- A MAD laboratory to improve AI architecture designs 🧪 ★105 · Updated 2 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ★130 · Updated 10 months ago
- Here we will test various linear attention designs. ★59 · Updated 10 months ago
- ★84 · Updated 5 months ago
- ★91 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ★99 · Updated 3 months ago