Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch
☆345 · Apr 2, 2025 · Updated last year
Alternatives and similar repositories for soft-moe-pytorch
Users interested in soft-moe-pytorch are comparing it to the libraries listed below.
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch · ☆383 · Jun 17, 2024 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf); see the routing sketch after this list · ☆83 · Oct 5, 2023 · Updated 2 years ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" · ☆70 · Aug 22, 2023 · Updated 2 years ago
- ☆717 · Dec 6, 2025 · Updated 5 months ago
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models · ☆859 · Sep 13, 2023 · Updated 2 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts · ☆122 · Oct 17, 2024 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch · ☆231 · Sep 6, 2024 · Updated last year
- Fork of HyenaDNA, a long-range genomic foundation model built with Hyena · ☆10 · Aug 14, 2023 · Updated 2 years ago
- Implementation of GateLoop Transformer in Pytorch and Jax · ☆92 · Jun 18, 2024 · Updated last year
- A collection of AWESOME things about mixture-of-experts · ☆1,275 · Dec 8, 2024 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto · ☆57 · May 17, 2024 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT · ☆227 · Mar 25, 2026 · Updated last month
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); see the gating sketch after this list · ☆1,242 · Apr 19, 2024 · Updated 2 years ago
- A Transformer made of rotation-equivariant attention using Vector Neurons · ☆102 · Aug 1, 2023 · Updated 2 years ago
- My own attempt at a long context genomics model, leveraging recent advances in long context attention modeling (Flash Attention + other h… · ☆54 · Jul 2, 2023 · Updated 2 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction · ☆32 · Jun 19, 2022 · Updated 3 years ago
- A fast MoE impl for PyTorch · ☆1,846 · Feb 10, 2025 · Updated last year
- Implementation of Rotary Embeddings, from the RoFormer paper, in Pytorch · ☆811 · Jan 30, 2026 · Updated 3 months ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k · ☆47 · Jul 16, 2023 · Updated 2 years ago
- Implementation of Recurrent Memory Transformer, a NeurIPS 2022 paper, in Pytorch · ☆423 · Jan 6, 2025 · Updated last year
- Implementation of "Towards Understanding Mixture of Experts in Deep Learning", NeurIPS 2022 · ☆10 · Jan 6, 2023 · Updated 3 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models · ☆30 · May 31, 2022 · Updated 3 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch · ☆104 · Oct 10, 2023 · Updated 2 years ago
- ☆77 · Apr 29, 2024 · Updated 2 years ago
- ☆95 · Apr 3, 2023 · Updated 3 years ago
- [ECCV 2024] The official implementation of "Stitched ViTs are Flexible Vision Backbones" · ☆29 · Jan 23, 2024 · Updated 2 years ago
- A curated reading list of research in Mixture-of-Experts (MoE) · ☆663 · Oct 30, 2024 · Updated last year
- Self-contained Pytorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise · ☆40 · Aug 29, 2024 · Updated last year
- ☆145 · Jul 21, 2024 · Updated last year
- ☆22 · Oct 22, 2025 · Updated 6 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,678 · Mar 8, 2024 · Updated 2 years ago
- Vector (and Scalar) Quantization, in Pytorch · ☆3,920 · Apr 17, 2026 · Updated 3 weeks ago
- A simple Transformer where the softmax has been replaced with normalization · ☆20 · Sep 11, 2020 · Updated 5 years ago
- Implementation of a holodeck, written in Pytorch · ☆19 · Nov 1, 2023 · Updated 2 years ago
- Implementation of E(n)-Transformer, which incorporates attention mechanisms into Welling's E(n)-Equivariant Graph Neural Network · ☆226 · Jun 2, 2024 · Updated last year
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 · ☆988 · Apr 11, 2026 · Updated 3 weeks ago
- Implementation of the Llama architecture with RLHF + Q-learning · ☆170 · Feb 1, 2025 · Updated last year
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆136 · Nov 1, 2025 · Updated 6 months ago
- Implementation of MetaFormer, but in an autoregressive manner · ☆26 · Jun 21, 2022 · Updated 3 years ago
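
For context on what the Soft MoE entries above implement: in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf), every token is softly dispatched to every expert slot, so the layer is fully differentiable and needs no discrete top-k routing. Below is a minimal, hedged sketch of that idea; the names (`SoftMoE`, `slot_embeds`, `slots_per_expert`) are illustrative assumptions, not the API of soft-moe-pytorch or any repository listed here.

```python
# Minimal sketch of Soft MoE routing (Puigcerver et al., arXiv:2308.00951).
# All names here are illustrative, not the API of any listed repo.
import torch
from torch import nn

class SoftMoE(nn.Module):
    def __init__(self, dim, num_experts=4, slots_per_expert=1, hidden_mult=4):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # one learned query vector per slot; tokens are softly dispatched to slots
        self.slot_embeds = nn.Parameter(torch.randn(num_experts * slots_per_expert, dim))
        # each expert is a small feedforward block
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim * hidden_mult), nn.GELU(),
                          nn.Linear(dim * hidden_mult, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                      # x: (batch, tokens, dim)
        logits = torch.einsum('bnd,md->bnm', x, self.slot_embeds)
        dispatch = logits.softmax(dim=1)       # normalize over tokens, per slot
        combine = logits.softmax(dim=2)        # normalize over slots, per token
        slots = torch.einsum('bnm,bnd->bmd', dispatch, x)  # (batch, slots, dim)
        # run each expert over its own chunk of slots
        slots = slots.view(x.shape[0], self.num_experts, self.slots_per_expert, -1)
        outs = torch.stack([expert(slots[:, i]) for i, expert in enumerate(self.experts)], dim=1)
        outs = outs.view(x.shape[0], -1, x.shape[-1])       # back to (batch, slots, dim)
        return torch.einsum('bnm,bmd->bnd', combine, outs)  # (batch, tokens, dim)

# usage: every token contributes to every slot, so gradients flow to all experts
x = torch.randn(2, 16, 64)
print(SoftMoE(dim=64)(x).shape)  # torch.Size([2, 16, 64])
```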
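By contrast, the sparsely-gated MoE entries above (Shazeer et al., https://arxiv.org/abs/1701.06538) route each token to a small top-k subset of experts. Here is a hedged sketch of that gating alone, omitting the paper's noise term and load-balancing loss; the names (`TopKGate`, `w_gate`) are again illustrative assumptions.

```python
# Minimal sketch of top-k gating (Shazeer et al., arXiv:1701.06538),
# simplified: no noise term, no load-balancing auxiliary loss.
import torch
from torch import nn

class TopKGate(nn.Module):
    def __init__(self, dim, num_experts, k=2):
        super().__init__()
        self.k = k
        self.w_gate = nn.Linear(dim, num_experts, bias=False)

    def forward(self, x):                         # x: (tokens, dim)
        logits = self.w_gate(x)                   # (tokens, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        # softmax only over the selected experts; the rest get exactly zero weight
        weights = torch.zeros_like(logits).scatter(
            -1, topk_idx, topk_vals.softmax(dim=-1))
        return weights, topk_idx

gate = TopKGate(dim=64, num_experts=8, k=2)
w, idx = gate(torch.randn(10, 64))
print(w.shape, (w > 0).sum(dim=-1))  # each token routes to exactly 2 experts
```

The zeroed weights are what make this routing sparse: only the k selected experts run per token, which is how these libraries grow parameter count without growing per-token compute.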