PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)
☆83 · Oct 5, 2023 · Updated 2 years ago
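Soft MoE, as described in the paper linked above, replaces hard top-k routing with two softmax-weighted mixes: every slot receives a convex combination of all input tokens (dispatch), each expert processes its slots, and every output token is a convex combination of all slot outputs (combine). A minimal NumPy sketch of that routing math (the function names, shapes, and toy experts here are illustrative assumptions, not this repository's actual API):

```python
import numpy as np

def softmax(z, axis):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe(x, phi, experts, slots_per_expert=1):
    """Soft MoE routing for one sequence.

    x:       (tokens, dim) input tokens
    phi:     (dim, num_slots) learnable per-slot parameters
    experts: list of callables, each mapping (slots_per_expert, dim) -> same shape
    """
    logits = x @ phi                 # (tokens, num_slots)
    d = softmax(logits, axis=0)      # dispatch: softmax over tokens, per slot
    c = softmax(logits, axis=1)      # combine: softmax over slots, per token
    slots = d.T @ x                  # each slot is a weighted mix of all tokens
    outs = np.concatenate([
        f(slots[i * slots_per_expert:(i + 1) * slots_per_expert])
        for i, f in enumerate(experts)
    ])                               # (num_slots, dim) expert outputs
    return c @ outs                  # each token is a weighted mix of all slots
```

Because both mixes are dense softmaxes, the layer is fully differentiable and no tokens are dropped, unlike sparse top-k routers.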
Alternatives and similar repositories for soft-mixture-of-experts
Users interested in soft-mixture-of-experts are comparing it to the libraries listed below.
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch ☆345 · Apr 2, 2025 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆381 · Jun 17, 2024 · Updated last year
- Template repo for Python projects, especially those focusing on machine learning and/or deep learning. ☆15 · Jan 14, 2026 · Updated 3 months ago
- ☆715 · Dec 6, 2025 · Updated 4 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Oct 11, 2024 · Updated last year
- Unofficial Implementation of Selective Attention Transformer ☆21 · Oct 31, 2024 · Updated last year
- A collection of AWESOME things about mixture-of-experts ☆1,274 · Dec 8, 2024 · Updated last year
- Implementation of the MixCE method described in the ACL 2023 paper by Zhang et al. ☆20 · May 29, 2023 · Updated 2 years ago
- ☆20 · Oct 31, 2022 · Updated 3 years ago
- PyTorch implementation of moe, which stands for mixture of experts ☆53 · Feb 11, 2021 · Updated 5 years ago
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation ☆19 · Nov 28, 2022 · Updated 3 years ago
- [ICCV-2023] Heterogeneous Forgetting Compensation for Class-Incremental Learning ☆12 · Dec 4, 2023 · Updated 2 years ago
- [EMNLP'24] Code and data for the paper "Ladder: A Model-Agnostic Framework Boosting LLM-based Machine Translation to the Next Level" ☆24 · Jun 29, 2024 · Updated last year
- ☆24 · Aug 2, 2024 · Updated last year
- [ICLR 2024 Spotlight] Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Communi… ☆11 · Mar 29, 2024 · Updated 2 years ago
- sigma-MoE layer ☆21 · Jan 5, 2024 · Updated 2 years ago
- Experiments to assess SPADE on different LLM pipelines. ☆17 · Apr 7, 2024 · Updated 2 years ago
- List of papers on Hallucination in LMMs ☆10 · Nov 29, 2023 · Updated 2 years ago
- Implementation of "Towards Understanding Mixture of Experts in Deep Learning", NeurIPS 2022 ☆10 · Jan 6, 2023 · Updated 3 years ago
- Code for the paper "PeFoMed: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering" ☆62 · Dec 21, 2025 · Updated 3 months ago
- [IEEE TNSRE] Mixture of Experts for EEG-Based Seizure Subtype Classification ☆12 · Aug 20, 2024 · Updated last year
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆35 · Dec 12, 2023 · Updated 2 years ago
- ☆35 · Aug 23, 2023 · Updated 2 years ago
- PyTorch-based adaptive deformable convolution ☆18 · Jun 26, 2021 · Updated 4 years ago
- ☆14 · May 27, 2024 · Updated last year
- A curated reading list of research in Mixture-of-Experts (MoE). ☆663 · Oct 30, 2024 · Updated last year
- ☆95 · Apr 3, 2023 · Updated 3 years ago
- Optimize neuro-centric parameters instead of weights to solve RL tasks ☆14 · Oct 2, 2023 · Updated 2 years ago
- [NeurIPS 2024 Spotlight] Code for "Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement" ☆20 · Jan 26, 2025 · Updated last year
- [ICCV '23] MRM: Masked Relation Modeling for Medical Image Pre-Training with Genetics ☆10 · Oct 28, 2024 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Nov 24, 2023 · Updated 2 years ago
- This repository hosts code for converting the original MLP-Mixer models (JAX) to TensorFlow. ☆15 · Sep 29, 2021 · Updated 4 years ago
- ☆10 · Mar 15, 2024 · Updated 2 years ago
- ☆19 · Jun 10, 2024 · Updated last year
- MENTOR is a highly efficient visual RL algorithm that excels in both simulation and real-world complex robotic learning tasks. ☆27 · Jul 9, 2025 · Updated 9 months ago
- ☆15 · Dec 9, 2024 · Updated last year
- Learning generative models with Sinkhorn Loss ☆31 · Nov 9, 2018 · Updated 7 years ago
- Implementation of Switch Transformers from the paper "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆138 · Apr 13, 2026 · Updated last week
- Code for the paper "Learning Heterogeneous Strategies via Graph-based Multi-agent Reinforcement Learning in Mixed Cooperative-Competitive … ☆16 · Jul 17, 2021 · Updated 4 years ago