Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"
☆39 · Jun 11, 2025 · Updated 10 months ago
Alternatives and similar repositories for moe
Users interested in moe are comparing it to the libraries listed below.
- sigma-MoE layer ☆21 · Jan 5, 2024 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆101 · Sep 30, 2024 · Updated last year
- ☆17 · Jun 11, 2025 · Updated 10 months ago
- An implementation of DreamerV2 written in JAX, with support for running multiple random seeds of an experiment on a single GPU. ☆18 · Jan 16, 2023 · Updated 3 years ago
- Probabilistic inference for models of behaviour ☆13 · Mar 5, 2026 · Updated last month
- ☆91 · Aug 18, 2024 · Updated last year
- ☆14 · Oct 7, 2022 · Updated 3 years ago
- Bayesian model reduction for probabilistic machine learning ☆11 · Jul 3, 2025 · Updated 9 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆57 · Aug 20, 2024 · Updated last year
- ☆21 · Oct 22, 2025 · Updated 5 months ago
- Simplistic PyTorch implementation of Dreamer-RL ☆20 · May 7, 2025 · Updated 11 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆274 · Oct 3, 2025 · Updated 6 months ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Jun 11, 2025 · Updated 10 months ago
- Map (deep learning) model weights between different model implementations. ☆19 · Mar 9, 2026 · Updated last month
- ☆16 · Dec 9, 2023 · Updated 2 years ago
- ☆18 · Nov 25, 2022 · Updated 3 years ago
- Mixture of Attention Heads ☆52 · Oct 10, 2022 · Updated 3 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆281 · Nov 3, 2023 · Updated 2 years ago
- [AAAI 2021 Workshop] The official repository for the LST-MAP model for few-shot image classification. ☆13 · Feb 12, 2021 · Updated 5 years ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆343 · Feb 23, 2025 · Updated last year
- Neural Graphical Models are neural-network-based graphical models that offer richer representation and faster inference & sampling ☆30 · Aug 12, 2025 · Updated 8 months ago
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" ☆22 · Feb 14, 2024 · Updated 2 years ago
- Seamless Voice Interactions with LLMs ☆12 · Oct 28, 2023 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆29 · Apr 17, 2024 · Updated 2 years ago
- A flexible, fast and scalable Python library for Self-Organizing Maps ☆16 · Aug 9, 2025 · Updated 8 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Aug 6, 2023 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Feb 28, 2023 · Updated 3 years ago
- ☆22 · Aug 27, 2023 · Updated 2 years ago
- The official repository for 'CharacterBERT and Self-Teaching for Improving the Robustness of Dense Retrievers on Queries with Typos', SIGI… ☆16 · May 4, 2022 · Updated 3 years ago
- GoldFinch and other hybrid transformer components ☆46 · Jul 20, 2024 · Updated last year
- Easily serialize dataclasses to and from tensors (PyTorch, NumPy) ☆18 · Apr 10, 2021 · Updated 5 years ago
- ☆21 · Jul 1, 2021 · Updated 4 years ago
- [NAACL 2024] A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Nov 26, 2023 · Updated 2 years ago
- An unofficial PyTorch implementation of the "A Sliced Wasserstein Loss for Neural Texture Synthesis" paper [CVPR 2021]. ☆14 · Nov 10, 2021 · Updated 4 years ago
- 🧶 Minimal PyTorch Soft Actor Critic (SAC) implementation ☆39 · Feb 19, 2022 · Updated 4 years ago
- ☆12 · Mar 17, 2026 · Updated 3 weeks ago
- Minimal A2C/A3C example of an LSTM-based meta-learner. ☆13 · Feb 2, 2021 · Updated 5 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Aug 30, 2023 · Updated 2 years ago