RobertCsordas / moe
Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"
☆38 · Updated Jun 11, 2025
Alternatives and similar repositories for moe
Users interested in moe are comparing it to the repositories listed below.
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated Sep 30, 2024
- ☆17 · Updated Jun 11, 2025
- ☆91 · Updated Aug 18, 2024
- Probabilistic inference for models of behaviour ☆10 · Updated Oct 13, 2025
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆34 · Updated Jun 11, 2025
- An implementation of DreamerV2 written in JAX, with support for running multiple random seeds of an experiment on a single GPU ☆18 · Updated Jan 16, 2023
- Seamless Voice Interactions with LLMs ☆12 · Updated Oct 28, 2023
- ☆20 · Updated Oct 22, 2025
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- ☆13 · Updated Dec 6, 2024
- A flexible, fast, and scalable Python library for Self-Organizing Maps ☆16 · Updated Aug 9, 2025
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated Aug 6, 2023
- A new model for quickly training and simulating adaptive leaky integrate-and-fire spiking neural networks ☆14 · Updated Apr 9, 2024
- Simplified recipes for preparing commonly used speech datasets, and a PyTorch-compatible Python data loader that can perform standard fea… ☆15 · Updated Jun 12, 2023
- ☆14 · Updated Oct 7, 2022
- ☆143 · Updated Jul 21, 2024
- Triton-based implementation of Sparse Mixture of Experts ☆265 · Updated Oct 3, 2025
- Easily serialize dataclasses to and from tensors (PyTorch, NumPy) ☆18 · Updated Apr 10, 2021
- Map (deep learning) model weights between different model implementations ☆19 · Updated Jan 30, 2025
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated Jan 9, 2025
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Updated Feb 23, 2025
- PyTorch Language Modeling Toolkit for Fast Weight Programmers ☆19 · Updated Jun 11, 2025
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated Aug 30, 2023
- Speech in Flax/JAX ☆15 · Updated Jul 11, 2022
- A repository for research on medium-sized language models ☆77 · Updated May 23, 2024
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Updated Nov 3, 2023
- Simple-to-use scoring function for arbitrarily tokenized texts ☆47 · Updated Feb 19, 2025
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Updated Nov 26, 2023
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated Dec 20, 2024
- ☆44 · Updated Jun 2, 2024
- GPT-2 Metadata Pretraining Towards Instruction Finetuning for Ukrainian ☆20 · Updated Aug 6, 2023
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation" ☆21 · Updated Feb 14, 2024
- ☆22 · Updated Sep 2, 2025
- ☆19 · Updated May 6, 2023
- ☆18 · Updated Nov 25, 2022
- Implementations of growing and pruning in neural networks ☆22 · Updated Jul 26, 2023
- GoldFinch and other hybrid transformer components ☆45 · Updated Jul 20, 2024