sigma-MoE layer
☆21 · Updated Jan 5, 2024
Alternatives and similar repositories for moe_layer
Users interested in moe_layer are comparing it to the libraries listed below.
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆39, updated Jun 11, 2025)
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" (☆34, updated Jun 11, 2025)
- ☆17, updated Jun 11, 2025
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆101, updated Sep 30, 2024)
- Mixture of Attention Heads (☆51, updated Oct 10, 2022)
- ☆16, updated Dec 9, 2023
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… (☆56, updated Feb 28, 2023)
- The official PyTorch implementation of the paper "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT … (☆40, updated Mar 7, 2024)
- [EMNLP'23] Code for "Non-autoregressive Text Editing with Copy-aware Latent Alignments" (☆20, updated Oct 17, 2023)
- Inference Llama 2 in one file of pure CUDA (☆17, updated Aug 20, 2023)
- ☆91, updated Aug 18, 2024
- ☆23, updated Nov 6, 2022
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" (☆51, updated Jul 17, 2022)
- HGRN2: Gated Linear RNNs with State Expansion (☆56, updated Aug 20, 2024)
- ☆22, updated Nov 9, 2024
- ☆29, updated May 4, 2024
- [CoLM 24] Official repository of MambaByte: Token-free Selective State Space Model (☆24, updated Oct 12, 2024)
- Triton-based implementation of Sparse Mixture of Experts (☆268, updated Oct 3, 2025)
- ☆106, updated Mar 9, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- This package implements THOR: Transformer with Stochastic Experts (☆64, updated Oct 7, 2021)
- Official code repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models (☆35, updated Jun 12, 2024)
- A repository for research on medium-sized language models (☆78, updated May 23, 2024)
- [NeurIPS 2022] Your Transformer May Not Be as Powerful as You Expect (official implementation) (☆34, updated Aug 6, 2023)
- CUDA and Triton implementations of Flash Attention with SoftmaxN (☆73, updated May 26, 2024)
- ☆143, updated Jul 21, 2024
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork" (☆33, updated Nov 29, 2023)
- HeadlessPivot (☆29, updated Feb 27, 2026)
- Continual Resilient (CoRe) Optimizer for PyTorch (☆11, updated Jun 10, 2024)
- A summarizer for Japanese articles (but ChatGPT is better) (☆10, updated Aug 1, 2022)
- ☆84, updated Nov 10, 2025
- List of open-source implementations of Magenta projects in PyTorch (☆37, updated Apr 20, 2020)
- Linear Attention Sequence Parallelism (LASP) (☆89, updated Jun 4, 2024)
- LMTuner: Make the LLM Better for Everyone (☆38, updated Sep 21, 2023)
- Simple-to-use scoring function for arbitrarily tokenized texts (☆47, updated Feb 19, 2025)
- ☆16, updated this week
- Filipino multi-modal NLP dataset consisting of 350k+ Filipino news articles and associated images (☆12, updated Mar 11, 2025)