RobertCsordas / moe_layer
sigma-MoE layer
☆21, updated Jan 5, 2024
Alternatives and similar repositories for moe_layer
Users interested in moe_layer are comparing it to the libraries listed below:
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆38Jun 11, 2025Updated 8 months ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization".☆34Jun 11, 2025Updated 8 months ago
- ☆17Jun 11, 2025Updated 8 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"☆102Sep 30, 2024Updated last year
- Mixture of Attention Heads☆51Oct 10, 2022Updated 3 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights☆19Oct 9, 2022Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal…☆56Feb 28, 2023Updated 2 years ago
- Easily serialize dataclasses to and from tensors (PyTorch, NumPy)☆18Apr 10, 2021Updated 4 years ago
- The official Pytorch implementation of the paper "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT …☆40Mar 7, 2024Updated last year
- Inference Llama 2 in one file of pure Cuda☆17Aug 20, 2023Updated 2 years ago
- [EMNLP'23] Code for "Non-autoregressive Text Editing with Copy-aware Latent Alignments".☆20Oct 17, 2023Updated 2 years ago
- ☆22Nov 6, 2022Updated 3 years ago
- Code for the ACL-2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts"☆51Jul 17, 2022Updated 3 years ago
- HGRN2: Gated Linear RNNs with State Expansion☆56Aug 20, 2024Updated last year
- ☆22Nov 9, 2024Updated last year
- ☆46May 20, 2025Updated 8 months ago
- ☆29May 4, 2024Updated last year
- [CoLM 24] Official Repository of MambaByte: Token-free Selective State Space Model☆24Oct 12, 2024Updated last year
- ☆32Jan 1, 2024Updated 2 years ago
- Triton-based implementation of Sparse Mixture of Experts.☆265Oct 3, 2025Updated 4 months ago
- ☆106Mar 9, 2024Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval"☆27Apr 17, 2024Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.)☆32May 25, 2024Updated last year
- Official Code Repository for the paper "Key-value memory in the brain"☆31Feb 25, 2025Updated 11 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models☆35Jun 12, 2024Updated last year
- A repository for research on medium sized language models.☆77May 23, 2024Updated last year
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation)☆34Aug 6, 2023Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN.☆73May 26, 2024Updated last year
- ☆143Jul 21, 2024Updated last year
- Code for this paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork"☆33Nov 29, 2023Updated 2 years ago
- HeadlessPivot☆25Jan 29, 2026Updated 2 weeks ago
- A summarizer for Japanese articles (but ChatGPT is better)☆10Aug 1, 2022Updated 3 years ago
- Continual Resilient (CoRe) Optimizer for PyTorch☆11Jun 10, 2024Updated last year
- ☆84Nov 10, 2025Updated 3 months ago
- Transformers at any scale☆42Jan 18, 2024Updated 2 years ago
- List of open-source implementations of Magenta projects in PyTorch.☆37Apr 20, 2020Updated 5 years ago
- Linear Attention Sequence Parallelism (LASP)☆88Jun 4, 2024Updated last year
- LMTuner: Make the LLM Better for Everyone☆38Sep 21, 2023Updated 2 years ago
- Unreal Engine 5 version of the VOICEVOX Engine (☆13, updated Nov 29, 2025)