microsoft / AutoMoE
AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers
☆47Updated 2 years ago
Alternatives and similar repositories for AutoMoE
Users interested in AutoMoE are comparing it to the libraries listed below.
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024)☆34Updated last year
- Can GPT-4 Perform Neural Architecture Search?☆87Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal…☆52Updated 2 years ago
- ☆26Updated last year
- Code for "Merging Text Transformers from Different Initializations"☆20Updated 5 months ago
- ☆26Updated last year
- ☆64Updated last year
- ☆27Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting☆33Updated 2 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference☆66Updated 7 months ago
- Using FlexAttention to compute attention with different masking patterns☆44Updated 9 months ago
- Official repo of the AAAI 2024 paper "Mitigating the Impact of False Negatives in Dense Retrieval with Contrastive Confidence Regularization"☆13Updated last year
- Codebase for Instruction Following without Instruction Tuning☆35Updated 9 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models☆52Updated 5 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023)☆80Updated last year
- ☆20Updated 8 months ago
- ☆56Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf)☆75Updated last year
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch☆30Updated 2 weeks ago
- This package implements THOR: Transformer with Stochastic Experts.☆65Updated 3 years ago
- Official implementation of the paper: "A deeper look at depth pruning of LLMs"☆15Updated 11 months ago
- The open source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers"☆20Updated last year
- Long Context Extension and Generalization in LLMs☆57Updated 9 months ago
- ☆24Updated last year
- Linear Attention Sequence Parallelism (LASP)☆85Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022)☆30Updated 2 years ago
- Transformers at any scale☆41Updated last year
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719☆22Updated last year
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆69Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers.☆48Updated 2 years ago