The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)":
☆44 · Updated Feb 28, 2026
Alternatives and similar repositories for MEO
Users interested in MEO are comparing it to the repositories listed below.
- Source code of the ACL 2023 Main Conference paper "PAD-Net: An Efficient Framework for Dynamic Networks" ☆11 · Updated Feb 28, 2026
- Source code of the EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" ☆21 · Updated Feb 28, 2026
- Implementation for the EACL 2024 paper "Corpus-Steered Query Expansion with Large Language Models" ☆12 · Updated Mar 19, 2024
- The official implementation of the paper "Uncovering the Redundancy in Transformers via a Unified Study of Layer Dropping" (TMLR) ☆188 · Updated Mar 6, 2026
- ☆14 · Updated Aug 18, 2022
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Updated Jun 7, 2023
- ☆16 · Updated Dec 9, 2023
- A curated list of awesome resources dedicated to scaling laws for LLMs ☆81 · Updated Apr 10, 2023
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" ☆20 · Updated May 27, 2025
- The official repository for the experiments in the paper "Patch-level Routing in Mixture-of-Experts is Provably Sample-ef…" ☆14 · Updated Feb 12, 2026
- ☆12 · Updated Sep 23, 2024
- Notes and slides for Stanford CS231n 2021 & 2022 in English, with the contents merged into a single version. Assignments are not … ☆27 · Updated Sep 30, 2022
- Website for the HKU NLP group (under construction) ☆14 · Updated Mar 20, 2026
- Code release for AdapMoE, accepted at ICCAD 2024 ☆36 · Updated Apr 28, 2025
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆30 · Updated Nov 12, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,000 · Updated Dec 6, 2024
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated Jul 2, 2024
- BESA, a differentiable weight pruning technique for large language models ☆17 · Updated Mar 4, 2024
- Mixture of Decision Trees for Interpretable Machine Learning ☆11 · Updated Sep 2, 2021
- Code and data for the EMNLP 2021 paper "Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Re…" ☆16 · Updated Oct 15, 2022
- [CoLM 24] Official repository of MambaByte: Token-free Selective State Space Model ☆24 · Updated Oct 12, 2024
- Code and data for the EACL 2024 paper "Contextualization Distillation from Large Language Models for Knowledge Graph Completion" ☆24 · Updated Oct 17, 2024
- Towards a safe LLM with simple yet highly effective Intention Analysis prompting ☆20 · Updated Mar 25, 2024
- Mixture of Attention Heads ☆52 · Updated Oct 10, 2022
- Syphus: Automatic Instruction-Response Generation Pipeline ☆14 · Updated Dec 14, 2023
- The official implementation of the ICML'24 paper "A Graph is Worth K Words: Euclideanizing Graph using Pure Transformer" ☆48 · Updated Mar 19, 2025
- TensorFlow code for "Hierarchical Decompositional Mixtures of Variational Autoencoders" (ICML'19) ☆12 · Updated Jun 7, 2020
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- AbstainQA, ACL 2024 ☆29 · Updated Feb 4, 2026
- Transformer components implemented in Triton ☆34 · Updated May 9, 2025
- ☆21 · Updated Oct 10, 2025
- Information on NLP PhD applications worldwide ☆37 · Updated Aug 27, 2024
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Updated Jul 15, 2025
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts" ☆10 · Updated Jul 1, 2024
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆156 · Updated Jul 9, 2025
- ☆22 · Updated Dec 11, 2024
- ☆10 · Updated Apr 16, 2024
- ☆128 · Updated Jan 22, 2024