OLMoE: Open Mixture-of-Experts Language Models
☆990 · Updated Sep 23, 2025
Alternatives and similar repositories for OLMoE
Users interested in OLMoE are comparing it to the repositories listed below.
- Modeling, training, eval, and inference code for OLMo ☆6,404 · Updated Nov 24, 2025
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,667 · Updated Mar 8, 2024
- AllenAI's post-training codebase ☆3,643 · Updated this week
- Data and tools for generating and inspecting OLMo pre-training data ☆1,460 · Updated Nov 5, 2025
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,000 · Updated Dec 6, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Updated Sep 12, 2025
- PyTorch building blocks for the OLMo ecosystem ☆967 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,904 · Updated Jan 16, 2024
- DataComp for Language Models ☆1,427 · Updated Sep 9, 2025
- Official repo for Open-Reasoner-Zero ☆2,086 · Updated Jun 2, 2025
- Muon is Scalable for LLM Training ☆1,446 · Updated Aug 3, 2025
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) ☆9,231 · Updated this week
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆20,097 · Updated this week
- A framework for few-shot evaluation of language models ☆11,802 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- GRadient-INformed MoE ☆264 · Updated Sep 25, 2024
- Democratizing Reinforcement Learning for LLMs ☆5,259 · Updated this week
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- A collection of AWESOME things about mixture-of-experts ☆1,270 · Updated Dec 8, 2024
- ☆979 · Updated Feb 7, 2025
- Simple RL training for reasoning ☆3,841 · Updated Dec 23, 2025
- Scalable toolkit for efficient model alignment ☆850 · Updated Oct 6, 2025
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,837 · Updated Jan 17, 2025
- Tools for merging pretrained large language models ☆6,895 · Updated Mar 15, 2026
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,198 · Updated Mar 9, 2026
- SGLang is a high-performance serving framework for large language models and multimodal models ☆24,829 · Updated this week
- Evaluation suite for LLMs ☆379 · Updated Jul 11, 2025
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,307 · Updated Jul 15, 2025
- Recipes to train reward models for RLHF ☆1,521 · Updated Apr 24, 2025
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,083 · Updated Apr 3, 2025
- Codebase for Aria, an open multimodal-native MoE ☆1,086 · Updated Jan 22, 2025
- Reproducible, flexible LLM evaluations ☆354 · Updated Mar 2, 2026
- Scalable RL solution for advanced reasoning of language models ☆1,821 · Updated Mar 18, 2025
- A family of compressed models obtained via pruning and knowledge distillation ☆374 · Updated Nov 6, 2025
- O1 Replication Journey ☆1,999 · Updated Jan 14, 2025
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated Mar 18, 2026
- Next-Token Prediction is All You Need ☆2,374 · Updated Jan 12, 2026