OLMoE: Open Mixture-of-Experts Language Models
☆975, updated Sep 23, 2025
Alternatives and similar repositories for OLMoE
Users interested in OLMoE are comparing it to the libraries listed below.
- Modeling, training, eval, and inference code for OLMo (☆6,326, updated Nov 24, 2025)
- AllenAI's post-training codebase (☆3,605, updated this week)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆1,663, updated Mar 8, 2024)
- Minimalistic large language model 3D-parallelism training (☆2,579, updated Feb 19, 2026)
- Data and tools for generating and inspecting OLMo pre-training data (☆1,416, updated Nov 5, 2025)
- DataComp for Language Models (☆1,421, updated Sep 9, 2025)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" (☆247, updated Sep 12, 2025)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,001, updated Dec 6, 2024)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,893, updated Jan 16, 2024)
- Official repo for Open-Reasoner-Zero (☆2,087, updated Jun 2, 2025)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) (☆9,037, updated Feb 21, 2026)
- Muon is Scalable for LLM Training (☆1,440, updated Aug 3, 2025)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆19,519, updated this week)
- PyTorch building blocks for the OLMo ecosystem (☆839, updated this week)
- Ongoing research training transformer models at scale (☆15,461, updated this week)
- GRadient-INformed MoE (☆264, updated Sep 25, 2024)
- A framework for few-shot evaluation of language models (☆11,540, updated this week)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate and dynamic sparse attention computation… (☆1,190, updated Sep 30, 2025)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,428, updated this week)
- Codebase for Aria, an open multimodal-native MoE (☆1,082, updated Jan 22, 2025)
- Tools for merging pretrained large language models (☆6,826, updated this week)
- Fast and memory-efficient exact attention (☆22,361, updated Feb 25, 2026)
- Simple RL training for reasoning (☆3,830, updated Dec 23, 2025)
- Scalable toolkit for efficient model alignment (☆849, updated Oct 6, 2025)
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs (☆4,754, updated Jul 18, 2025)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models (☆1,833, updated Jan 17, 2025)
- SGLang is a high-performance serving framework for large language models and multimodal models (☆23,905, updated this week)
- Democratizing Reinforcement Learning for LLMs (☆5,167, updated this week)
- Efficient Triton Kernels for LLM Training (☆6,162, updated this week)
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. (☆39, updated Sep 12, 2024)
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention (☆3,347, updated Jul 7, 2025)
- A family of compressed models obtained via pruning and knowledge distillation (☆368, updated Nov 6, 2025)
- O1 Replication Journey (☆1,999, updated Jan 14, 2025)
- Reproducible, flexible LLM evaluations (☆347, updated Jan 28, 2026)
- A collection of AWESOME things about mixture-of-experts (☆1,266, updated Dec 8, 2024)
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models (☆2,303, updated Jul 15, 2025)
- Train transformer language models with reinforcement learning (☆17,460, updated this week)
- Next-Token Prediction is All You Need (☆2,355, updated Jan 12, 2026)
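
For readers weighing these alternatives, the OLMoE checkpoints themselves load through Hugging Face transformers. Below is a minimal sketch, assuming the `allenai/OLMoE-1B-7B-0924` checkpoint ID and the `OlmoeForCausalLM` class added in recent transformers releases; consult the OLMoE repository's README for the exact supported setup.

```python
# Minimal sketch: generating text with an OLMoE checkpoint via Hugging Face
# transformers. Assumes transformers >= 4.45 (which added OlmoeForCausalLM)
# and the allenai/OLMoE-1B-7B-0924 checkpoint; verify against the OLMoE README.
import torch
from transformers import AutoTokenizer, OlmoeForCausalLM

model = OlmoeForCausalLM.from_pretrained(
    "allenai/OLMoE-1B-7B-0924",
    torch_dtype=torch.bfloat16,  # all expert weights sit in memory; bf16 halves the footprint
).to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")

inputs = tokenizer("Mixture-of-experts models route tokens to", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that memory is governed by the total parameter count (roughly 7B for this checkpoint), not the roughly 1B parameters active per token, which is why the sketch loads in bf16.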