OLMoE: Open Mixture-of-Experts Language Models
☆1,013 · Updated Sep 23, 2025
Alternatives and similar repositories for OLMoE
Users interested in OLMoE are comparing it to the repositories listed below.
- Modeling, training, eval, and inference code for OLMo ☆6,488 · Updated Nov 24, 2025
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,678 · Updated Mar 8, 2024
- AllenAI's post-training codebase ☆3,708 · Updated this week
- Data and tools for generating and inspecting OLMo pre-training data ☆1,492 · Updated Nov 5, 2025
- Minimalistic large language model 3D-parallelism training ☆2,674 · Updated Apr 7, 2026
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Updated Dec 6, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆250 · Updated Sep 12, 2025
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,922 · Updated Jan 16, 2024
- DataComp for Language Models ☆1,439 · Updated Sep 9, 2025
- Official repo for Open-Reasoner-Zero ☆2,093 · Updated Jun 2, 2025
- PyTorch building blocks for the OLMo ecosystem ☆1,186 · Updated this week
- Muon is Scalable for LLM Training ☆1,469 · Updated Aug 3, 2025
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,441 · Updated this week
- Ongoing research training transformer models at scale ☆16,203 · Updated this week
- verl/HybridFlow: A flexible and efficient RL post-training framework ☆21,046 · Updated this week
- A framework for few-shot evaluation of language models ☆12,411 · Updated this week
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · Updated this week
- GRadient-INformed MoE ☆264 · Updated Sep 25, 2024
- Democratizing Reinforcement Learning for LLMs ☆5,462 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention… ☆1,210 · Updated Apr 8, 2026
- A collection of AWESOME things about mixture-of-experts ☆1,275 · Updated Dec 8, 2024
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- ☆986 · Updated Feb 7, 2025
- Simple RL training for reasoning ☆3,851 · Updated Dec 23, 2025
- Scalable toolkit for efficient model alignment ☆853 · Updated Oct 6, 2025
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,843 · Updated Jan 17, 2025
- Tools for merging pretrained large language models ☆7,052 · Updated Mar 15, 2026
- Evaluation suite for LLMs ☆379 · Updated Jul 11, 2025
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,314 · Updated Jul 15, 2025
- Recipes to train reward models for RLHF ☆1,531 · Updated Apr 24, 2025
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,108 · Updated Apr 3, 2025
- SGLang is a high-performance serving framework for large language models and multimodal models ☆26,832 · Updated this week
- Codebase for Aria, an open multimodal-native MoE ☆1,087 · Updated Jan 22, 2025
- Scalable RL solution for advanced reasoning of language models ☆1,852 · Updated Mar 18, 2025
- A family of compressed models obtained via pruning and knowledge distillation ☆377 · Updated Nov 6, 2025
- Reproducible, flexible LLM evaluations ☆367 · Updated Mar 24, 2026
- O1 Replication Journey ☆1,999 · Updated Jan 14, 2025
- Efficient Triton Kernels for LLM Training ☆6,315 · Updated Apr 27, 2026
- Next-Token Prediction is All You Need ☆2,402 · Updated Jan 12, 2026