OLMoE: Open Mixture-of-Experts Language Models
☆1,007 · Updated Sep 23, 2025 (6 months ago)
Alternatives and similar repositories for OLMoE
Users interested in OLMoE are comparing it to the repositories listed below.
- Modeling, training, eval, and inference code for OLMo · ☆6,463 · Updated Nov 24, 2025 (4 months ago)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,675 · Updated Mar 8, 2024 (2 years ago)
- AllenAI's post-training codebase · ☆3,683 · Updated this week
- Data and tools for generating and inspecting OLMo pre-training data · ☆1,476 · Updated Nov 5, 2025 (5 months ago)
- Minimalistic large language model 3D-parallelism training · ☆2,644 · Updated Apr 7, 2026 (last week)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆1,000 · Updated Dec 6, 2024 (last year)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆248 · Updated Sep 12, 2025 (7 months ago)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models · ☆1,907 · Updated Jan 16, 2024 (2 years ago)
- DataComp for Language Models · ☆1,436 · Updated Sep 9, 2025 (7 months ago)
- Official Repo for Open-Reasoner-Zero · ☆2,089 · Updated Jun 2, 2025 (10 months ago)
- PyTorch building blocks for the OLMo ecosystem · ☆1,131 · Updated this week
- Muon is Scalable for LLM Training · ☆1,453 · Updated Aug 3, 2025 (8 months ago)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) · ☆9,315 · Updated Apr 7, 2026 (last week)
- Ongoing research training transformer models at scale · ☆15,985 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs · ☆20,603 · Updated this week
- A framework for few-shot evaluation of language models · ☆12,138 · Updated this week
- 🚀 Efficient implementations for emerging model architectures · ☆4,823 · Updated Apr 7, 2026 (last week)
- GRadient-INformed MoE · ☆264 · Updated Sep 25, 2024 (last year)
- Democratizing Reinforcement Learning for LLMs · ☆5,402 · Updated this week
- A collection of AWESOME things about mixture-of-experts · ☆1,273 · Updated Dec 8, 2024 (last year)
- Fast and memory-efficient exact attention · ☆23,185 · Updated Apr 6, 2026 (last week)
- Simple RL training for reasoning · ☆3,846 · Updated Dec 23, 2025 (3 months ago)
- Scalable toolkit for efficient model alignment · ☆852 · Updated Oct 6, 2025 (6 months ago)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models · ☆1,840 · Updated Jan 17, 2025 (last year)
- Tools for merging pretrained large language models · ☆6,973 · Updated Mar 15, 2026 (3 weeks ago)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate and dynamic sparse attention… · ☆1,203 · Updated this week
- Evaluation suite for LLMs · ☆379 · Updated Jul 11, 2025 (9 months ago)
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models · ☆2,316 · Updated Jul 15, 2025 (8 months ago)
- SGLang is a high-performance serving framework for large language models and multimodal models · ☆25,643 · Updated this week
- Recipes for training reward models for RLHF · ☆1,527 · Updated Apr 24, 2025 (11 months ago)
- MoBA: Mixture of Block Attention for Long-Context LLMs · ☆2,090 · Updated Apr 3, 2025 (last year)
- Codebase for Aria, an Open Multimodal Native MoE · ☆1,084 · Updated Jan 22, 2025 (last year)
- Scalable RL solution for advanced reasoning of language models · ☆1,841 · Updated Mar 18, 2025 (last year)
- A family of compressed models obtained via pruning and knowledge distillation · ☆375 · Updated Nov 6, 2025 (5 months ago)
- Reproducible, flexible LLM evaluations · ☆359 · Updated Mar 24, 2026 (3 weeks ago)
- O1 Replication Journey · ☆1,999 · Updated Jan 14, 2025 (last year)
- Efficient Triton Kernels for LLM Training · ☆6,265 · Updated this week
- Next-Token Prediction is All You Need · ☆2,393 · Updated Jan 12, 2026 (3 months ago)