allenai / OLMoE
OLMoE: Open Mixture-of-Experts Language Models
☆875 · Updated last week
Alternatives and similar repositories for OLMoE
Users interested in OLMoE are comparing it to the libraries listed below.
- Large Reasoning Models ☆805 · Updated 10 months ago
- A project to improve skills of large language models ☆568 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,109 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆747 · Updated last year
- Muon is Scalable for LLM Training ☆1,318 · Updated 2 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,022 · Updated 2 months ago
- ☆963 · Updated 8 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Approximate and dynamic sparse attention calculation to speed up long-context LLMs' inference… ☆1,133 · Updated this week
- Scalable toolkit for efficient model alignment ☆842 · Updated 2 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆906 · Updated last week
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,100 · Updated last month
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆352 · Updated 10 months ago
- ☆948 · Updated 3 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆439 · Updated 11 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆491 · Updated 7 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,272 · Updated last month
- An Open Source Toolkit For LLM Distillation ☆732 · Updated 2 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆923 · Updated 7 months ago
- ☆816 · Updated 3 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆775 · Updated 6 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,522 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆631 · Updated last year
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆372 · Updated last week
- slime is an LLM post-training framework for RL Scaling ☆2,023 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆910 · Updated this week
- Pretraining and inference code for a large-scale depth-recurrent language model