allenai / FlexOlmo
Code and training scripts for FlexOlmo
☆123 · Updated this week
Alternatives and similar repositories for FlexOlmo
Users interested in FlexOlmo are comparing it to the repositories listed below:
- ☆112 · Updated last year
- PeRL: Parameter-Efficient Reinforcement Learning ☆68 · Updated 3 weeks ago
- Process Reward Models That Think ☆78 · Updated 2 months ago
- PostTrainBench measures how well CLI agents like Claude Code or Codex CLI can post-train base LLMs on a single H100 GPU in 10 hours ☆131 · Updated last week
- ☆82 · Updated 2 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 10 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆63 · Updated 8 months ago
- MatFormer repo ☆70 · Updated last year
- Esoteric Language Models ☆111 · Updated this week
- ☆91 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS'25] ☆216 · Updated 2 months ago
- ☆102 · Updated last month
- ☆84 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- ☆38 · Updated 5 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated last year
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆344 · Updated last month
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 3 months ago
- Official implementation for "Law of the Weakest Link: Cross capabilities of Large Language Models" ☆43 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 11 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Updated last month
- [Preprint] RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments ☆177 · Updated last month
- Official implementation for DenseMixer: Improving MoE Post-Training with Precise Router Gradient ☆66 · Updated 6 months ago
- Official repo of the LM2 paper ☆46 · Updated 11 months ago
- Layer-Condensed KV cache with 10x larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆157 · Updated 10 months ago
- Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge ☆105 · Updated last week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆228 · Updated 3 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433 ☆116 · Updated last year