allenai / FlexOlmo
Code and training scripts for FlexOlmo
☆120 · Updated this week
Alternatives and similar repositories for FlexOlmo
Users interested in FlexOlmo are comparing it to the libraries listed below.
- ☆112 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆114 · Updated 8 months ago
- ☆61 · Updated 6 months ago
- Esoteric Language Models ☆108 · Updated last month
- ☆91 · Updated last year
- PeRL: Parameter-Efficient Reinforcement Learning ☆56 · Updated this week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆82 · Updated last month
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆340 · Updated 3 weeks ago
- This is the official repository for Inheritune. ☆119 · Updated 11 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆89 · Updated last year
- Process Reward Models That Think ☆70 · Updated last month
- ☆85 · Updated 2 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated 9 months ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context ☆214 · Updated last week
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆210 · Updated last month
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 10 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆181 · Updated 6 months ago
- ☆124 · Updated 10 months ago
- ☆71 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated 2 weeks ago
- PostTrainBench measures how well CLI agents like Claude Code or Codex CLI can post-train base LLMs on a single H100 GPU in 10 hours ☆102 · Updated this week
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 9 months ago
- ☆96 · Updated last week
- MatFormer repo ☆67 · Updated last year
- ☆371 · Updated 2 months ago
- ☆35 · Updated 7 months ago
- ☆37 · Updated 4 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆228 · Updated 2 months ago
- Ring-V2 is a reasoning MoE LLM open-sourced by InclusionAI. ☆87 · Updated 2 months ago