mlfoundations / evalchemy
Automatic evals for LLMs
☆547 · Updated 3 months ago
Alternatives and similar repositories for evalchemy
Users who are interested in evalchemy are comparing it to the libraries listed below.
- Reproducible, flexible LLM evaluations ☆257 · Updated this week
- ☆544 · Updated 11 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆539 · Updated this week
- Official repository for ORPO ☆463 · Updated last year
- A project to improve skills of large language models ☆587 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,111 · Updated 5 months ago
- PyTorch building blocks for the OLMo ecosystem ☆307 · Updated last week
- RewardBench: the first evaluation tool for reward models. ☆643 · Updated 4 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,060 · Updated last week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆362 · Updated last year
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆607 · Updated 7 months ago
- An Open Source Toolkit For LLM Distillation ☆740 · Updated 3 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆553 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆250 · Updated 11 months ago
- ☆971 · Updated 3 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆782 · Updated 7 months ago
- A simple unified framework for evaluating LLMs ☆251 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,194 · Updated 2 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆242 · Updated 11 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆266 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,021 · Updated this week
- ☆323 · Updated 4 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆291 · Updated 2 weeks ago
- OLMoE: Open Mixture-of-Experts Language Models ☆888 · Updated last month
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆477 · Updated last year
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,037 · Updated 2 months ago
- Code for Quiet-STaR ☆739 · Updated last year
- awesome synthetic (text) datasets ☆302 · Updated 3 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆321 · Updated last week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆747 · Updated last year