mlfoundations / evalchemy
Automatic evals for LLMs
☆569 · Updated this week
Alternatives and similar repositories for evalchemy
Users interested in evalchemy are comparing it to the repositories listed below.
- Reproducible, flexible LLM evaluations · ☆305 · Updated last month
- ☆559 · Updated last year
- PyTorch building blocks for the OLMo ecosystem · ☆612 · Updated this week
- A project to improve skills of large language models · ☆715 · Updated this week
- Official repository for ORPO · ☆468 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. · ☆582 · Updated last month
- A simple unified framework for evaluating LLMs · ☆257 · Updated 8 months ago
- Recipes to scale inference-time compute of open models · ☆1,120 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models · ☆670 · Updated 6 months ago
- The official evaluation suite and dynamic data release for MixEval · ☆253 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning · ☆366 · Updated last year
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] · ☆601 · Updated 4 months ago
- ☆1,045 · Updated 5 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆246 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context · ☆478 · Updated last year
- An Open Source Toolkit For LLM Distillation · ☆810 · Updated this week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … · ☆807 · Updated 9 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" · ☆270 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models · ☆930 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards · ☆1,283 · Updated last week
- [NeurIPS'25] Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" · ☆637 · Updated 9 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… · ☆405 · Updated last month
- SkyRL: A Modular Full-stack RL Library for LLMs · ☆1,394 · Updated this week
- Arena-Hard-Auto: An automatic LLM benchmark · ☆974 · Updated 6 months ago
- awesome synthetic (text) datasets · ☆315 · Updated last month
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning · ☆330 · Updated last month
- ☆1,035 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware · ☆751 · Updated last year
- ☆328 · Updated 6 months ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] · ☆320 · Updated last month