mlfoundations / evalchemy
Automatic evals for LLMs
☆461 · Updated 2 weeks ago
Alternatives and similar repositories for evalchemy
Users interested in evalchemy are comparing it to the libraries listed below.
- Reproducible, flexible LLM evaluations ☆215 · Updated 2 months ago
- ☆523 · Updated 7 months ago
- A project to improve skills of large language models ☆456 · Updated this week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 10 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆244 · Updated last month
- Official repository for ORPO ☆457 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,101 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆609 · Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆397 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆497 · Updated 2 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆566 · Updated 3 months ago
- A simple unified framework for evaluating LLMs ☆221 · Updated 2 months ago
- ☆585 · Updated 2 months ago
- PyTorch building blocks for the OLMo ecosystem ☆258 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆574 · Updated this week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆728 · Updated 3 months ago
- ☆824 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆798 · Updated 3 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago
- An Open Source Toolkit For LLM Distillation ☆669 · Updated last month
- LOFT: A 1 Million+ Token Long-Context Benchmark