mlfoundations / evalchemy
Automatic evals for LLMs
☆361 · Updated this week
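In broad strokes, an "automatic eval" run points a harness at a model, selects benchmark tasks, and collects metrics. The sketch below illustrates that workflow with the lm-evaluation-harness API (`lm_eval.simple_evaluate`); it is an orientation example only, not evalchemy's own entry point, and the model and task names are placeholders chosen for a quick smoke test.

```python
# Minimal sketch of running an automatic LLM eval programmatically.
# Uses lm-evaluation-harness (`pip install lm-eval`); shown as a generic
# illustration of the workflow, not evalchemy's own interface.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",  # small placeholder model
    tasks=["hellaswag"],                             # any registered benchmark task
    num_fewshot=0,
    limit=20,                                        # subsample for a fast sanity check
)

# Per-task metrics (e.g. accuracy) live under results["results"].
print(results["results"]["hellaswag"])
```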
Alternatives and similar repositories for evalchemy:
Users who are interested in evalchemy are comparing it to the libraries listed below.
- Reproducible, flexible LLM evaluations ☆189 · Updated 3 weeks ago
- ☆509 · Updated 4 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆673 · Updated 3 weeks ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆354 · Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,414 · Updated this week
- ☆617 · Updated 2 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆320 · Updated this week
- RewardBench: the first evaluation tool for reward models. ☆553 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆713 · Updated last month
- awesome synthetic (text) datasets ☆267 · Updated 5 months ago
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆495 · Updated last month
- Official repository for ORPO ☆447 · Updated 10 months ago
- A simple unified framework for evaluating LLMs ☆210 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆457 · Updated last year
- FuseAI Project ☆562 · Updated 2 months ago
- An Open Source Toolkit For LLM Distillation ☆569 · Updated 3 months ago
- ☆513 · Updated last week
- Recipes to scale inference-time compute of open models ☆1,051 · Updated last month
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆430 · Updated 2 weeks ago
- ☆278 · Updated last month
- ☆921 · Updated 2 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 5 months ago
- Large Reasoning Models ☆800 · Updated 4 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆234 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆180 · Updated 3 weeks ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆300 · Updated last year
- A project to improve skills of large language models ☆275 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆406 · Updated 11 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆210 · Updated 5 months ago
- ☆1,014 · Updated 4 months ago