openai / simple-evals
☆4,027 · Updated last month
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,166 · Updated this week
- PyTorch-native post-training library ☆5,458 · Updated this week
- TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients. Published in Nature. ☆2,902 · Updated last month
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,867 · Updated last week
- DataComp for Language Models ☆1,367 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,998 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,849 · Updated last month
- Democratizing Reinforcement Learning for LLMs ☆4,111 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,866 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆922 · Updated this week
- Sky-T1: Train your own O1-preview model within $450 ☆3,326 · Updated last month
- Fully open data curation for reasoning models ☆2,058 · Updated this week
- Tools for merging pretrained large language models ☆6,258 · Updated 3 weeks ago
- SWE-bench [Multimodal]: Can Language Models Resolve Real-World GitHub Issues? ☆3,433 · Updated this week
- A library for advanced large language model reasoning ☆2,267 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,486 · Updated 2 years ago
- Verifiers for LLM Reinforcement Learning ☆2,903 · Updated this week
- An Open Large Reasoning Model for Real-World Solutions ☆1,514 · Updated 3 months ago
- A unified evaluation framework for large language models ☆2,708 · Updated last month
- Robust recipes to align language models with human and AI preferences ☆5,343 · Updated last month
- Synthetic data curation for post-training and structured data extraction ☆1,491 · Updated last month
- ☆4,089 · Updated last year
- A reading list on LLM-based synthetic data generation 🔥 ☆1,398 · Updated 3 months ago
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLMs, based on veRL ☆3,103 · Updated this week
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆862 · Updated last week
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,454 · Updated this week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,412 · Updated 7 months ago
- Arena-Hard-Auto: An automatic LLM benchmark ☆920 · Updated 2 months ago
- Agentic components of the Llama Stack APIs ☆4,272 · Updated last month
- ☆1,034 · Updated 8 months ago
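Most of the evaluation frameworks listed above share the same basic shape: run a model over a fixed set of prompts, score each completion against a reference, and aggregate the scores. A minimal sketch of that loop, assuming nothing about any particular library's API (all names here are illustrative, not taken from simple-evals or its alternatives):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Example:
    """One eval item: a prompt and its reference answer."""
    prompt: str
    target: str

def evaluate(model: Callable[[str], str],
             examples: list[Example],
             score: Callable[[str, str], float]) -> float:
    """Run `model` over all examples and return the mean score."""
    scores = [score(model(ex.prompt), ex.target) for ex in examples]
    return sum(scores) / len(scores)

# Toy usage: an exact-match grader and a hard-coded stand-in for a real model.
exact_match = lambda completion, target: float(completion.strip() == target)
stub_model = lambda prompt: "4" if "2 + 2" in prompt else "unsure"

examples = [
    Example("What is 2 + 2?", "4"),
    Example("What is the capital of France?", "Paris"),
]
print(evaluate(stub_model, examples, exact_match))  # → 0.5
```

Real harnesses differ mainly in what they plug into the `score` slot: exact match or multiple-choice accuracy (MMLU-style), an LLM judge (AlpacaEval, Arena-Hard-Auto), or a programmatic verifier (SWE-bench, the RL verifier libraries).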