openai / simple-evals
☆4,125 · Updated 2 months ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,263 · Updated this week
- Simple retrieval from LLMs at various context lengths to measure accuracy ☆2,056 · Updated last year
- TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients. Published in Nature. ☆3,012 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,903 · Updated this week
- Democratizing Reinforcement Learning for LLMs ☆4,534 · Updated this week
- A library for advanced large language model reasoning ☆2,291 · Updated 4 months ago
- Tools for merging pretrained large language models. ☆6,394 · Updated last month
- PyTorch native post-training library ☆5,547 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,021 · Updated this week
- ☆4,100 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,029 · Updated last week
- DataComp for Language Models ☆1,381 · Updated last month
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆3,692 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,877 · Updated 2 months ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,341 · Updated 3 months ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,332 · Updated last month
- Curated list of datasets and tools for post-training. ☆3,792 · Updated 2 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,687 · Updated last week
- Robust recipes to align language models with human and AI preferences ☆5,406 · Updated last month
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,058 · Updated 2 years ago
- Modeling, training, eval, and inference code for OLMo ☆6,044 · Updated last week
- Recipes to scale inference-time compute of open models ☆1,111 · Updated 5 months ago
- Fully open data curation for reasoning models ☆2,120 · Updated last month
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… ☆2,519 · Updated this week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,438 · Updated 8 months ago
- ☆1,350 · Updated 11 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,522 · Updated 4 months ago
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,362 · Updated 8 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,864 · Updated last week
- Environments for LLM Reinforcement Learning ☆3,338 · Updated this week