openai / simple-evals
☆4,093 · Updated 2 months ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,222 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,386 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,987 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆2,042 · Updated last year
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,895 · Updated this week
- A library for advanced large language model reasoning ☆2,284 · Updated 3 months ago
- TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients. Published in Nature. ☆2,978 · Updated 2 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,492 · Updated last week
- Sky-T1: Train your own O1 preview model within $450 ☆3,339 · Updated 2 months ago
- Tools for merging pretrained large language models. ☆6,337 · Updated 3 weeks ago
- PyTorch native post-training library ☆5,523 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆6,019 · Updated last month
- ☆4,096 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,870 · Updated 2 months ago
- Democratizing Reinforcement Learning for LLMs ☆4,414 · Updated this week
- A framework for few-shot evaluation of language models. ☆10,270 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,512 · Updated 2 months ago
- DataComp for Language Models ☆1,367 · Updated last month
- A unified evaluation framework for large language models ☆2,717 · Updated 2 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆985 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆2,246 · Updated last month
- Fully open data curation for reasoning models ☆2,109 · Updated last month
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,502 · Updated 2 years ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,323 · Updated 2 weeks ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ☆2,660 · Updated last week
- Search-R1: an efficient, scalable RL training framework for LLMs that interleave reasoning with search-engine calls, built on veRL ☆3,267 · Updated this week
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,825 · Updated 8 months ago
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆3,613 · Updated 2 weeks ago
- [ICLR 2025] Automated Design of Agentic Systems ☆1,428 · Updated 8 months ago
- ☆2,539 · Updated last year