openai / simple-evals
☆4,233 · Updated 4 months ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,456 · Updated this week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,117 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,202 · Updated last week
- PyTorch native post-training library ☆5,619 · Updated last week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,971 · Updated last week
- SWE-bench: Can Language Models Resolve Real-World GitHub Issues? ☆3,978 · Updated this week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,925 · Updated 4 months ago
- Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models … ☆2,585 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,453 · Updated 3 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,236 · Updated this week
- ☆4,109 · Updated last year
- Tools for merging pretrained large language models. ☆6,611 · Updated last week
- A library for advanced large language model reasoning ☆2,318 · Updated 6 months ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,993 · Updated last month
- Arena-Hard-Auto: An automatic LLM benchmark. ☆974 · Updated 6 months ago
- Sky-T1: Train your own O1 preview model within $450 ☆3,361 · Updated 5 months ago
- A framework for few-shot evaluation of language models. ☆10,976 · Updated this week
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,537 · Updated 2 years ago
- A unified evaluation framework for large language models ☆2,767 · Updated 2 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,688 · Updated last month
- Curated list of datasets and tools for post-training. ☆4,100 · Updated last month
- TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients. Published in Nature. ☆3,164 · Updated 4 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ☆2,780 · Updated this week
- DataComp for Language Models ☆1,401 · Updated 3 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,537 · Updated 6 months ago
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,422 · Updated 10 months ago
- This repo contains the dataset and code for the paper "SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software E… ☆1,438 · Updated 5 months ago
- A collection of benchmarks and datasets for evaluating LLMs. ☆537 · Updated last year
- [ICLR 2025] Automated Design of Agentic Systems ☆1,475 · Updated 10 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,456 · Updated 6 months ago