openai / simple-evals
☆4,340 · Updated 6 months ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,551 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,291 · Updated 2 weeks ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆2,159 · Updated last year
- PyTorch native post-training library ☆5,660 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,295 · Updated 3 weeks ago
- Democratizing Reinforcement Learning for LLMs ☆5,060 · Updated this week
- A framework for few-shot evaluation of language models. ☆11,298 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,939 · Updated 5 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,074 · Updated last week
- Tools for merging pretrained large language models. ☆6,718 · Updated last week
- Sky-T1: Train your own O1 preview model within $450 ☆3,370 · Updated 6 months ago
- ☆4,113 · Updated last year
- Our library for RL environments + evals ☆3,791 · Updated this week
- A library for advanced large language model reasoning ☆2,326 · Updated 7 months ago
- Synthetic data curation for post-training and structured data extraction ☆1,618 · Updated last week
- DataComp for Language Models ☆1,413 · Updated 4 months ago
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLM based on veRL ☆3,889 · Updated 2 months ago
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,456 · Updated 11 months ago
- Fully open data curation for reasoning models ☆2,200 · Updated 2 months ago
- Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhan… ☆1,573 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- A reading list on LLM based Synthetic Data Generation 🔥 ☆1,515 · Updated 8 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,859 · Updated last week
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,230 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,546 · Updated 2 years ago
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,556 · Updated 3 weeks ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,662 · Updated this week
- A unified evaluation framework for large language models ☆2,773 · Updated 2 weeks ago
- TextGrad: Automatic ''Differentiation'' via Text -- using large language models to backpropagate textual gradients. Published in Nature. ☆3,331 · Updated 6 months ago
- Curated list of datasets and tools for post-training. ☆4,205 · Updated 2 months ago