openai / simple-evals
☆3,880 · Updated 3 weeks ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- AllenAI's post-training codebase ☆3,077 · Updated this week
- PyTorch native post-training library ☆5,361 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,753 · Updated last week
- Curated list of datasets and tools for post-training. ☆3,295 · Updated 6 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,821 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,951 · Updated 11 months ago
- A library for advanced large language model reasoning ☆2,190 · Updated last month
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world GitHub Issues? ☆3,222 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,812 · Updated 7 months ago
- A framework for few-shot evaluation of language models (a usage sketch follows this list). ☆9,648 · Updated this week
- Democratizing Reinforcement Learning for LLMs ☆3,910 · Updated last week
- Robust recipes to align language models with human and AI preferences ☆5,280 · Updated this week
- A unified evaluation framework for large language models ☆2,669 · Updated 3 weeks ago
- Tools for merging pretrained large language models. ☆6,092 · Updated 2 weeks ago
- ☆4,087 · Updated last year
- Sky-T1: Train your own O1 preview model within $450 ☆3,313 · Updated 2 weeks ago
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,704 · Updated 6 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,354 · Updated this week
- Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas" ☆1,249 · Updated 5 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,508 · Updated 2 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,461 · Updated 2 years ago
- DataComp for Language Models ☆1,337 · Updated 4 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆815 · Updated last month
- TextGrad: Automatic ''Differentiation'' via Text -- using large language models to backpropagate textual gradients. ☆2,785 · Updated this week
- ☆1,356 · Updated 8 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆832 · Updated last week
- Agentless🐱: an agentless approach to automatically solve software development problems ☆1,831 · Updated 7 months ago
- Large Concept Models: Language modeling in a sentence representation space ☆2,252 · Updated 6 months ago
- Search-R1: an efficient, scalable RL training framework for reasoning and search-engine-calling interleaved LLMs, based on veRL ☆2,920 · Updated 2 weeks ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,300 · Updated 4 months ago
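
Most of the evaluation toolkits above are driven from a small Python API or CLI. As one concrete reference point, here is a minimal sketch of scoring a model with the few-shot evaluation framework listed above (its description matches EleutherAI's lm-evaluation-harness). The backend, checkpoint, and task name are illustrative placeholders, and exact keyword arguments may vary between harness versions.

```python
# Minimal sketch, assuming EleutherAI's lm-evaluation-harness ("lm-eval" on PyPI).
# The model checkpoint and task name below are illustrative placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder checkpoint
    tasks=["hellaswag"],                             # any registered task name
    num_fewshot=0,
    batch_size=8,
)

# Per-task metrics (accuracy, normalized accuracy, stderr, ...) live under "results".
print(results["results"]["hellaswag"])
```

The harness also ships an equivalent `lm_eval` command-line entry point, which is usually the more convenient way to run one-off benchmark comparisons like those cataloged here.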