openai / simple-evals
☆2,798 · Updated last week
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,700 · Updated this week
- AllenAI's post-training codebase ☆2,950 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,173 · Updated 2 weeks ago
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world GitHub Issues? ☆2,911 · Updated last week
- Tools for merging pretrained large language models. ☆5,646 · Updated last week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆703 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,516 · Updated last week
- TextGrad: Automatic "Differentiation" via Text, using large language models to backpropagate textual gradients. ☆2,524 · Updated last month
- A library for advanced large language model reasoning ☆2,122 · Updated last month
- PyTorch native post-training library ☆5,171 · Updated last week
- A framework for few-shot evaluation of language models. ☆8,904 · Updated last week
- [ICLR 2025] Automated Design of Agentic Systems ☆1,286 · Updated 3 months ago
- DataComp for Language Models ☆1,295 · Updated last month
- ☆4,079 · Updated 11 months ago
- A unified evaluation framework for large language models ☆2,609 · Updated 2 weeks ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,152 · Updated last year
- A framework for prompt tuning using Intent-based Prompt Calibration ☆2,507 · Updated last month
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,740 · Updated 4 months ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆732 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,972 · Updated last week
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,214 · Updated last week
- A getting-started book for the Phi family of SLMs. Phi is a family of open-source AI models developed by Microsoft. Phi… ☆3,277 · Updated this week
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆1,847 · Updated last week
- An Open Large Reasoning Model for Real-World Solutions ☆1,488 · Updated 2 months ago
- A PyTorch native platform for training generative AI models ☆3,808 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,377 · Updated last week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆1,854 · Updated 9 months ago
- Recipes to scale inference-time compute of open models ☆1,071 · Updated last week
- A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques. ☆6,726 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,870 · Updated this week