openai / simple-evals
☆3,967 · Updated 2 weeks ago
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,973 · Updated last year
- AllenAI's post-training codebase ☆3,116 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,849 · Updated last week
- PyTorch native post-training library ☆5,418 · Updated this week
- A unified evaluation framework for large language models ☆2,691 · Updated last week
- A library for advanced large language model reasoning ☆2,203 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,820 · Updated this week
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a… ☆1,540 · Updated 7 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,418 · Updated this week
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,735 · Updated 6 months ago
- Robust recipes to align language models with human and AI preferences ☆5,322 · Updated 3 weeks ago
- A framework for few-shot evaluation of language models. ☆9,860 · Updated this week
- ☆4,086 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,554 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆5,895 · Updated last week
- Curated list of datasets and tools for post-training. ☆3,365 · Updated 3 weeks ago
- [EMNLP'23, ACL'24] Compresses the prompt and KV-Cache to speed up LLM inference and enhance LLMs' perception of key information, which ach… ☆5,341 · Updated 5 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,835 · Updated last week
- Synthetic data curation for post-training and structured data extraction ☆1,477 · Updated 3 weeks ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,296 · Updated this week
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling ☆1,732 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,515 · Updated 2 months ago
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆2,069 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆848 · Updated this week
- Tools for merging pretrained large language models. ☆6,195 · Updated this week
- TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients. ☆2,832 · Updated 3 weeks ago
- DataComp for Language Models ☆1,351 · Updated 2 weeks ago
- SWE-bench [Multimodal]: Can Language Models Resolve Real-World GitHub Issues? ☆3,351 · Updated this week
- MTEB: Massive Text Embedding Benchmark ☆2,775 · Updated this week
- ☆3,003 · Updated 11 months ago