openai / simple-evals
☆3,764 · Updated last month
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- Tools for merging pretrained large language models. ☆5,937 · Updated 2 weeks ago
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world GitHub Issues? ☆3,115 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,788 · Updated last week
- Democratizing Reinforcement Learning for LLMs ☆3,411 · Updated last month
- PyTorch native post-training library ☆5,296 · Updated this week
- A framework for few-shot evaluation of language models. ☆9,423 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆10,204 · Updated this week
- AllenAI's post-training codebase ☆3,033 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,912 · Updated 10 months ago
- MTEB: Massive Text Embedding Benchmark ☆2,648 · Updated this week
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,639 · Updated 5 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,781 · Updated 6 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,201 · Updated 3 months ago
- A library for advanced large language model reasoning ☆2,159 · Updated 3 weeks ago
- Modeling, training, eval, and inference code for OLMo ☆5,739 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,670 · Updated this week
- TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients. ☆2,707 · Updated 3 months ago
- Search-R1: An Efficient, Scalable RL Training Framework for Reasoning & Search Engine Calling interleaved LLM based on veRL ☆2,705 · Updated 2 weeks ago
- Supercharge Your LLM Application Evaluations 🚀 ☆9,799 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,241 · Updated 2 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,116 · Updated last year
- Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc. ☆9,788 · Updated 2 weeks ago
- LiveBench: A Challenging, Contamination-Free LLM Benchmark ☆807 · Updated last week
- DataComp for Language Models ☆1,318 · Updated 3 months ago
- Simple RL training for reasoning ☆3,650 · Updated 2 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models … ☆2,310 · Updated this week
- Sky-T1: Train your own O1 preview model within $450 ☆3,286 · Updated last month
- SGLang is a fast serving framework for large language models and vision language models. ☆15,567 · Updated this week
- A unified evaluation framework for large language models ☆2,656 · Updated last month
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,434 · Updated last week