openai / simple-evals
☆3,740 · Updated last month
Alternatives and similar repositories for simple-evals
Users interested in simple-evals are comparing it to the libraries listed below.
- PyTorch native post-training library ☆5,287 · Updated this week
- AllenAI's post-training codebase ☆3,028 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable, and scalable pipelines based on verified research papers ☆2,773 · Updated this week
- Sky-T1: Train your own O1 preview model within $450 ☆3,272 · Updated last month
- ☆4,088 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆5,702 · Updated last week
- Democratizing Reinforcement Learning for LLMs ☆3,396 · Updated last month
- TextGrad: Automatic "Differentiation" via Text - using large language models to backpropagate textual gradients. ☆2,672 · Updated 2 months ago
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. ☆4,622 · Updated last week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆9,958 · Updated this week
- MTEB: Massive Text Embedding Benchmark (usage sketch after this list) ☆2,626 · Updated last week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,897 · Updated 10 months ago
- SWE-bench [Multimodal]: Can Language Models Resolve Real-world GitHub Issues? ☆3,107 · Updated this week
- Search-R1: An efficient, scalable RL training framework for LLMs that interleave reasoning with search-engine calls, based on veRL ☆2,656 · Updated last week
- A library for advanced large language model reasoning ☆2,148 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆5,853 · Updated last week
- Curated list of datasets and tools for post-training. ☆3,175 · Updated 4 months ago
- Robust recipes to align language models with human and AI preferences ☆5,235 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,641 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,052 · Updated 10 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,028 · Updated last month
- Agentless🐱: an agentless approach to automatically solving software development problems ☆1,743 · Updated 6 months ago
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆760 · Updated last week
- SGLang is a fast serving framework for large language models and vision language models. ☆15,421 · Updated this week
- A framework for few-shot evaluation of language models (usage sketch after this list) ☆9,379 · Updated this week
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through Self-Reflection by Akari Asai et al. ☆2,113 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance the model's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss ☆5,191 · Updated 3 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,773 · Updated 6 months ago
- Set of tools to assess and improve LLM security. ☆3,505 · Updated last week
- Everything about the SmolLM2 and SmolVLM family of models ☆2,590 · Updated 2 months ago
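
As a quick orientation to two of the evaluation toolkits above: MTEB is driven from Python by wrapping any embedding model that exposes an `encode` method. A minimal sketch, assuming `mteb` and `sentence-transformers` are installed; the exact class names and arguments have shifted across MTEB versions, and the checkpoint and task below are only illustrative:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any model with an encode() method works; this checkpoint is just an example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Select one or more benchmark tasks by name and run the evaluation;
# per-task scores are written as JSON files under output_folder.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```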
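EleutherAI's lm-evaluation-harness (the few-shot evaluation framework flagged above) is usually driven via its `lm_eval` CLI but also exposes a Python entry point. A minimal sketch, assuming version 0.4+; the model checkpoint and task are illustrative:

```python
import lm_eval

# "hf" selects the Hugging Face backend; model_args is parsed into
# from_pretrained() keyword arguments. Roughly equivalent CLI:
#   lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks hellaswag
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"])  # per-task metrics, e.g. acc and acc_norm
```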