haizelabs / verdict
Inference-time scaling for LLMs-as-a-judge.
☆303 · Updated 3 weeks ago
Alternatives and similar repositories for verdict
Users interested in verdict are comparing it to the libraries listed below.
- A framework for optimizing DSPy programs with RL ☆202 · Updated last week
- ⚖️ Awesome LLM Judges ⚖️ ☆132 · Updated 5 months ago
- Red-Teaming Language Models with DSPy ☆221 · Updated 8 months ago
- Training-Ready RL Environments + Evals ☆128 · Updated this week
- A small library of LLM judges ☆294 · Updated 2 months ago
- Sphynx Hallucination Induction ☆53 · Updated 8 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆450 · Updated last year
- ☆135 · Updated 7 months ago
- Kura is a simple reproduction of the CLIO paper which uses language models to label user behaviour before clustering them based on embedd… ☆336 · Updated last month
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆116 · Updated last week
- ☆159 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆187 · Updated 7 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- A DSPy-based implementation of the tree-of-thoughts method (Yao et al., 2023) for generating persuasive arguments ☆89 · Updated 2 weeks ago
- TapeAgents is a framework that facilitates all stages of the LLM agent development lifecycle ☆298 · Updated last week
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆132 · Updated this week
- Collection of evals for Inspect AI ☆254 · Updated this week
- ☆167 · Updated last week
- Use the OpenAI Batch tool to make async batch requests to the OpenAI API. ☆100 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- OSS RL environment + evals toolkit ☆189 · Updated last week
- A strongly typed Python DSL for developing message-passing multi-agent systems ☆53 · Updated last year
- ☆123 · Updated last year
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆105 · Updated last month
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆291 · Updated last year
- ☆58 · Updated 8 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆291 · Updated 2 weeks ago
- Open source interpretability artefacts for R1. ☆161 · Updated 6 months ago
- Easiest way to give context to LLMs; Attachments has the ambition to be the general funnel for any files to be transformed into images+te… ☆315 · Updated last month
- ☆112 · Updated last week