haizelabs / verdict
Inference-time scaling for LLMs-as-a-judge.
☆308 · Updated last week
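For readers new to the idea, here is a minimal sketch of what "inference-time scaling for LLMs-as-a-judge" means in practice: sample a judge model several times and aggregate its verdicts, spending more compute at inference time to get a more reliable judgment. This is a generic illustration only, not verdict's actual API; it assumes an OpenAI-compatible client, and the model name, prompts, and function names are placeholders.

```python
# Generic sketch of inference-time scaling for an LLM judge:
# sample the judge k times, then aggregate verdicts by majority vote.
# NOT verdict's API; model name and prompts are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_once(question: str, answer: str) -> str:
    """Ask the judge model for a single PASS/FAIL verdict."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        temperature=1.0,      # nonzero so repeated samples can differ
        messages=[
            {"role": "system",
             "content": "You are a strict grader. Reply with exactly PASS or FAIL."},
            {"role": "user",
             "content": f"Question: {question}\nAnswer: {answer}\nVerdict:"},
        ],
    )
    return resp.choices[0].message.content.strip().upper()

def judge_scaled(question: str, answer: str, k: int = 5) -> str:
    """Scale judging at inference time: k samples, majority vote."""
    votes = Counter(judge_once(question, answer) for _ in range(k))
    return votes.most_common(1)[0][0]

print(judge_scaled("What is 2 + 2?", "4"))
```

Majority voting is only the simplest aggregator; the same pattern extends to weighted voting, debate, or hierarchical verification, which is the design space this library and several of the repositories below explore.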
Alternatives and similar repositories for verdict
Users interested in verdict are comparing it to the libraries listed below.
- ⚖️ Awesome LLM Judges ⚖️ ☆133 · Updated 6 months ago
- A framework for optimizing DSPy programs with RL ☆273 · Updated this week
- ☆135 · Updated 7 months ago
- Red-Teaming Language Models with DSPy ☆235 · Updated 8 months ago
- ☆159 · Updated 11 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆242 · Updated 3 weeks ago
- Training-Ready RL Environments + Evals ☆164 · Updated this week
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆92 · Updated last month
- Use the OpenAI Batch tool to make async batch requests to the OpenAI API. ☆100 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆450 · Updated last year
- A small library of LLM judges ☆301 · Updated 3 months ago
- Sphynx Hallucination Induction ☆53 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Kura is a simple reproduction of the CLIO paper which uses language models to label user behaviour before clustering them based on embedd… ☆357 · Updated 2 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆299 · Updated last week
- ☆232 · Updated 4 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 3 months ago
- Tutorial for building an LLM router ☆233 · Updated last year
- OSS RL environment + evals toolkit ☆198 · Updated last week
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated last year
- Collection of evals for Inspect AI ☆280 · Updated this week
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆106 · Updated last month
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆119 · Updated 2 weeks ago
- ☆59 · Updated 9 months ago
- ☆68 · Updated 5 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆296 · Updated last year
- ☆114 · Updated 3 weeks ago
- ☆124 · Updated last year
- Uses various instructor clients to evaluate the quality and capabilities of extractions and reasoning. ☆52 · Updated last year