princeton-pli / hal-harness
☆55 · Updated this week
Alternatives and similar repositories for hal-harness:
Users who are interested in hal-harness are comparing it to the libraries listed below.
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆103 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆166 · Updated 3 weeks ago
- Replicating O1 inference-time scaling laws ☆83 · Updated 4 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆142 · Updated last month
- Can Language Models Solve Olympiad Programming? ☆112 · Updated 2 months ago
- SWE Arena ☆28 · Updated this week
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆80 · Updated last week
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆65 · Updated 4 months ago
- ☆111 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆84 · Updated 6 months ago
- ☆96 · Updated 9 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆44 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆73 · Updated last year
- ☆40 · Updated last month
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆98 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆67 · Updated 9 months ago
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 9 months ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆111 · Updated 9 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆82 · Updated this week
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆36 · Updated 3 months ago
- ☆130 · Updated 4 months ago
- Collection of evals for Inspect AI ☆101 · Updated this week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆62 · Updated this week
- ☆38 · Updated 4 months ago
- A benchmark list for the evaluation of large language models. ☆91 · Updated 3 weeks ago
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆130 · Updated 4 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆91 · Updated last month
- A repository for transformer critique learning and generation ☆89 · Updated last year