allenai / super-benchmark
☆42 · Updated 2 months ago
Alternatives and similar repositories for super-benchmark
Users interested in super-benchmark are comparing it to the repositories listed below.
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" · ☆56 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆43 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated last year
- ☆49 · Updated 3 weeks ago
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" · ☆54 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆57 · Updated 9 months ago
- ☆24 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆47 · Updated last year
- [arXiv preprint] Official repository for "Evaluating Language Models as Synthetic Data Generators" · ☆33 · Updated 5 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions · ☆44 · Updated 11 months ago
- ☆29 · Updated 10 months ago
- Aioli: A unified optimization framework for language model data mixing · ☆25 · Updated 4 months ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) · ☆36 · Updated 5 months ago
- ☆22 · Updated 5 months ago
- Revisiting Mid-training in the Era of RL Scaling · ☆48 · Updated last month
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location · ☆81 · Updated 9 months ago
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses" · ☆29 · Updated 9 months ago
- Code for "RATIONALYST: Pre-training Process-Supervision for Improving Reasoning" (https://arxiv.org/pdf/2410.01044) · ☆33 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆54 · Updated last year
- Code and data for the NAACL 24 paper "MacGyver: Are Large Language Models Creative Problem Solvers?" · ☆28 · Updated last year
- ☆44 · Updated 9 months ago
- ☆20 · Updated last month
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages · ☆47 · Updated 6 months ago
- Benchmarking Benchmark Leakage in Large Language Models · ☆51 · Updated last year
- ☆27 · Updated this week
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… · ☆75 · Updated 9 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆59 · Updated 3 months ago
- Long Context Extension and Generalization in LLMs · ☆56 · Updated 8 months ago
- Training and Benchmarking LLMs for Code Preference · ☆33 · Updated 6 months ago
- Evaluate the Quality of Critique · ☆35 · Updated last year