allenai / WildBench
Benchmarking LLMs with Challenging Tasks from Real Users
☆245 Updated last year
Alternatives and similar repositories for WildBench
Users interested in WildBench are comparing it to the repositories listed below.
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 Updated 11 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 Updated 7 months ago
- A simple unified framework for evaluating LLMs ☆261 Updated 9 months ago
- ☆107 Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆255 Updated last year
- ☆203 Updated 9 months ago
- ☆130 Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆114Updated this week
- ☆140 Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆216 Updated 2 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 Updated 6 months ago
- Reproducible, flexible LLM evaluations ☆337 Updated last week
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆146 Updated last year
- The HELMET Benchmark ☆198 Updated 2 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 Updated 8 months ago
- Complex Function Calling Benchmark. ☆163 Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆205 Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 Updated 4 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Length (ICLR 2024) ☆205 Updated last year
- Evaluating LLMs with fewer examples ☆169 Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆169 Updated last month
- ☆313 Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆134 Updated last year
- Critique-out-Loud Reward Models ☆73 Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 Updated 2 years ago
- ☆80 Updated 10 months ago
- ☆123 Updated 11 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 Updated last year