GAIR-NLP / OlympicArena
[NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI
☆96 · Updated last week
Alternatives and similar repositories for OlympicArena:
Users interested in OlympicArena are comparing it to the repositories listed below.
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 9 months ago
- The official repository of the Omni-MATH benchmark. ☆74 · Updated 2 months ago
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆35 · Updated 8 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆53 · Updated 3 months ago
- Reformatted Alignment ☆114 · Updated 5 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- ☆59 · Updated 6 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 5 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 4 months ago
- Code for ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆52 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 2 months ago
- Evaluate the Quality of Critique ☆35 · Updated 9 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…" ☆46 · Updated 8 months ago
- ☆54 · Updated 5 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆79 · Updated 7 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆70 · Updated 3 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆163 · Updated last week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆29 · Updated 11 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆44 · Updated 3 months ago
- The official repo for "TheoremQA: A Theorem-driven Question Answering dataset" (EMNLP 2023) ☆28 · Updated 10 months ago