princeton-nlp / HELMET
The HELMET Benchmark
☆198 · Updated 2 months ago
Alternatives and similar repositories for HELMET
Users interested in HELMET are comparing it to the libraries listed below
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆245 · Updated 4 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆230 · Updated last year
- Reproducible, flexible LLM evaluations ☆331 · Updated last week
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆225 · Updated 7 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 11 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆214 · Updated 2 months ago
- ☆203 · Updated 9 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆245 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆76 · Updated 9 months ago
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆164 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆200 · Updated last month
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Updated 3 months ago
- [ICLR 2025] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- ☆328 · Updated 8 months ago
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆189 · Updated 4 months ago
- [NeurIPS 2024] Official code for "🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving" ☆120 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Updated 3 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆260 · Updated 8 months ago
- Async pipelined version of Verl ☆124 · Updated 9 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 · Updated 6 months ago
- ☆107 · Updated last year
- Repo for the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆238 · Updated 5 months ago
- ☆85 · Updated 2 months ago
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆138 · Updated last month