casmlab / NPHardEval
Repository for NPHardEval, a quantified-dynamic benchmark of LLMs
☆52 · Updated 11 months ago
Alternatives and similar repositories for NPHardEval:
Users interested in NPHardEval are comparing it to the repositories listed below
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Replicating O1 inference-time scaling laws ☆83 · Updated 3 months ago
- ☆38 · Updated 4 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆26 · Updated 6 months ago
- ☆39 · Updated 7 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated last month
- ☆95 · Updated 8 months ago
- PyTorch implementation for "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆54 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 6 months ago
- ☆87 · Updated 5 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆23 · Updated 3 months ago
- Code for ICML 2024 paper ☆17 · Updated last week
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆103 · Updated last year
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆37 · Updated 3 months ago
- ☆60 · Updated 10 months ago
- ☆45 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆44 · Updated 2 months ago
- ☆34 · Updated 11 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆45 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆47 · Updated 2 weeks ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 6 months ago
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆76 · Updated 2 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 9 months ago
- Code for Adaptive Data Optimization ☆20 · Updated 3 months ago