protagolabs / odyssey-math
☆83 · Updated 5 months ago
Alternatives and similar repositories for odyssey-math
Users interested in odyssey-math are comparing it to the repositories listed below.
- ☆95 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆45 · Updated 3 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 10 months ago
- A library for efficient patching and automatic circuit discovery. ☆67 · Updated 2 months ago
- The official repo for "TheoremQA: A Theorem-driven Question Answering dataset" (EMNLP 2023). ☆31 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆24 · Updated last year
- ☆132 · Updated 7 months ago
- ☆98 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024]. ☆138 · Updated 9 months ago
- Replicating O1 inference-time scaling laws. ☆87 · Updated 6 months ago
- Revisiting Mid-training in the Era of RL Scaling. ☆56 · Updated 2 months ago
- ☆97 · Updated 11 months ago
- ☆85 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision. ☆120 · Updated 9 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment". ☆73 · Updated last month
- Code and data for the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold". ☆30 · Updated last year
- ☆180 · Updated 2 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models. ☆57 · Updated 3 months ago
- ☆48 · Updated last month
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- ☆85 · Updated last year
- ☆122 · Updated 11 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024]. ☆144 · Updated 7 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning". ☆158 · Updated last month
- ☆78 · Updated last month
- Language models scale reliably with over-training and on downstream tasks. ☆97 · Updated last year
- Accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories" by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le…. ☆93 · Updated 3 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following. ☆127 · Updated 11 months ago
- ☆34 · Updated last year
- GitHub repo for "Goal Driven Discovery of Distributional Differences via Language Descriptions". ☆70 · Updated 2 years ago