meta-math / MetaMath
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
☆403 · Updated 11 months ago

Alternatives and similar repositories for MetaMath:
Users interested in MetaMath are comparing it to the repositories listed below.
- A recipe for online RLHF and online iterative DPO. ☆464 · Updated last month
- ☆304 · Updated last week
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆239 · Updated 4 months ago
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context ☆450 · Updated 10 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆335 · Updated last week
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆267 · Updated 2 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆388 · Updated 9 months ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- The official implementation of Self-Play Preference Optimization (SPPO) ☆471 · Updated last week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆154 · Updated 9 months ago
- ☆301 · Updated 4 months ago
- Recipes to train reward models for RLHF. ☆1,122 · Updated last week
- A series of technical reports on Slow Thinking with LLMs ☆359 · Updated this week
- A large-scale, fine-grained, diverse preference dataset (and models). ☆327 · Updated last year
- The repository containing the source code for Self-Evaluation Guided MCTS for online DPO. ☆284 · Updated 5 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆805 · Updated 2 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆254 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆531 · Updated last month
- ☆252 · Updated 6 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆105 · Updated 6 months ago
- Generative Judge for Evaluating Alignment ☆224 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆494 · Updated this week
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆209 · Updated 3 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024) ☆358 · Updated 5 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆327 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆388 · Updated 8 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆257 · Updated 8 months ago
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆134 · Updated 3 months ago
- Official implementation of the paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆288 · Updated 4 months ago
- FireAct: Toward Language Agent Fine-tuning ☆261 · Updated last year