meta-math / MetaMath
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
☆422 · Updated last year

Alternatives and similar repositories for MetaMath:
Users interested in MetaMath are comparing it to the repositories listed below.
- A recipe for online RLHF and online iterative DPO. ☆500 · Updated 2 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆250 · Updated 6 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆454 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆526 · Updated 3 weeks ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆296 · Updated 6 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆313 · Updated 6 months ago
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆357 · Updated 2 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆405 · Updated 11 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆189 · Updated 11 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆299 · Updated 7 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆597 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆595 · Updated 2 months ago
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- Recipes to train reward models for RLHF. ☆1,250 · Updated last month
- A large-scale, fine-grained, diverse preference dataset (and models). ☆335 · Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆247 · Updated 3 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆542 · Updated 3 months ago
- Official implementation of the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆475 · Updated 2 months ago
- Official repo for the paper "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" ☆312 · Updated 3 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated 10 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024) ☆364 · Updated 7 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆248 · Updated last week
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆334 · Updated last month
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ☆242 · Updated 6 months ago
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆138 · Updated last week
- Official implementation of the paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆291 · Updated 6 months ago