meta-math / MetaMath
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
☆433 · Updated last year
Alternatives and similar repositories for MetaMath:
Users interested in MetaMath are comparing it to the repositories listed below.
- A recipe for online RLHF and online iterative DPO. ☆508 · Updated 4 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆261 · Updated 7 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆607 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆370 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆303 · Updated 8 months ago
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆257 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆484 · Updated 3 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆205 · Updated 2 years ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆407 · Updated 6 months ago
- Recipes to train reward models for RLHF. ☆1,322 · Updated 2 weeks ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆353 · Updated 8 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆337 · Updated last year
- [ACL 2024] Official GitHub repo for OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems ☆145 · Updated 9 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆306 · Updated 9 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆323 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆562 · Updated this week
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts"☆343Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs.☆409Updated last year
- Controllable Text Generation for Large Language Models: A Survey☆172Updated 8 months ago
- SOTA open-source math LLM ☆331 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆891 · Updated 2 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆522 · Updated 3 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆442 · Updated 6 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆395 · Updated 11 months ago
- Official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆207 · Updated last year