TIGER-AI-Lab / MAmmoTH
Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024]
☆382 · Updated last year
Alternatives and similar repositories for MAmmoTH
Users interested in MAmmoTH are comparing it to the libraries listed below.
- FireAct: Toward Language Agent Fine-tuning ☆291 · Updated 2 years ago
- SOTA Math Opensource LLM ☆333 · Updated 2 years ago
- ☆320 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆361 · Updated 2 years ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆209 · Updated 2 years ago
- ☆340 · Updated 7 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆268 · Updated last year
- [TMLR] Cumulative Reasoning With Large Language Models (https://arxiv.org/abs/2308.04371) ☆309 · Updated 5 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆303 · Updated last year
- ☆313 · Updated last year
- [NeurIPS D&B 2024] Generative AI for Math: MathPile ☆419 · Updated 9 months ago
- Generative Judge for Evaluating Alignment ☆249 · Updated 2 years ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆482 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆583 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆544 · Updated last year
- a Fine-tuned LLaMA that is Good at Arithmetic Tasks ☆178 · Updated 2 years ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆325 · Updated last year
- ☆321 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆303 · Updated 11 months ago
- ☆167 · Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆264 · Updated 6 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ☆389 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated last year
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆284 · Updated last year
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… ☆353 · Updated 2 years ago
- ☆274 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆260 · Updated 2 years ago
- RewardBench: the first evaluation tool for reward models. ☆683 · Updated last week