Spico197 / MoE-SFT
🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts
☆38 · Updated 6 months ago
Alternatives and similar repositories for MoE-SFT:
Users interested in MoE-SFT are comparing it to the libraries listed below.
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆59 · Updated 9 months ago
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆39 · Updated 5 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆79 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆46 · Updated 9 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆54 · Updated 8 months ago
- Towards Systematic Measurement for Long Text Quality ☆34 · Updated 7 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆74 · Updated 3 months ago
- Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024) ☆32 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆47 · Updated 3 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 7 months ago
- ☆29 · Updated 3 months ago
- [NeurIPS 2024] Can Language Models Learn to Skip Steps? ☆15 · Updated 2 months ago
- ☆34 · Updated last year
- [EMNLP 2023] ALCUNA: Large Language Models Meet New Knowledge ☆26 · Updated last year
- ☆59 · Updated 7 months ago
- The implementation of paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… ☆39 · Updated 8 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆30 · Updated 2 weeks ago
- Code and data for "Timo: Towards Better Temporal Reasoning for Language Models" (COLM 2024) ☆20 · Updated 5 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆26 · Updated 8 months ago
- Source code for Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts ☆17 · Updated 7 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 11 months ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated last year
- ☆53 · Updated 7 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 8 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆28 · Updated 8 months ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆48 · Updated 2 years ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year