NJUNLP / MAPO
The implementation of the ACL 2024 paper "MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization"
☆43 · Updated last year
Alternatives and similar repositories for MAPO
Users that are interested in MAPO are comparing it to the libraries listed below
- ☆78 · Updated last year
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆168 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- ☆22 · Updated 2 years ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆59 · Updated last year
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- ☆72 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated 2 years ago
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning ☆132 · Updated 2 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated 2 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Updated 4 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated last year
- ☆30 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆83 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago
- [2025 TMLR] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs) ☆59 · Updated last year
- ☆48 · Updated 2 years ago
- ☆64 · Updated 3 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- ☆88 · Updated 3 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 · Updated last year
- Source code for Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts ☆17 · Updated last year