OpenBMB / UltraLink
An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset
☆27 · Updated last year
Alternatives and similar repositories for UltraLink
Users interested in UltraLink are comparing it to the repositories listed below.
- ☆58 · Updated last year
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆50 · Updated 2 years ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated 2 years ago
- ☆55 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- Collection of papers for scalable automated alignment. ☆93 · Updated last year
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆58 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆64 · Updated last year
- Contrastive Chain-of-Thought Prompting ☆68 · Updated 2 years ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆129 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆42 · Updated 2 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆83 · Updated 3 years ago
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks ☆59 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago
- ☆52 · Updated 2 years ago
- First explanation metric (diagnostic report) for text generation evaluation ☆62 · Updated 11 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆119 · Updated 7 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 · Updated last year
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆50 · Updated last year
- This code accompanies the paper DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering. ☆16 · Updated 2 years ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated 2 years ago
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆42 · Updated last year
- ☆88 · Updated 3 years ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆101 · Updated 11 months ago
- ☆23 · Updated 2 years ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆169 · Updated last month