microsoft / DiVeRSe
☆38 · Updated 2 years ago
Alternatives and similar repositories for DiVeRSe
Users who are interested in DiVeRSe are comparing it to the repositories listed below.
- ☆62 · Updated 2 years ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated last year
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆65 · Updated 2 years ago
- ☆87 · Updated 2 years ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆101 · Updated 2 years ago
- ACL'23: Unified Demonstration Retriever for In-Context Learning ☆38 · Updated last year
- [EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”. ☆16 · Updated 3 years ago
- ☆41 · Updated last year
- Complexity Based Prompting for Multi-Step Reasoning ☆17 · Updated 2 years ago
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆43 · Updated 2 years ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated last year
- ☆45 · Updated last year
- ☆51 · Updated last year
- ☆33 · Updated 2 years ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 · Updated 2 years ago
- ☆36 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Repository for Decomposed Prompting ☆91 · Updated last year
- ☆28 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning". ☆131 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆40 · Updated 2 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆74 · Updated 2 years ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- Towards Systematic Measurement for Long Text Quality ☆36 · Updated 9 months ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆26 · Updated last year
- Information on NLP PhD applications around the world. ☆37 · Updated 10 months ago
- The project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables" ☆22 · Updated last year
- Code and data for the NeurIPS 2021 paper "A Dataset for Answering Time-Sensitive Questions" ☆72 · Updated 3 years ago