amazon-science / street-reasoning
STREET: a multi-task and multi-step reasoning dataset
☆24 · Updated last year
Alternatives and similar repositories for street-reasoning
Users who are interested in street-reasoning are comparing it to the repositories listed below.
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆103 · Updated 2 years ago
- ☆88 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆110 · Updated 2 years ago
- ☆41 · Updated 2 years ago
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆132 · Updated 2 years ago
- [EMNLP-2022 Findings] Code for paper “ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback”. ☆27 · Updated 2 years ago
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- Code for ACL 2023 paper "BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases". ☆21 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆28 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- ☆76 · Updated last year
- ☆177 · Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- Code for the ACL-2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · Updated last year
- ☆55 · Updated last year
- ☆102 · Updated 2 years ago
- ☆17 · Updated last year
- [EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”. ☆48 · Updated 3 years ago
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆48 · Updated 2 years ago
- ☆103 · Updated 2 years ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Updated 2 years ago
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 3 years ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆65 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆62 · Updated last year