NoviScl / GPT3-Reliability
☆78 · Updated 2 years ago
Alternatives and similar repositories for GPT3-Reliability
Users interested in GPT3-Reliability are comparing it to the repositories listed below.
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" ☆102 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆111 · Updated 2 years ago
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant sentences to the problem descriptions ☆62 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆112 · Updated 3 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆41 · Updated 2 years ago
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 2 years ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆133 · Updated 2 years ago
- Code for "Editing Factual Knowledge in Language Models" ☆141 · Updated 3 years ago
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" ☆48 · Updated 3 years ago
- The official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies" ☆80 · Updated 2 years ago
- Official code for the papers "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const… ☆63 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆69 · Updated 3 years ago