NoviScl / GPT3-Reliability
☆78 · Updated 2 years ago
Alternatives and similar repositories for GPT3-Reliability
Users that are interested in GPT3-Reliability are comparing it to the libraries listed below
- ☆85 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) · ☆100 · Updated 2 years ago
- ☆44 · Updated 9 months ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" · ☆100 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" · ☆76 · Updated 2 years ago
- Supporting code for the ReCEval paper · ☆28 · Updated 8 months ago
- Restore safety in fine-tuned language models through task arithmetic · ☆28 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) · ☆59 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model · ☆68 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" · ☆109 · Updated last year
- ☆61 · Updated 2 years ago
- ☆17 · Updated last year
- ☆75 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models · ☆116 · Updated last year
- ☆41 · Updated last year
- ☆62 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" · ☆40 · Updated 2 years ago
- ☆54 · Updated 2 weeks ago
- ☆27 · Updated 2 years ago
- Code for the preprint "Summarizing Differences between Text Distributions with Natural Language" · ☆42 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling · ☆53 · Updated 3 years ago
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" · ☆55 · Updated 2 years ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs · ☆36 · Updated last year
- ☆44 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models · ☆55 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" · ☆38 · Updated 2 years ago
- ☆34 · Updated 3 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… · ☆91 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- Implementation of ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" · ☆130 · Updated last year