NoviScl / GPT3-Reliability
☆78 · Updated 2 years ago
Alternatives and similar repositories for GPT3-Reliability:
Users that are interested in GPT3-Reliability are comparing it to the libraries listed below
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated 2 years ago
- ☆44 · Updated 8 months ago
- ☆85 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆100 · Updated 2 years ago
- ☆53 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆75 · Updated last year
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆109 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- ☆62 · Updated last year
- TBC ☆27 · Updated 2 years ago
- This is AlpaGasus2-QLoRA based on LLaMA2 with AlpaGasus mechanism using QLoRA! ☆15 · Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated 2 years ago
- ☆34 · Updated 3 years ago
- ☆54 · Updated last year
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆35 · Updated 8 months ago
- ☆49 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- ☆174 · Updated 9 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- ☆41 · Updated last year
- ☆51 · Updated 4 months ago
- Code for preprint: Summarizing Differences between Text Distributions with Natural Language ☆42 · Updated 2 years ago