INK-USC / CrossFit
Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835)
Related projects
Alternatives and complementary repositories for CrossFit
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?”
- Code for Editing Factual Knowledge in Language Models
- Code associated with the ACL 2021 DExperts paper
- Automatic metrics for GEM tasks
- DEMix Layers for Modular Language Modeling
- Benchmarking Generalization to New Tasks from Natural Language Instructions
- Official code for the papers "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const…
- Code and models for the paper "Questions Are All You Need to Train a Dense Passage Retriever" (TACL 2023)
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification"
- 🐥 Code and dataset for our EMNLP 2022 paper "ProsocialDialog: A Prosocial Backbone for Conversational Agents"
- Code base for In-Context Learning for Dialogue State Tracking
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”
- Detect hallucinated tokens for conditional sequence generation
- Code for the paper "InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning"
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h…
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding