HKUNLP / ZeroGen
[EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”.
☆16 · Updated 3 years ago
Alternatives and similar repositories for ZeroGen
Users interested in ZeroGen are comparing it to the repositories listed below
- [EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”. ☆48 · Updated 3 years ago
- Resources for Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. EMNLP 2022. ☆22 · Updated 2 years ago
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆67 · Updated 2 years ago
- Code for AESOP: Paraphrase Generation with Adaptive Syntactic Control (EMNLP 2021) ☆26 · Updated 3 years ago
- EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆41 · Updated 2 years ago
- The code implementation of the EMNLP 2022 paper: DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Generation ☆26 · Updated last year
- ☆33 · Updated 2 years ago
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation ☆24 · Updated 3 years ago
- Dataset for Unified Editing, EMNLP 2023. This is a model editing dataset where edits are natural language phrases. ☆23 · Updated 11 months ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆65 · Updated 2 years ago
- ☆88 · Updated 2 years ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆15 · Updated 2 years ago
- Code for embedding and retrieval research. ☆17 · Updated last year
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- [EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning" ☆35 · Updated 2 years ago
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- Code base of In-Context Learning for Dialogue State tracking ☆45 · Updated last year
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆76 · Updated 2 years ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- ☆41 · Updated last year
- The project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables" ☆22 · Updated last year
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- ☆45 · Updated last year
- ☆82 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Code and Data for NeurIPS2021 Paper "A Dataset for Answering Time-Sensitive Questions" ☆72 · Updated 3 years ago
- ☆36 · Updated last year
- ☆35 · Updated 3 years ago