zjunlp/DART
[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
☆127 · Updated last year
Related projects:
- [NeurIPS'22 Spotlight] Data and code for our paper "CoNT: Contrastive Neural Text Generation" ☆150 · Updated last year
- Official code for "PPT: Pre-trained Prompt Tuning for Few-shot Learning" (ACL 2022) ☆107 · Updated 2 years ago
- Code for the ICLR'22 paper "On Robust Prefix-Tuning for Text Classification" ☆26 · Updated 2 years ago
- Scaling Sentence Embeddings with Large Language Models ☆93 · Updated 5 months ago
- Code for "Editing Factual Knowledge in Language Models" ☆134 · Updated 2 years ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning" ☆91 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆148 · Updated 4 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆59 · Updated last year
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆96 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆157 · Updated 3 years ago
- Paper list for "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆61 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆119 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆249 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆144 · Updated 7 months ago
- Code for the ACL 2021 paper "WARP 🌀: Word-level Adversarial ReProgramming", outperforming GPT-3 on SuperGLUE few-shot text classification. ht… ☆83 · Updated 2 years ago
- Contrastive decoding ☆174 · Updated last year
- [NeurIPS 2022] "Generating Training Data with Language Models: Towards Zero-Shot Language Understanding" ☆63 · Updated 2 years ago
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆119 · Updated last year
- ICML 2022: "Black-Box Tuning for Language-Model-as-a-Service" & EMNLP 2022: "BBTv2: Towards a Gradient-Free Future with Large Language Model…" ☆258 · Updated last year
- Author implementation of "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge" ☆148 · Updated last month
- Dataset for the TACL 2022 paper "FeTaQA: Free-form Table Question Answering" ☆79 · Updated last year