GXimingLu / Quark
☆75 · Updated 2 years ago
Alternatives and similar repositories for Quark
Users interested in Quark are comparing it to the repositories listed below
- ☆88 · Updated 3 years ago
- ☆177 · Updated last year
- Repository for the ACL 2022 paper "Mix and Match: Learning-free Controllable Text Generation using Energy Language Models" ☆45 · Updated 3 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆113 · Updated 3 years ago
- Official code for the papers "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Const…" ☆65 · Updated last year
- ☆113 · Updated 3 years ago
- Code for "Editing Factual Knowledge in Language Models" ☆142 · Updated 3 years ago
- Code associated with the ACL 2021 DExperts paper ☆118 · Updated 2 years ago
- ☆101 · Updated 3 years ago
- ☆83 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago
- ☆47 · Updated last year
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆104 · Updated 3 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- ☆44 · Updated last year
- Code accompanying the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering" ☆16 · Updated 2 years ago
- TBC ☆28 · Updated 3 years ago
- ☆28 · Updated last year
- [EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning" ☆35 · Updated 2 years ago
- [ICML 2023] Code for the paper "Compositional Exemplars for In-context Learning" ☆102 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- ☆64 · Updated 3 years ago
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" (https://arxiv.org/abs/2210.14975) ☆38 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆272 · Updated 2 years ago
- [NeurIPS'22 Spotlight] Data and code for the paper "CoNT: Contrastive Neural Text Generation" ☆153 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆167 · Updated 4 years ago
- [ICLR 2023] Code for the paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆109 · Updated 2 years ago
- ☆79 · Updated 2 years ago
- ☆50 · Updated 2 years ago