ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Models
☆271 · Nov 8, 2022 · Updated 3 years ago
Alternatives and similar repositories for Black-Box-Tuning
Users that are interested in Black-Box-Tuning are comparing it to the libraries listed below
Sorting:
- EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆41 · Oct 19, 2022 · Updated 3 years ago
- Paradigm shift in natural language processing ☆42 · May 29, 2022 · Updated 3 years ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Apr 23, 2023 · Updated 2 years ago
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆1,039 · Sep 19, 2024 · Updated last year
- Example code and baseline implementation for Arena Challenge 3: the large-scale pre-trained model tuning competition ☆37 · Sep 27, 2022 · Updated 3 years ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆57 · Sep 7, 2023 · Updated 2 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Oct 7, 2022 · Updated 3 years ago
- Paper List for In-context Learning 🌷 ☆875 · Oct 8, 2024 · Updated last year
- Undergraduate thesis, source code, and related materials ☆15 · Dec 30, 2019 · Updated 6 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Oct 27, 2022 · Updated 3 years ago
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆475 · Mar 7, 2024 · Updated last year
- Must-read papers on prompt-based tuning for pre-trained language models. ☆4,293 · Jul 17, 2023 · Updated 2 years ago
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation ☆152 · May 10, 2023 · Updated 2 years ago
- An (incomplete) overview of information extraction ☆43 · Apr 28, 2022 · Updated 3 years ago
- Accompanying repo for the RLPrompt paper ☆361 · Jun 6, 2024 · Updated last year
- Diffusion-LM ☆1,227 · Aug 8, 2024 · Updated last year
- This repo contains the code for Late Prompt Tuning. ☆12 · Dec 22, 2025 · Updated 2 months ago
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723 ☆730 · Aug 29, 2022 · Updated 3 years ago
- Reading list of instruction tuning. A trend started by Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆766 · Jul 20, 2023 · Updated 2 years ago
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆543 · Mar 24, 2022 · Updated 3 years ago
- PyTorch codes for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" ☆241 · Jan 20, 2023 · Updated 3 years ago
- Paper collection on building and evaluating language model agents via executable language grounding ☆365 · Apr 29, 2024 · Updated last year
- ☆98 · Jun 6, 2022 · Updated 3 years ago
- Paper collections of methods that use language to interact with environments, including the real world, simulated worlds, or the WWW… ☆129 · Jul 26, 2023 · Updated 2 years ago
- [EMNLP 2022] Unifying and multi-tasking structured knowledge grounding with language models ☆568 · Aug 22, 2023 · Updated 2 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆94 · Jun 8, 2022 · Updated 3 years ago
- ☆10 · Sep 27, 2021 · Updated 4 years ago
- Lite Self-Training ☆30 · Jul 25, 2023 · Updated 2 years ago
- [IJCAI 2023] Black-box Prompt Tuning for Vision-Language Model as a Service ☆18 · Sep 18, 2023 · Updated 2 years ago
- Prefix-Tuning: Optimizing Continuous Prompts for Generation ☆958 · Apr 26, 2024 · Updated last year
- Code for the ACL-2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · May 4, 2024 · Updated last year
- ☆177 · Jul 24, 2024 · Updated last year
- MOSS is a conversational language model like ChatGPT. ☆743 · Apr 20, 2023 · Updated 2 years ago
- A pre-trained model with multi-exit transformer architecture. ☆56 · Dec 10, 2022 · Updated 3 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆456 · Sep 6, 2023 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆273 · Apr 15, 2023 · Updated 2 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Apr 18, 2023 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆261 · Nov 30, 2023 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆101 · May 3, 2024 · Updated last year