shizhediao / Black-Box-Prompt-Learning
Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models"
☆57 · Updated 2 years ago
Alternatives and similar repositories for Black-Box-Prompt-Learning
Users interested in Black-Box-Prompt-Learning are comparing it to the libraries listed below.
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 7 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Updated last year
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆26 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆65 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated last year
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆52 · Updated 7 months ago
- This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆23 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆31 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆20 · Updated last year
- Codebase for Decoding Compressed Trust ☆25 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆85 · Updated last year
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆37 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated 2 years ago
- Test-time-training on nearest neighbors for large language models ☆49 · Updated last year