shizhediao / Black-Box-Prompt-Learning
Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models"
☆56 · Updated 2 years ago
Alternatives and similar repositories for Black-Box-Prompt-Learning
Users interested in Black-Box-Prompt-Learning are comparing it to the repositories listed below.
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 5 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 7 months ago
- Codebase for decoding compressed trust. ☆25 · Updated last year
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆37 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Restore safety in fine-tuned language models through task arithmetic ☆29 · Updated last year
- ☆42 · Updated 2 years ago
- An implementation of online data mixing for the Pile dataset, based on the GPT-NeoX library. ☆13 · Updated last year
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆39 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆46 · Updated last year
- ☆43 · Updated 2 years ago
- ☆51 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆99 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆63 · Updated 11 months ago
- ☆41 · Updated last year
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆25 · Updated last year
- This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆22 · Updated last year
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆80 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆19 · Updated last year
- ☆23 · Updated 2 years ago
- ☆29 · Updated last year
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated 2 years ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆40 · Updated 2 years ago
- [ICLR 2025] Code & Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- ☆41 · Updated 2 years ago
- ☆30 · Updated last year
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆76 · Updated 3 weeks ago