maszhongming / ParaKnowTransfer
Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective"
☆32 · Updated 9 months ago
Alternatives and similar repositories for ParaKnowTransfer:
Users interested in ParaKnowTransfer are comparing it to the repositories listed below
- ☆25 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆21 · Updated 2 months ago
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated 8 months ago
- Data Valuation on In-Context Examples (ACL23) ☆23 · Updated last month
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- AbstainQA, ACL 2024 ☆25 · Updated 4 months ago
- [ICML 2023] Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning ☆40 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆39 · Updated 3 months ago
- Evaluate the Quality of Critique ☆35 · Updated 8 months ago
- ☆15 · Updated 6 months ago
- ☆20 · Updated 7 months ago
- [EMNLP-2022 Findings] Code for the paper "ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback" ☆26 · Updated 2 years ago
- ☆33 · Updated 10 months ago
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆17 · Updated last week
- ✨ Resolving Knowledge Conflicts in Large Language Models, COLM 2024 ☆15 · Updated 4 months ago
- Models, data, and code for the paper "MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models" ☆18 · Updated 4 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆32 · Updated 2 years ago
- Resources for "Retrieval Augmentation for Commonsense Reasoning: A Unified Approach", EMNLP 2022 ☆21 · Updated 2 years ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 11 months ago
- Directional Preference Alignment ☆56 · Updated 4 months ago
- ☆40 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆39 · Updated last year
- ☆9 · Updated 9 months ago
- ☆13 · Updated 11 months ago
- ☆33 · Updated last year
- Unofficial implementation of Chain-of-Thought Reasoning Without Prompting ☆27 · Updated 11 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆42 · Updated 6 months ago