Li-Hyn / LLM_CatastrophicForgettingLinks
Code for LLM_Catastrophic_Forgetting via SAM.
☆11 · Updated last year
Alternatives and similar repositories for LLM_CatastrophicForgetting
Users interested in LLM_CatastrophicForgetting are comparing it to the repositories listed below:
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆31 · Updated last year
- ☆16 · Updated last year
- ☆26 · Updated 2 years ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆28 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Papers on trustworthy applications of models, intended to cover topics such as model evaluation & analysis, security, c… ☆21 · Updated 2 years ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- ☆17 · Updated last year
- ☆44 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 8 months ago
- ☆31 · Updated 11 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆82 · Updated 2 years ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated last year
- ☆47 · Updated 10 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆87 · Updated 2 years ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- ☆88 · Updated 3 years ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆50 · Updated last year
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆51 · Updated 10 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆83 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- Code for “Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning” ☆17 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆53 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- Official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆24 · Updated last year