UCSB-NLP-Chang / ULD
Implementation of the paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference' [NeurIPS'24]
☆13 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for ULD
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆33 · Updated this week
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆63 · Updated 9 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆39 · Updated 3 months ago
- Tasks for describing differences between text distributions ☆16 · Updated 3 months ago
- Landing page for TOFU ☆98 · Updated 5 months ago
- Data Valuation on In-Context Examples [ACL 2023] ☆23 · Updated last month
- [ATTRIB @ NeurIPS 2024] When Attention Sink Emerges in Language Models: An Empirical View ☆29 · Updated last month
- Code for "Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality" [EMNLP 2022] ☆30 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 7 months ago
- Official implementation of the paper "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness …" ☆19 · Updated last year
- Codebase for decoding compressed trust ☆20 · Updated 6 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆30 · Updated 6 months ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆26 · Updated 11 months ago
- What do we learn from inverting CLIP models? ☆45 · Updated 8 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆28 · Updated 4 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆84 · Updated 5 months ago
- Official repo of Progressive Data Expansion: data, code, and evaluation ☆27 · Updated last year
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆29 · Updated last month
- Code for the paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" ☆15 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆90 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆19 · Updated 5 months ago