Thartvigsen / GRACE
[NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
☆81 · Updated 9 months ago
Alternatives and similar repositories for GRACE
Users interested in GRACE are comparing it to the libraries listed below.
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 2 years ago
- ☆29 · Updated last year
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆45 · Updated 4 months ago
- ☆97 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- ☆99 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- ☆56 · Updated 2 years ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant sentences to problem descriptions ☆62 · Updated 2 years ago
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 10 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆80 · Updated 6 months ago
- ☆41 · Updated last year
- ☆45 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 4 months ago
- ☆25 · Updated 3 months ago
- ☆51 · Updated last year
- AI Logging for Interpretability and Explainability🔬 ☆128 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation ☆100 · Updated last month
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆41 · Updated 8 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. ☆97 · Updated 4 years ago
- ☆177 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- Code for the ACL-2022 paper "Knowledge Neurons in Pretrained Transformers" ☆172 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year