fc2869 / lo-fit
LoFiT: Localized Fine-tuning on LLM Representations
☆30 · Updated this week
Alternatives and similar repositories for lo-fit:
Users interested in lo-fit are comparing it to the libraries listed below.
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆39 · Updated last month
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆105 · Updated 4 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆29 · Updated 8 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆53 · Updated 9 months ago
- AbstainQA, ACL 2024 ☆25 · Updated 3 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆102 · Updated 9 months ago
- ☆71 · Updated 7 months ago
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆73 · Updated 3 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆64 · Updated 9 months ago
- ☆44 · Updated 4 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆106 · Updated 6 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆86 · Updated last week
- BeHonest: Benchmarking Honesty in Large Language Models ☆30 · Updated 5 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆71 · Updated 3 weeks ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆57 · Updated last year
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆26 · Updated 2 months ago
- Repo for paper: Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge ☆12 · Updated 10 months ago
- Official repository for ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆37 · Updated 7 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 9 months ago
- ☆29 · Updated 8 months ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 6 months ago
- ☆34 · Updated 2 months ago
- ☆47 · Updated 9 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆97 · Updated 7 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆61 · Updated 2 months ago
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations" ☆31 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆50 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant sentences in problem descriptions ☆58 · Updated last year
- ☆61 · Updated last year