peterljq / Parsimonious-Concept-Engineering
PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024)
☆39 · Updated 9 months ago
Alternatives and similar repositories for Parsimonious-Concept-Engineering
Users interested in Parsimonious-Concept-Engineering are comparing it to the repositories listed below.
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- Function Vectors in Large Language Models (ICLR 2024) ☆175 · Updated 3 months ago
- ☆51 · Updated 3 months ago
- ☆15 · Updated last year
- ☆103 · Updated 5 months ago
- ☆47 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 8 months ago
- ☆34 · Updated 6 months ago
- ☆96 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- ☆35 · Updated 7 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- A library for efficient patching and automatic circuit discovery. ☆73 · Updated 2 weeks ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆50 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆113 · Updated last year
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆81 · Updated 9 months ago
- ☆38 · Updated last year
- ☆34 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆74 · Updated 4 months ago
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆114 · Updated 4 months ago
- ☆29 · Updated last year
- Code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆50 · Updated 7 months ago
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆79 · Updated 6 months ago
- ☆60 · Updated 5 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆59 · Updated last week
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 10 months ago
- Answers the question "How to do patching on all available SAEs on GPT-2?". Official repository for the implementation of the p… ☆12 · Updated 6 months ago