peterljq / Parsimonious-Concept-Engineering
PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024)
☆41 · Updated last year
Alternatives and similar repositories for Parsimonious-Concept-Engineering
Users interested in Parsimonious-Concept-Engineering are comparing it to the repositories listed below.
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆150 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆186 · Updated 7 months ago
- ☆52 · Updated 7 months ago
- ☆37 · Updated 11 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆84 · Updated 8 months ago
- ☆51 · Updated last year
- ☆16 · Updated last year
- ☆51 · Updated 2 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆84 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆64 · Updated last year
- [ICLR 2025] General-purpose activation steering library ☆123 · Updated 2 months ago
- ☆110 · Updated 9 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆77 · Updated last year
- ☆41 · Updated 2 years ago
- Lightweight Adapting for Black-Box Large Language Models ☆24 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆63 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆38 · Updated 3 months ago
- NeurIPS'24 - LLM Safety Landscape ☆33 · Updated last month
- Confidence Regulation Neurons in Language Models (NeurIPS 2024) ☆14 · Updated 10 months ago
- ☆29 · Updated last year
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆29 · Updated 11 months ago
- ☆102 · Updated 2 years ago
- Multi-Layer Sparse Autoencoders (ICLR 2025) ☆27 · Updated 9 months ago
- ☆66 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆93 · Updated last year
- Trains Sparse Autoencoders based on outputs from language models ☆11 · Updated last year
- ☆33 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 7 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆57 · Updated last month
- Test-time-training on nearest neighbors for large language models ☆48 · Updated last year