peterljq / Parsimonious-Concept-Engineering
Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024)
☆26 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for Parsimonious-Concept-Engineering
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆45 · Updated 7 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆75 · Updated last month
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆24 · Updated 3 weeks ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆84 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆119 · Updated last month
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆63 · Updated 10 months ago
- The Paper List on Data Contamination for Large Language Model Evaluation ☆75 · Updated this week
- A resource repository for representation engineering in large language models ☆54 · Updated this week
- AI Logging for Interpretability and Explainability 🔬 ☆89 · Updated 5 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆16 · Updated 4 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆58 · Updated 8 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆54 · Updated 2 weeks ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆97 · Updated 7 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆84 · Updated 7 months ago
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆26 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆63 · Updated last month
- Codebase for Instruction Following without Instruction Tuning ☆31 · Updated last month
- Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆25 · Updated 5 months ago
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated 5 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- Official code repository for the LM-Steer paper "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆62 · Updated last month
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆33 · Updated this week