tonychenxyz / selfie
This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, Carl Vondrick, and Chengzhi Mao.
☆45 · Updated 2 months ago
Alternatives and similar repositories for selfie:
Users interested in selfie are comparing it to the repositories listed below.
- Function Vectors in Large Language Models (ICLR 2024) ☆140 · Updated 4 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆104 · Updated 11 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 11 months ago
- Code and data repo for the ICLR 2025 paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆24 · Updated 2 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆91 · Updated this week
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- [NAACL'25] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆49 · Updated 3 months ago
- (untitled repository) ☆37 · Updated last year
- (untitled repository) ☆58 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆44 · Updated 3 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated last month
- [NeurIPS 2024] Official implementation of "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆96 · Updated 4 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆48 · Updated 2 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆101 · Updated 11 months ago
- Official code repository for the LM-Steer paper "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆85 · Updated 5 months ago
- A resource repository for representation engineering in large language models ☆107 · Updated 3 months ago
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆48 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆71 · Updated this week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆64 · Updated 3 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆113 · Updated 3 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆118 · Updated 5 months ago
- Implementation of the paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆51 · Updated 3 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆72 · Updated 4 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆28 · Updated 3 months ago
- A Survey on the Honesty of Large Language Models ☆54 · Updated 2 months ago
- An easy-to-use hallucination detection framework for LLMs ☆57 · Updated 10 months ago
- (untitled repository) ☆45 · Updated 4 months ago