QingruZhang / PASTA
PASTA: Post-hoc Attention Steering for LLMs
☆109 · Updated last month
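For context when comparing the repositories below: PASTA steers a model's behavior at inference time by reweighting attention. For a chosen subset of heads, the attention paid to tokens outside a user-emphasized span is scaled down by a small coefficient and each row is renormalized, so highlighted text gets relatively more weight without any fine-tuning. Here is a minimal sketch of that reweighting step, assuming post-softmax attention weights and a boolean emphasis mask; the function name, tensor shapes, and the `alpha` default are illustrative assumptions, not PASTA's actual API:

```python
import torch

def steer_attention(attn_weights: torch.Tensor,
                    emphasized: torch.Tensor,
                    alpha: float = 0.01) -> torch.Tensor:
    """Downweight attention to non-emphasized tokens, then renormalize.

    Hypothetical sketch, not PASTA's real interface.
    attn_weights: (batch, heads, query_len, key_len) post-softmax attention,
                  already restricted to the heads chosen for steering.
    emphasized:   (batch, key_len) boolean mask, True at user-highlighted tokens.
    alpha:        scaling coefficient applied outside the emphasized span.
    """
    # 1.0 where the token is emphasized, alpha elsewhere.
    scale = emphasized.to(attn_weights.dtype) * (1.0 - alpha) + alpha
    steered = attn_weights * scale[:, None, None, :]  # broadcast over heads and queries
    # Renormalize so each query's attention still sums to 1.
    return steered / steered.sum(dim=-1, keepdim=True)
```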
Alternatives and similar repositories for PASTA:
Users interested in PASTA are comparing it to the repositories listed below.
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆97 · Updated 6 months ago
- ☆56 · Updated 4 months ago
- Self-Alignment with Principle-Following Reward Models ☆150 · Updated 10 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆111 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆102 · Updated 9 months ago
- ☆159 · Updated 11 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 3 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- Code and data accompanying our arXiv paper "Faithful Chain-of-Thought Reasoning". ☆157 · Updated 8 months ago
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆39 · Updated last month
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆42 · Updated last year
- Lightweight tool to identify Data Contamination in LLM evaluation ☆45 · Updated 10 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆64 · Updated 9 months ago
- ☆43 · Updated 5 months ago
- ☆44 · Updated 4 months ago
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆154 · Updated 3 months ago
- ☆115 · Updated 3 months ago
- ☆36 · Updated last year
- Official Code Repository for the LM-Steer paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆73 · Updated 3 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆131 · Updated 3 months ago
- ☆93 · Updated last year
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆68 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆111 · Updated 4 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆106 · Updated 6 months ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆95 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- Directional Preference Alignment ☆54 · Updated 3 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆51 · Updated 9 months ago