evandez / relations
How do transformer LMs encode relations?
☆48 · Updated last year
Alternatives and similar repositories for relations:
Users interested in relations are comparing it to the libraries listed below.
- ☆29 · Updated this week
- A library for efficient patching and automatic circuit discovery. ☆64 · Updated 2 weeks ago
- ☆82 · Updated 8 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- ☆93 · Updated last year
- Full code for the sparse probing paper. ☆56 · Updated last year
- ☆91 · Updated 2 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆73 · Updated 5 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆46 · Updated 7 months ago
- ☆111 · Updated 5 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆31 · Updated 11 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆147 · Updated 11 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆54 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆163 · Updated 2 weeks ago
- ☆92 · Updated 2 months ago
- ☆114 · Updated 9 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆73 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆76 · Updated last month
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆42 · Updated 5 months ago
- Algebraic value editing in pretrained language models ☆64 · Updated last year
- ☆221 · Updated 7 months ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆113 · Updated this week
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆144 · Updated 6 months ago
- Sparse Autoencoder Training Library ☆49 · Updated this week
- Code repository for the paper "Mission: Impossible Language Models." ☆52 · Updated 2 weeks ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆32 · Updated 5 months ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces. ☆91 · Updated last year
- ☆52 · Updated 3 weeks ago