evandez / relations
How do transformer LMs encode relations?
☆55 · Updated last year
Alternatives and similar repositories for relations
Users interested in relations are comparing it to the libraries listed below.
- A library for efficient patching and automatic circuit discovery. ☆88 · Updated last month
- ☆103 · Updated 2 years ago
- ☆83 · Updated 11 months ago
- Sparse probing paper full code. ☆66 · Updated 2 years ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 3 months ago
- ☆99 · Updated last year
- ☆137 · Updated last year
- [ICLR 2025] General-purpose activation steering library ☆138 · Updated 4 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆45 · Updated last year
- ☆71 · Updated 6 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 9 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. ☆99 · Updated 4 years ago
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆154 · Updated 4 months ago
- ☆195 · Updated last year
- ☆115 · Updated 11 months ago
- ☆267 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods. ☆163 · Updated 7 months ago
- ☆112 · Updated 11 months ago
- Steering Llama 2 with Contrastive Activation Addition. ☆207 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆238 · Updated last week
- Algebraic value editing in pretrained language models. ☆67 · Updated 2 years ago
- Provides the answer to "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆12 · Updated last year
- Performant framework for training, analyzing and visualizing Sparse Autoencoders (SAEs) and their frontier variants. ☆177 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face. ☆140 · Updated 11 months ago
- ☆205 · Updated 3 months ago
- ☆32 · Updated 11 months ago
- ☆68 · Updated 2 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- ☆117 · Updated last year