evandez / relations
How do transformer LMs encode relations?
☆52 · Updated last year
Alternatives and similar repositories for relations
Users interested in relations are comparing it to the libraries listed below.
- ☆96 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆76 · Updated last month
- ☆47 · Updated last month
- [ICLR 2025] General-purpose activation steering library ☆94 · Updated 3 weeks ago
- ☆103 · Updated 6 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆176 · Updated 4 months ago
- Full code for the sparse probing paper. ☆59 · Updated last year
- ☆162 · Updated 9 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆53 · Updated 10 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆40 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆206 · Updated last week
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- ☆111 · Updated last month
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- ☆90 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆174 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- ☆81 · Updated 6 months ago
- ☆184 · Updated last month
- ☆105 · Updated 6 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆96 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆122 · Updated 6 months ago
- ☆122 · Updated last year
- ☆237 · Updated 10 months ago
- Unified access to Large Language Model modules using NNsight ☆38 · Updated last month
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆181 · Updated 6 months ago
- Answers the question "How to do patching on all available SAEs on GPT-2?"; the official repository for the implementation of the p… ☆12 · Updated 6 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- ☆126 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 3 months ago