multimodal-interpretability / FIND
Official implementation of FIND (NeurIPS '23): Function Interpretation Benchmark and Automated Interpretability Agents
☆52 · Updated last year
Alternatives and similar repositories for FIND
Users interested in FIND are comparing it to the libraries listed below.
- Sparse and discrete interpretability tool for neural networks ☆65 · Updated last year
- PyTorch library for Active Fine-Tuning ☆95 · Updated 3 months ago
- Repo for "When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment" ☆38 · Updated 2 years ago
- ☆112 · Updated 10 months ago
- A mechanistic approach for understanding and detecting factual errors of large language models ☆49 · Updated last year
- ☆29 · Updated 2 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆70 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL '22 ☆66 · Updated 3 years ago
- 👻 Code and benchmark for our EMNLP 2023 paper "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions" ☆57 · Updated last year
- Implementation of 🌻 Mirasol, a SOTA multimodal autoregressive model from Google DeepMind, in PyTorch ☆91 · Updated 2 years ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- ☆107 · Updated last year
- ☆100 · Updated last year
- ☆56 · Updated 2 years ago
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Official repository for our paper "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode…" ☆20 · Updated last year
- ☆45 · Updated 2 years ago
- ☆69 · Updated last year
- Emergent world representations: exploring a sequence model trained on a synthetic task ☆197 · Updated 2 years ago
- ☆144 · Updated 5 months ago
- Yet another random morning idea to be quickly tried, with the architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- ☆23 · Updated 11 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆85 · Updated last year
- ☆33 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆189 · Updated 8 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- ☆36 · Updated 2 years ago