alan-turing-institute / robots-in-disguise
Information and materials for the Turing's "robots-in-disguise" reading group on fundamental AI research.
☆34 · Updated 3 weeks ago
Alternatives and similar repositories for robots-in-disguise
Users interested in robots-in-disguise are comparing it to the repositories listed below.
- LENS Project ☆51 · Updated last year
- 👋 Overcomplete is a Vision-based SAE Toolbox ☆110 · Updated 3 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆40 · Updated last year
- ☆83 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆117 · Updated 6 months ago
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆327 · Updated 5 months ago
- we got you bro ☆36 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆74 · Updated 2 years ago
- Attribution-based Parameter Decomposition ☆33 · Updated 6 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆131 · Updated 3 years ago
- This repository collects all relevant resources about interpretability in LLMs ☆389 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆303 · Updated last year
- ☆83 · Updated 10 months ago
- Official repository for CMU Machine Learning Department's 10721: "Philosophical Foundations of Machine Intelligence". ☆263 · Updated 2 years ago
- List of ML conferences with important dates and accepted paper list ☆189 · Updated last month
- ☆373 · Updated 4 months ago
- Reliable, minimal and scalable library for pretraining foundation and world models ☆112 · Updated last month
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆738 · Updated last week
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆139 · Updated last year
- Causal Responsibility EXplanations for Image Classifiers and Tabular Data ☆40 · Updated 2 weeks ago
- 🪄 Interpreto is an interpretability toolbox for LLMs ☆91 · Updated last week
- Sparse Autoencoder for Mechanistic Interpretability ☆286 · Updated last year
- This was designed for interp researchers who want to do research on or with interp agents to give quality of life improvements and fix … ☆48 · Updated last week
- ☆76 · Updated 2 years ago
- Fairness toolkit for pytorch, scikit learn and autogluon ☆33 · Updated last month
- ☆27 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- epsilon machines and transformers! ☆34 · Updated 5 months ago
- ☆122 · Updated 3 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆239 · Updated 4 months ago