serre-lab / LensLinks
LENS Project
⭐48 · Updated last year
Alternatives and similar repositories for Lens
Users interested in Lens are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox ⭐71 · Updated this week
- Aligning Human & Machine Vision using explainability ⭐52 · Updated 2 years ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ⭐66 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ⭐131 · Updated last year
- ⭐14 · Updated 3 months ago
- Python package for extracting representations from state-of-the-art computer vision models ⭐168 · Updated this week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ⭐37 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ⭐229 · Updated this week
- Dataset and code for the CLEVR-XAI dataset. ⭐31 · Updated last year
- Code from the paper "Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Scor… ⭐16 · Updated 3 years ago
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ⭐289 · Updated last week
- ⭐16 · Updated last week
- Some methods for comparing network representations in deep learning and neuroscience. ⭐139 · Updated last year
- Temporal Neural Networks ⭐15 · Updated last week
- Instructions and examples to deploy some PyTorch code on Slurm using a Singularity container ⭐33 · Updated 2 years ago
- A framework for evaluating models on their alignment to brain and behavioral measurements (100+ benchmarks) ⭐155 · Updated this week
- ⭐22 · Updated last year
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… ⭐18 · Updated last year
- Influenciae is a TensorFlow toolbox for influence functions ⭐63 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ⭐64 · Updated 2 months ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ⭐78 · Updated 2 years ago
- Official PyTorch implementation of improved B-cos models ⭐51 · Updated last year
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" ⭐33 · Updated 2 months ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within Hugging Face Transformers models ⭐41 · Updated 5 months ago
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ⭐177 · Updated 3 weeks ago
- Spurious Features Everywhere: Large-Scale Detection of Harmful Spurious Features in ImageNet ⭐32 · Updated last year
- Large-scale uncertainty benchmark in deep learning. ⭐61 · Updated 2 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) ⭐47 · Updated 9 months ago
- Uncertainty-aware representation learning (URL) benchmark ⭐105 · Updated 4 months ago
- Topographic Deep Artificial Neural Networks ⭐52 · Updated 9 months ago