serre-lab / Lens
LENS Project
⭐ 45 · Updated 11 months ago
Alternatives and similar repositories for Lens:
Users interested in Lens are comparing it to the libraries listed below.
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ⭐ 61 · Updated last year
- Aligning Human & Machine Vision using explainability · ⭐ 48 · Updated last year
- Python package for extracting representations from state-of-the-art computer vision models · ⭐ 159 · Updated 3 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ⭐ 33 · Updated 9 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ⭐ 123 · Updated 8 months ago
- Instructions and examples to deploy some PyTorch code on Slurm using a Singularity container · ⭐ 33 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet · ⭐ 30 · Updated last year
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) · ⭐ 35 · Updated 3 months ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP · ⭐ 209 · Updated 6 months ago
- Official PyTorch implementation of improved B-cos models · ⭐ 45 · Updated 11 months ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Hugging Face Transformers · ⭐ 39 · Updated this week
- Some methods for comparing network representations in deep learning and neuroscience · ⭐ 130 · Updated 6 months ago
- ⭐ 107 · Updated last year
- ⭐ 50 · Updated 3 weeks ago
- Dataset and code for CLEVR-XAI · ⭐ 31 · Updated last year
- Repository for "PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits", accepted at the CVPR 2024 XAI4CV Works… · ⭐ 12 · Updated 8 months ago
- Code for the ICLR 2022 paper "Salient Imagenet: How to discover spurious features in deep learning?" · ⭐ 38 · Updated 2 years ago
- Code for the paper "Post-hoc Concept Bottleneck Models" (Spotlight @ ICLR 2023) · ⭐ 73 · Updated 8 months ago
- Crowdsourcing metrics and test datasets beyond ImageNet (ICML 2022 workshop) · ⭐ 38 · Updated 8 months ago
- Conformal prediction for uncertainty quantification in image segmentation · ⭐ 18 · Updated 2 months ago
- ⭐ 38 · Updated 9 months ago
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs) · ⭐ 200 · Updated this week
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… · ⭐ 27 · Updated this week
- Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" (NeurIPS 2024) · ⭐ 29 · Updated 2 months ago
- A framework for evaluating models on their alignment to brain and behavioral measurements (100+ benchmarks) · ⭐ 136 · Updated this week
- ⭐ 72 · Updated last week
- Code from the paper "Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Scor…" · ⭐ 16 · Updated 2 years ago
- Updated code base for GlanceNets: Interpretable, Leak-proof Concept-based models · ⭐ 25 · Updated last year
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments · ⭐ 73 · Updated 2 years ago