serre-lab / Horama
★10 · Updated 4 months ago
Alternatives and similar repositories for Horama:
Users interested in Horama are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox (★42, updated this week)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (★62, updated last year)
- Repository for "PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits", accepted at the CVPR 2024 XAI4CV Workshop (★14, updated 9 months ago)
- LENS Project (★47, updated last year)
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024) (★38, updated 4 months ago)
- (no description) (★12, updated 2 years ago)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers (★56, updated 3 weeks ago)
- Influenciae is a TensorFlow Toolbox for Influence Functions (★61, updated 11 months ago)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers; for the TensorFlow/Keras implementation, see ht… (★29, updated last month)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (★34, updated 11 months ago)
- A Continual Learning Library in PyTorch and JAX (★14, updated last year)
- Code for the ICLR 2022 paper "Salient ImageNet: How to Discover Spurious Features in Deep Learning?" (★40, updated 2 years ago)
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs) (★215, updated 2 weeks ago)
- (no description) (★10, updated 3 months ago)
- Spurious Features Everywhere: large-scale detection of harmful spurious features in ImageNet (★30, updated last year)
- A toolkit for quantitative evaluation of data attribution methods (★42, updated last week)
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) (★21, updated 2 months ago)
- [ECCV 2024] Characterizing Robustness via Natural Input Gradients (★10, updated 5 months ago)
- Official repo for "Detecting, Explaining, and Mitigating Memorization in Diffusion Models" (ICLR 2024) (★69, updated 11 months ago)
- Aligning human and machine vision using explainability (★51, updated last year)
- Official PyTorch implementation of improved B-cos models (★47, updated last year)
- ImageNet Testbed, associated with the paper "Measuring Robustness to Natural Distribution Shifts in Image Classification" (★118, updated last year)
- What do we learn from inverting CLIP models? (★53, updated last year)
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" (★30, updated this week)
- A simple and efficient baseline for data attribution (★11, updated last year)
- Code for the paper "Ensemble Everything Everywhere: Multi-Scale Aggregation for Adversarial Robustness" (★19, updated 4 months ago)
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" (★22, updated last year)
- (no description) (★39, updated 10 months ago)
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) (★15, updated 11 months ago)
- Crowdsourcing metrics and test datasets beyond ImageNet (ICML 2022 workshop) (★38, updated 10 months ago)