mim-uw / eXplainableMachineLearning-2024
eXplainable Machine Learning 2023/24 at MIM UW
Alternatives and similar repositories for eXplainableMachineLearning-2024:
Users interested in eXplainableMachineLearning-2024 are comparing it to the repositories listed below
- Introduction to exploratory data analysis course for mathematics and data analysis studies in Spring 2022/2023
- OpenXAI: Towards a Transparent Evaluation of Model Explanations
- Adversarial attacks on explanations and how to defend them
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at CVPR 2024 XAI4CV Works…
- Generating and Imputing Tabular Data via Diffusion and Flow XGBoost Models
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics
- Shapley Interactions and Shapley Values for Machine Learning
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023)
- Create powerful Hydra applications without the YAML files and boilerplate code.
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]
- A toolkit for quantitative evaluation of data attribution methods.
- relplot: Utilities for measuring calibration and plotting reliability diagrams
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
- CoSy: Evaluating Textual Explanations
- Sparse Autoencoder for Mechanistic Interpretability
- A fast, effective data attribution method for neural networks in PyTorch
- Conference schedule, top papers, and analysis of the data for NeurIPS 2023!
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods such as LRP.
- The M2L school 2022 tutorials
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference.
- Influenciae is a TensorFlow Toolbox for Influence Functions
- Xplique is a Neural Network Explainability Toolbox
- Official JAX implementation of xLSTM including fast and efficient training and inference code. 7B model available at https://huggingface.…
- Example of how to use Weights & Biases on Slurm
- Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
- Self-Supervised Learning in PyTorch
- XAI-Bench is a library for benchmarking feature attribution explainability techniques
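Several entries above (the Shapley interactions library and the IJCAI 2022 reading list) center on Shapley values. As a minimal sketch of the underlying idea, not any listed library's API, here is an exact Shapley value computation for a tiny cooperative game; the game `v` and player set are made up for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution of each
    player over all orderings, weighted by coalition size."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy game: worth 10 only when players 0 and 1 cooperate; player 2 is a dummy.
v = lambda S: 10.0 if {0, 1} <= S else 0.0
phi = shapley_values([0, 1, 2], v)  # players 0 and 1 split the value equally
```

Libraries such as shapiq approximate this sum, since the exact version is exponential in the number of players/features.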
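The relplot entry concerns calibration measurement. A common summary statistic behind reliability diagrams is the expected calibration error (ECE); the following is a hedged, minimal binned-ECE sketch in NumPy (bin scheme and example numbers are illustrative, not relplot's implementation):

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - mean confidence|
    over equal-width confidence bins."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece

# Overconfident model: mean confidence 0.85, but only 60% of predictions correct.
ece = expected_calibration_error([0.85] * 5, [1, 1, 1, 0, 0])
```

A reliability diagram plots the same per-bin accuracy-vs-confidence gaps instead of averaging them into one number.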
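Two entries (Zennit and the ICML 2024 LRP work) implement Layer-Wise Relevance Propagation. As a rough illustration of the rule these frameworks apply layer by layer, here is an epsilon-rule LRP backward step through a single linear layer in NumPy; the weights are toy values and the function is not either library's API:

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-9):
    """Epsilon-rule LRP through one linear layer y = x @ W + b:
    redistribute output relevance R_out to inputs in proportion to
    each input's contribution x_i * W_ij to the pre-activation z_j."""
    z = x @ W + b                                        # pre-activations, shape (n_out,)
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised ratio R_j / z_j
    return x * (W @ s)                                   # relevance per input feature

x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5, 0.75]])
b = np.zeros(2)
R_in = lrp_epsilon(x, W, b, R_out=np.array([1.0, 0.0]))
```

With zero bias and small `eps`, the rule is (approximately) conservative: the input relevances sum to the output relevance, which is the property LRP frameworks preserve across whole networks.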