mini-pw / 2023L-ExploratoryDataAnalysis
Introduction to Exploratory Data Analysis: a course for Mathematics and Data Analysis studies, spring 2022/2023
☆16 · Updated 2 years ago
Alternatives and similar repositories for 2023L-ExploratoryDataAnalysis
Users who are interested in 2023L-ExploratoryDataAnalysis are comparing it to the libraries listed below
- [ICCV 2023] Official PyTorch implementation of "A Multidimensional Analysis of Social Biases in Vision Transformers" ☆13 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 · Updated last year
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆18 · Updated 4 months ago
- eXplainable Machine Learning 2023/24 at MIM UW ☆22 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆134 · Updated last year
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. ☆20 · Updated last year
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) ☆17 · Updated last year
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆233 · Updated 2 months ago
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… ☆34 · Updated this week
- (ICML 2023) High Fidelity Image Counterfactuals with Probabilistic Causal Models ☆63 · Updated 6 months ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- Generating and Imputing Tabular Data via Diffusion and Flow XGBoost Models ☆171 · Updated last year
- ☆11 · Updated last year
- Mechanistic understanding and validation of large AI models with SemanticLens ☆37 · Updated 3 weeks ago
- ☆15 · Updated 5 months ago
- Benchmark to Evaluate EXplainable AI ☆20 · Updated 7 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆627 · Updated 2 months ago
- Codebase for evaluation of deep generative models as presented in Exposing flaws of generative model evaluation metrics and their unfair … ☆191 · Updated 7 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- ☆39 · Updated last year
- Simple, compact, and hackable post-hoc deep OOD detection for already trained TensorFlow or PyTorch image classifiers. ☆60 · Updated 3 weeks ago
- Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning)… ☆271 · Updated this week
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆100 · Updated 7 months ago
- Source code of the ROAD benchmark for feature attribution methods (ICML 2022) ☆23 · Updated 2 years ago
- Official implementation of "Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space" ☆170 · Updated last year
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆702 · Updated last year
- [ICLR 2025] TabDiff: a Mixed-type Diffusion Model for Tabular Data Generation ☆108 · Updated 4 months ago
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable". ☆35 · Updated this week
- Open-source framework for uncertainty and deep learning models in PyTorch ☆439 · Updated last week
- ☆10 · Updated last year