pietrobarbiero / logic_explained_networks
Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models.
☆50 · Updated 2 years ago
Alternatives and similar repositories for logic_explained_networks
Users interested in logic_explained_networks are comparing it to the repositories listed below.
- Codebase for VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming ☆20 · Updated last year
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆55 · Updated 2 years ago
- Updated code base for GlanceNets: Interpretable, Leak-proof Concept-based models ☆25 · Updated last year
- ☆65 · Updated 11 months ago
- This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help … ☆24 · Updated 2 years ago
- A curated collection of papers on probabilistic circuits, computational graphs encoding tractable probability distributions ☆50 · Updated last year
- Code in support of the paper Continuous Mixtures of Tractable Probabilistic Models ☆11 · Updated 8 months ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Code accompanying the paper Meta-Learning to Improve Pre-Training ☆37 · Updated 3 years ago
- Uncertainty in Conditional Average Treatment Effect Estimation ☆33 · Updated 4 years ago
- MDL complexity computations and experiments from the paper "Revisiting complexity and the bias-variance tradeoff" ☆18 · Updated 2 years ago
- Self-Explaining Neural Networks ☆42 · Updated 5 years ago
- This repository holds the code for the NeurIPS 2022 paper Semantic Probabilistic Layers ☆27 · Updated last year
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆41 · Updated 2 years ago
- ☆52 · Updated last year
- Code for the paper Jacobian-based Causal Discovery with Nonlinear ICA, demonstrating how identifiable representations (partic… ☆18 · Updated 9 months ago
- PyTorch Explain: Interpretable Deep Learning in Python ☆156 · Updated last year
- ☆16 · Updated 4 years ago
- How to Turn Your Knowledge Graph Embeddings into Generative Models ☆52 · Updated 11 months ago
- TensorFlow implementation and notebooks for Implicit Maximum Likelihood Estimation ☆67 · Updated 3 years ago
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as for example used for knowledge base comp… ☆21 · Updated 4 years ago
- ZeroC is a neuro-symbolic method that, trained with elementary visual concepts and relations, can zero-shot recognize and acquire more com… ☆32 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated 3 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆34 · Updated last year
- AutoML Two-Sample Test ☆19 · Updated 2 years ago
- Code for Neural Execution Engines: Learning to Execute Subroutines ☆17 · Updated 4 years ago
- Neural Additive Models (Google Research) ☆70 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- Codebase for the paper Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts ☆20 · Updated last year
- ☆11 · Updated 2 years ago