d909b / ame
Attentive Mixtures of Experts (AMEs) are neural network models that learn to output both accurate predictions and estimates of feature importance for individual samples.
⭐42 · Updated 2 years ago
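For intuition, here is a minimal, hypothetical sketch of the attentive mixture-of-experts idea in PyTorch. This is not the d909b/ame code; the class name, layer sizes, and one-expert-per-feature layout are illustrative assumptions. Each expert processes one input feature, and a softmax attention over the experts' hidden outputs yields both the prediction and a per-sample feature-importance vector:

```python
# Illustrative AME-style sketch; not the d909b/ame implementation.
# All names and sizes here are assumptions, not the repo's API.
import torch
import torch.nn as nn

class AttentiveMixtureOfExperts(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int = 16, num_outputs: int = 1):
        super().__init__()
        # One small expert network per input feature.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden_dim), nn.ReLU())
            for _ in range(num_features)
        )
        self.attention = nn.Linear(hidden_dim, 1)  # scores each expert's output
        self.head = nn.Linear(hidden_dim, num_outputs)

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features); each expert sees its own feature column.
        hidden = torch.stack(
            [expert(x[:, i : i + 1]) for i, expert in enumerate(self.experts)],
            dim=1,
        )  # (batch, num_features, hidden_dim)
        scores = self.attention(hidden).squeeze(-1)  # (batch, num_features)
        importance = torch.softmax(scores, dim=1)    # sums to 1 per sample
        mixed = (importance.unsqueeze(-1) * hidden).sum(dim=1)
        # Return both the prediction and the per-sample feature importances.
        return self.head(mixed), importance

model = AttentiveMixtureOfExperts(num_features=4)
prediction, importance = model(torch.randn(8, 4))
print(prediction.shape, importance.shape)  # torch.Size([8, 1]) torch.Size([8, 4])
```

Because the attention weights are a softmax over experts, the importance scores are nonnegative and sum to one for each sample, which is what lets the same weights serve as per-sample feature attributions.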
Alternatives and similar repositories for ame
Users interested in ame are comparing it to the libraries listed below.
- An Empirical Study of Invariant Risk Minimization ⭐27 · Updated 4 years ago
- Code for "Neural causal learning from unknown interventions" ⭐103 · Updated 4 years ago
- Code for the paper "EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE" ⭐40 · Updated 2 years ago
- ⭐32 · Updated 6 years ago
- SparseMax activation function implementation (ICML 2016) (PyTorch) ⭐27 · Updated 7 years ago
- Reproduction of "Neural Relational Inference for Interacting Systems" in Chainer ⭐34 · Updated 6 years ago
- Source code for Naesseth et al., "Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms" (2017) ⭐39 · Updated 8 years ago
- ⭐65 · Updated 11 months ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. ⭐51 · Updated 3 years ago
- ⭐11 · Updated 7 years ago
- PyTorch implementation for our NAACL 2019 paper "Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling" http… ⭐62 · Updated 5 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ⭐69 · Updated 4 years ago
- Original implementation of the Separated Paths for Local and Global Information framework (SPLIT) in TensorFlow 2. ⭐19 · Updated 2 years ago
- Keras implementation of Deep Wasserstein Embeddings ⭐48 · Updated 7 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ⭐25 · Updated 3 years ago
- NeurIPS 2017 best paper. An interpretable linear-time kernel goodness-of-fit test. ⭐67 · Updated 5 years ago
- Code accompanying our paper at AISTATS 2020 ⭐21 · Updated 4 years ago
- Feature Interaction Interpretability via Interaction Detection ⭐34 · Updated 2 years ago
- [ICLR 2020] FSPool: Learning Set Representations with Featurewise Sort Pooling ⭐42 · Updated last year
- GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction. Thai Le, Suhang Wang, Dongwon … ⭐21 · Updated 4 years ago
- Variational Autoencoders with Gaussian Mixture Latent Space ⭐36 · Updated 7 years ago
- ⭐17 · Updated 6 years ago
- Code for "Generative causal explanations of black-box classifiers" ⭐34 · Updated 4 years ago
- General purpose library for BNNs, and implementation of OC-BNNs in our 2020 NeurIPS paper. ⭐38 · Updated 3 years ago
- Implementation of Information Dropout ⭐39 · Updated 7 years ago
- Variational Auto-encoder with Non-parametric Bayesian Prior ⭐43 · Updated 8 years ago
- Non-Parametric Calibration for Classification (AISTATS 2020) ⭐19 · Updated 3 years ago
- This repository contains the code used in the publication "Active Learning for Decision-Making from Imbalanced Observational Data", Iiris S… ⭐11 · Updated 6 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ⭐131 · Updated 4 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift. ⭐36 · Updated 4 years ago