HEmile / a-nesi
A Scalable Approximate Method for Probabilistic Neurosymbolic Inference
☆15 · Updated 5 months ago
Alternatives and similar repositories for a-nesi
Users who are interested in a-nesi are comparing it to the libraries listed below.
- The Energy Transformer block, in JAX ☆57 · Updated last year
- Codebase for VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming ☆20 · Updated 2 years ago
- Official repository for the paper "Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules" (… ☆22 · Updated last month
- This repository holds the code for the NeurIPS 2022 paper, Semantic Probabilistic Layers ☆30 · Updated last year
- Bayesian model reduction for probabilistic machine learning ☆11 · Updated last week
- How to Turn Your Knowledge Graph Embeddings into Generative Models ☆51 · Updated last year
- Code in support of the paper Continuous Mixtures of Tractable Probabilistic Models ☆11 · Updated 9 months ago
- 🧮 Algebraic Positional Encodings. ☆16 · Updated 6 months ago
- ZeroC is a neuro-symbolic method that, trained with elementary visual concepts and relations, can zero-shot recognize and acquire more com… ☆32 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆39 · Updated last month
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Code for "Bayesian Structure Learning with Generative Flow Networks" ☆87 · Updated 3 years ago
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models. ☆50 · Updated 2 years ago
- ☆10 · Updated 4 years ago
- PyTorch implementation for "Probabilistic Circuits for Variational Inference in Discrete Graphical Models", NeurIPS 2020 ☆17 · Updated 3 years ago
- ☆23 · Updated last year
- Codebase for Neuro-Symbolic Continual Learning. ☆22 · Updated last year
- ☆52 · Updated last year
- Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior.