sc782 / SBM-Transformer
☆13 · Updated 2 years ago
Alternatives and similar repositories for SBM-Transformer:
Users interested in SBM-Transformer are comparing it to the repositories listed below.
- Official Code for Local Search GFlowNets (ICLR 2024 Spotlight) ☆16 · Updated 2 weeks ago
- PyTorch implementation of neural processes and variants ☆27 · Updated 7 months ago
- [NeurIPS'23 Spotlight] Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance (LPS), in PyTorch ☆29 · Updated 11 months ago
- [ICLR 2024 Oral] Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness ☆16 · Updated last year
- ☆15 · Updated 2 years ago
- Code for our TMLR paper "Distributional GFlowNets with Quantile Flows" ☆9 · Updated last year
- Code for GFlowNet-EM, a novel algorithm for fitting latent variable models with compositional latents and an intractable true posterior ☆41 · Updated last year
- ☆24 · Updated 5 months ago
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆38 · Updated 3 weeks ago
- ☆22 · Updated 3 years ago
- Official PyTorch implementation of "Energy-Based Contrastive Learning of Visual Representations", NeurIPS 2022 Oral Paper ☆10 · Updated 2 years ago
- ☆28 · Updated 4 months ago
- Code for "Can We Scale Transformers to Predict Parameters of Diverse ImageNet Models?" [ICML 2023] ☆31 · Updated 6 months ago
- Code for our paper "Generative Flow Networks for Discrete Probabilistic Modeling" ☆82 · Updated 2 years ago
- Bayesian Attention Modules ☆35 · Updated 4 years ago
- ☆15 · Updated last year
- [NeurIPS'20] Code for the Paper Compositional Visual Generation and Inference with Energy Based Models ☆44 · Updated last year
- Code for the paper "What Makes Better Augmentation Strategies? Augment Difficult but Not too Different" (ICLR 22)☆13Updated last year
- ICML 2022: Learning Iterative Reasoning through Energy Minimization ☆45 · Updated 2 years ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆62 · Updated 7 months ago
- [NeurIPS'21] Higher-order Transformers for sets, graphs, and hypergraphs, in PyTorch ☆64 · Updated 2 years ago
- [ICML'21] Improved Contrastive Divergence Training of Energy Based Models ☆62 · Updated 2 years ago
- Official implementation of Transformer Neural Processes ☆71 · Updated 2 years ago
- Official PyTorch implementation of NPwSA: "Neural Processes with Stochastic Attention: Paying more attention to the context dataset (ICLR… ☆10 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- NeurIPS'23: Energy Discrepancies: A Score-Independent Loss for Energy-Based Models ☆14 · Updated 4 months ago
- ☆15 · Updated 10 months ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- [ICML 2024] Recurrent Distance Filtering for Graph Representation Learning ☆14 · Updated 9 months ago