opallab / neural_networks_and_computation
Computational abilities and efficiency of neural networks
⭐54 · Updated last month
Alternatives and similar repositories for neural_networks_and_computation
Users interested in neural_networks_and_computation are comparing it to the libraries listed below.
- ⭐274 · Updated last year
- 🧱 Modula software package ⭐207 · Updated 3 months ago
- The Energy Transformer block, in JAX ⭐59 · Updated last year
- Hierarchical Associative Memory User Experience ⭐101 · Updated this week
- Scalable and Stable Parallelization of Nonlinear RNNs ⭐17 · Updated 5 months ago
- Compositional Linear Algebra ⭐478 · Updated last month
- Implementation of PSGD optimizer in JAX ⭐33 · Updated 6 months ago
- Scalable training and inference for Probabilistic Circuits ⭐69 · Updated last week
- Exact OU processes with JAX ⭐48 · Updated 3 months ago
- Loopy belief propagation for factor graphs on discrete variables in JAX ⭐154 · Updated 9 months ago
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ⭐91 · Updated 4 months ago
- Riemannian Optimization Using JAX ⭐49 · Updated last year
- Second Order Optimization and Curvature Estimation with K-FAC in JAX ⭐281 · Updated this week
- Parameter-Free Optimizers for PyTorch ⭐130 · Updated last year
- This repository contains the official code for Energy Transformer---an efficient Energy-based Transformer variant for graph classificatio… ⭐24 · Updated last year
- seqax = sequence modeling + JAX ⭐165 · Updated last month
- Uncertainty quantification with PyTorch ⭐362 · Updated 3 months ago
- A collection of AWESOME things about information geometry ⭐164 · Updated last year
- A modular, easy to extend GFlowNet library ⭐273 · Updated this week
- PyTorch-like dataloaders for JAX ⭐90 · Updated last month
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ⭐179 · Updated last month
- Generative Flow Networks - GFlowNet ⭐258 · Updated 3 weeks ago
- ⭐51 · Updated last year
- ⭐29 · Updated 3 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable ⭐171 · Updated 2 years ago
- ⭐17 · Updated 10 months ago
- Implementing the RASP transformer programming language (https://arxiv.org/pdf/2106.06981.pdf) ⭐56 · Updated 3 years ago
- Named Tensors for Legible Deep Learning in JAX ⭐188 · Updated this week
- Graphically structured diffusion model ⭐20 · Updated 2 years ago
- JAX/Flax rewrite of Karpathy's nanoGPT ⭐59 · Updated 2 years ago