watcl-lab / neural_networks_and_computation
Computational abilities and efficiency of neural networks
⭐56 · Updated 3 months ago
Alternatives and similar repositories for neural_networks_and_computation
Users interested in neural_networks_and_computation are comparing it to the repositories listed below.
- 🧱 Modula software package ⭐303 · Updated 2 months ago
- ⭐285 · Updated last year
- Compositional Linear Algebra ⭐491 · Updated 3 months ago
- Second Order Optimization and Curvature Estimation with K-FAC in JAX. ⭐295 · Updated last week
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ⭐103 · Updated last month
- Implementation of PSGD optimizer in JAX ⭐35 · Updated 10 months ago
- Named Tensors for Legible Deep Learning in JAX ⭐212 · Updated last week
- Hierarchical Associative Memory User Experience ⭐104 · Updated last week
- Pytorch-like dataloaders for JAX. ⭐96 · Updated 5 months ago
- A modular, easy to extend GFlowNet library ⭐295 · Updated last week
- Scalable training and inference for Probabilistic Circuits ⭐86 · Updated last week
- ⭐232 · Updated this week
- Minimal yet performant LLM examples in pure JAX ⭐198 · Updated last month
- JAX Arrays for human consumption ⭐110 · Updated 3 weeks ago
- Turn jitted jax functions back into python source code ⭐22 · Updated 11 months ago
- ⭐17 · Updated last year
- Minimal, lightweight JAX implementations of popular models. ⭐118 · Updated this week
- Parameter-Free Optimizers for Pytorch ⭐131 · Updated last year
- Uncertainty quantification with PyTorch ⭐375 · Updated last month
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ⭐188 · Updated last month
- seqax = sequence modeling + JAX ⭐168 · Updated 3 months ago
- A simple library for scaling up JAX programs ⭐144 · Updated last week
- Jax/Flax rewrite of Karpathy's nanoGPT ⭐62 · Updated 2 years ago
- Exact OU processes with JAX ⭐56 · Updated 7 months ago
- Maximal Update Parametrization (μP) with Flax & Optax. ⭐16 · Updated last year
- Scalable and Stable Parallelization of Nonlinear RNNs ⭐24 · Updated 3 weeks ago
- Agustinus' very opinionated publication-ready plotting library ⭐69 · Updated 6 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ⭐173 · Updated 2 years ago
- LoRA for arbitrary JAX models and functions ⭐142 · Updated last year
- nanoGPT using Equinox ⭐13 · Updated 2 years ago