srush / parallax
☆153 · Updated 5 years ago
Alternatives and similar repositories for parallax
Users interested in parallax are comparing it to the libraries listed below.
- Mixture Density Networks (Bishop, 1994) tutorial in JAX ☆60 · Updated 5 years ago
- ☆70 · Updated last year
- 🧀 PyTorch code for the Fromage optimiser. ☆128 · Updated last year
- Framework-agnostic library for checking array/tensor shapes at runtime. ☆46 · Updated 4 years ago
- 👩 PyTorch and JAX code for the Madam optimiser. ☆52 · Updated 4 years ago
- A Pytree Module system for Deep Learning in JAX ☆214 · Updated 2 years ago
- Tensor Shape Annotation Library (numpy, tensorflow, pytorch, ...) ☆266 · Updated 5 years ago
- Python implementation of GLN in different frameworks ☆97 · Updated 5 years ago
- ☆158 · Updated last year
- ☆13 · Updated 5 years ago
- Experiment orchestration ☆102 · Updated 5 years ago
- ☆108 · Updated 2 years ago
- A selection of neural network models ported from torchvision for JAX & Flax. ☆44 · Updated 2 months ago
- Neural Turing Machines in PyTorch ☆48 · Updated 3 years ago
- Pip-installable differentiable stacks in PyTorch! ☆65 · Updated 4 years ago
- Normalizing Flows in JAX ☆107 · Updated 5 years ago
- Code for Neural Arithmetic Units (ICLR) and Measuring Arithmetic Extrapolation Performance (SEDL|NeurIPS) ☆146 · Updated 4 years ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- Configure Python functions explicitly and safely ☆127 · Updated 10 months ago
- Lightweight interface to AWS ☆47 · Updated 5 years ago
- Results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆184 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- Annotating tensor shapes using Python types ☆158 · Updated 2 years ago
- Implementation of Model-Agnostic Meta-Learning (MAML) in JAX ☆190 · Updated 3 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- Code for the NeurIPS 2019 paper "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes…" ☆247 · Updated 5 years ago
- Functional machine learning for fun ☆85 · Updated 4 years ago
- ☆100 · Updated 3 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 5 years ago
- Docs ☆143 · Updated 10 months ago
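Several entries above (the runtime shape-checking and tensor-shape-annotation libraries) tackle the same problem parallax explores: catching array shape mismatches as early as possible. A minimal sketch of the underlying idea, using plain NumPy and a hypothetical `check_shapes` decorator — this illustrates the general technique, not the API of any library listed here:

```python
import numpy as np

def check_shapes(**specs):
    """Check named keyword arguments' shapes at call time.

    Each spec is a tuple of dimension names (strings) or fixed sizes (ints).
    Named dimensions must bind to the same size everywhere they appear;
    integer dimensions must match exactly.
    """
    def decorator(fn):
        def wrapper(**kwargs):
            bound = {}  # dimension name -> concrete size seen so far
            for arg, spec in specs.items():
                shape = np.shape(kwargs[arg])
                if len(shape) != len(spec):
                    raise ValueError(
                        f"{arg}: expected rank {len(spec)}, got {len(shape)}")
                for dim, size in zip(spec, shape):
                    if isinstance(dim, int):
                        if dim != size:
                            raise ValueError(
                                f"{arg}: expected dim {dim}, got {size}")
                    elif bound.setdefault(dim, size) != size:
                        raise ValueError(
                            f"{arg}: dim '{dim}' is {bound[dim]}, got {size}")
            return fn(**kwargs)
        return wrapper
    return decorator

@check_shapes(x=("batch", "features"), w=("features", "hidden"))
def linear(x, w):
    # Shapes are validated before the matmul runs, so a mismatch
    # raises a descriptive ValueError instead of a cryptic one.
    return x @ w

out = linear(x=np.ones((4, 3)), w=np.ones((3, 8)))  # OK: (4, 3) @ (3, 8)
```

Passing `w=np.ones((5, 8))` instead would raise a `ValueError` because the `"features"` dimension was already bound to 3 by `x`. The listed libraries provide richer versions of this check via type annotations or framework hooks.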