jxbz / fromage
🧀 Pytorch code for the Fromage optimiser.
☆124 · Updated last year
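For context, the Fromage update described in the accompanying paper ("On the distance between two neural networks and the stability of learning", Bernstein et al., 2020) rescales each layer's gradient step by the ratio of weight norm to gradient norm, then shrinks by 1/√(1+η²). The repository itself ships a PyTorch `torch.optim` implementation; the NumPy sketch below, with an illustrative function name and a hypothetical zero-norm fallback, is only meant to show the per-layer rule:

```python
import numpy as np

def fromage_step(w, g, lr=0.01):
    """One per-layer Fromage update (sketch, not the repo's API):
    w <- (w - lr * (||w|| / ||g||) * g) / sqrt(1 + lr**2)."""
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(g)
    if w_norm == 0 or g_norm == 0:
        # Degenerate layer (assumed fallback): take a plain scaled step.
        return (w - lr * g) / np.sqrt(1 + lr ** 2)
    return (w - lr * (w_norm / g_norm) * g) / np.sqrt(1 + lr ** 2)

# Example: with w = [3, 4] (norm 5) and g = [1, 0] (norm 1), lr = 0.1,
# the step moves w by lr * 5 along g before the sqrt(1.01) shrink.
w_new = fromage_step(np.array([3.0, 4.0]), np.array([1.0, 0.0]), lr=0.1)
```

The ‖w‖/‖g‖ factor makes the relative size of each layer's update roughly learning-rate-sized regardless of gradient scale, which is the paper's motivation for the method.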
Alternatives and similar repositories for fromage
Users interested in fromage are comparing it to the repositories listed below.
- Codebase for Learning Invariances in Neural Networks ☆95 · Updated 2 years ago
- Pytorch and Jax code for the Madam optimiser ☆51 · Updated 4 years ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆79 · Updated 4 years ago
- Pytorch implementation of Variational Dropout Sparsifies Deep Neural Networks ☆83 · Updated 3 years ago
- ☆78 · Updated 5 years ago
- This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆180 · Updated 3 years ago
- Official code for the Stochastic Polyak step-size optimizer ☆139 · Updated last year
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆107 · Updated last year
- ☆100 · Updated 3 years ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 4 years ago
- Code for: Implicit Competitive Regularization in GANs ☆114 · Updated 3 years ago
- Memory efficient MAML using gradient checkpointing ☆85 · Updated 5 years ago
- ☆133 · Updated 4 years ago
- Experiment code for "Randomized Automatic Differentiation" ☆67 · Updated 4 years ago
- ☆153 · Updated 5 years ago
- Implements stochastic line search ☆118 · Updated 2 years ago
- A library for evaluating representations ☆76 · Updated 3 years ago
- PyTorch-SSO: Scalable Second-Order methods in PyTorch ☆147 · Updated last year
- Structured matrices for compressing neural networks ☆67 · Updated last year
- Neural Turing Machines in pytorch ☆48 · Updated 3 years ago
- Loss Patterns of Neural Networks ☆85 · Updated 3 years ago
- Mixture Density Networks (Bishop, 1994) tutorial in JAX ☆60 · Updated 5 years ago
- Code for the Thermodynamic Variational Objective ☆26 · Updated 3 years ago
- Hessian spectral density estimation in TF and Jax ☆123 · Updated 4 years ago
- Code to reproduce the empirical results in the research paper ☆36 · Updated 3 years ago
- The original code for the paper "How to train your MAML" along with a replication of the original "Model Agnostic Meta Learning" (MAML) p… ☆41 · Updated 4 years ago
- Prescribed Generative Adversarial Networks ☆143 · Updated 4 years ago
- ☆144 · Updated 2 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Network ☆63 · Updated 4 years ago
- Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes… ☆243 · Updated 4 years ago