dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to boost Deep Learning scalability in several respects (e.g. memory and computational-time efficiency, representation and generalization power).
☆246 · Updated 3 years ago
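The core idea of Sparse Evolutionary Training (SET) is to keep each layer sparse throughout training by periodically pruning the smallest-magnitude connections and regrowing the same number at random positions. Below is a minimal illustrative sketch of one such prune-and-regrow step in NumPy; the function name, default fraction, and initialization scale are assumptions for illustration, not the repository's actual API.

```python
# Illustrative sketch of one SET prune-and-regrow step (not the repo's code).
import numpy as np

def set_step(weights, mask, prune_fraction=0.3):
    """One SET evolution step on a single layer's weight matrix (in place).

    weights -- dense float array holding the layer's weights (zero where inactive)
    mask    -- binary array of the same shape marking active connections
    """
    # Flat views into the (assumed contiguous) arrays, so writes pass through.
    w, m = weights.ravel(), mask.ravel()
    active = np.flatnonzero(m)
    n_rewire = int(prune_fraction * active.size)

    # Prune: remove the n_rewire active connections with the smallest magnitude.
    smallest = active[np.argsort(np.abs(w[active]))[:n_rewire]]
    m[smallest] = 0
    w[smallest] = 0.0

    # Regrow: activate the same number of currently inactive connections,
    # chosen uniformly at random, initialized with small random weights.
    inactive = np.flatnonzero(m == 0)
    regrown = np.random.choice(inactive, size=n_rewire, replace=False)
    m[regrown] = 1
    w[regrown] = 0.01 * np.random.randn(n_rewire)
    return weights, mask
```

Applied to each sparse layer after every training epoch, a step like this keeps the number of connections constant while the topology evolves toward the more useful weights.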
Alternatives and similar repositories for sparse-evolutionary-artificial-neural-networks
Users interested in sparse-evolutionary-artificial-neural-networks are comparing it to the libraries listed below.
- ☆71 · Updated 5 years ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆134 · Updated 6 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆311 · Updated 2 years ago
- ☆182 · Updated 11 months ago
- Gradient-based hyperparameter optimization & meta-learning package for TensorFlow ☆188 · Updated 5 years ago
- Hypergradient descent ☆149 · Updated last year
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆71 · Updated 9 years ago
- Keras implementation of Legendre Memory Units ☆215 · Updated last week
- ☆144 · Updated 2 years ago
- Adaptive Neural Trees ☆153 · Updated 6 years ago
- Example of "biological" learning for MNIST ☆298 · Updated 3 years ago
- A general, modular, and programmable architecture search framework ☆124 · Updated 2 years ago
- Deep Neural Networks Entropy from Replicas ☆33 · Updated 5 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- Guided Evolutionary Strategies ☆271 · Updated 2 years ago
- Starter kit for the black-box optimization challenge at NeurIPS 2020 ☆113 · Updated 4 years ago
- An implementation of K-FAC for TensorFlow ☆198 · Updated 3 years ago
- Explore DNNs via Information ☆265 · Updated 5 years ago
- Evolution Strategy Library ☆55 · Updated 5 years ago
- [IJCAI'19, NeurIPS'19] ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs ☆106 · Updated 4 years ago
- ☆115 · Updated last year
- Sample implementation of Neural Ordinary Differential Equations ☆263 · Updated 6 years ago
- Code for the paper "Gaussian process behaviour in wide deep networks" ☆46 · Updated 6 years ago
- ☆133 · Updated 7 years ago
- Code for experiments regarding importance sampling for training neural networks ☆329 · Updated 3 years ago
- Paper lists and information on mean-field theory of deep learning ☆78 · Updated 6 years ago
- Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes…" ☆243 · Updated 4 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss ☆323 · Updated 2 years ago
- Hessian in PyTorch ☆187 · Updated 4 years ago
- Train self-modifying neural networks with neuromodulated plasticity ☆77 · Updated 5 years ago