dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to improve Deep Learning scalability in several respects (e.g. memory and computational-time efficiency, representation and generalization power).
☆251 · Updated 4 years ago
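The repository's core idea, Sparse Evolutionary Training (SET), keeps a fixed sparsity level by pruning the weakest connections and regrowing new ones at random after each training epoch. The sketch below illustrates that prune-and-regrow step with NumPy; the function name, the magnitude-based pruning criterion, and the regrowth fraction `zeta` are illustrative assumptions rather than the repository's exact API.

```python
import numpy as np

def set_prune_and_regrow(weights, mask, zeta=0.3, rng=None):
    """One SET-style topology update on a sparse weight matrix (illustrative sketch).

    weights : dense array holding the current weights (zeros where mask == 0)
    mask    : binary array marking which connections currently exist
    zeta    : fraction of existing connections to prune and regrow (assumed hyperparameter)
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1. Prune: drop the zeta fraction of active connections with the
    #    smallest absolute weight.
    active = np.flatnonzero(mask)
    n_update = int(zeta * active.size)
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_update]]
    mask.flat[pruned] = 0
    weights.flat[pruned] = 0.0

    # 2. Regrow: add the same number of new connections at random empty
    #    positions, initialized with small random weights.
    empty = np.flatnonzero(mask == 0)
    regrown = rng.choice(empty, size=n_update, replace=False)
    mask.flat[regrown] = 1
    weights.flat[regrown] = rng.normal(0.0, 0.01, size=n_update)

    return weights, mask
```

In a full training loop this update would be applied to each sparse layer at the end of every epoch, so the number of active connections (and hence the memory footprint) stays constant while the connectivity pattern evolves toward the more useful weights.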
Alternatives and similar repositories for sparse-evolutionary-artificial-neural-networks
Users interested in sparse-evolutionary-artificial-neural-networks are comparing it to the libraries listed below.
- ☆71 · Updated 5 years ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆135 · Updated 6 years ago
- ☆182 · Updated last year
- Gradient based hyperparameter optimization & meta-learning package for TensorFlow ☆190 · Updated 5 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆314 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆136 · Updated 5 years ago
- Starter kit for the black-box optimization challenge at NeurIPS 2020 ☆115 · Updated 5 years ago
- Sample implementation of Neural Ordinary Differential Equations ☆264 · Updated 6 years ago
- Evolution Strategy Library ☆55 · Updated 5 years ago
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆74 · Updated 9 years ago
- explore DNNs via information ☆266 · Updated 5 years ago
- Gradient based Hyperparameter Tuning library in PyTorch ☆291 · Updated 5 years ago
- Hypergradient descent ☆147 · Updated last year
- An implementation of KFAC for TensorFlow ☆198 · Updated 3 years ago
- a Python implementation of various versions of the information bottleneck, including automated parameter searching ☆130 · Updated 5 years ago
- Deep Neural Networks Entropy from Replicas ☆34 · Updated 6 years ago
- Keras implementation of Legendre Memory Units ☆214 · Updated 2 weeks ago
- paper lists and information on mean-field theory of deep learning ☆78 · Updated 6 years ago
- [IJCAI'19, NeurIPS'19] Anode: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs ☆108 · Updated 5 years ago
- A general, modular, and programmable architecture search framework ☆123 · Updated 2 years ago
- Functional ANOVA ☆125 · Updated 8 months ago
- Code for NeurIPS 2019 paper: "Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes… ☆248 · Updated 5 years ago
- Adaptive Neural Trees ☆155 · Updated 6 years ago
- Sparse learning library and sparse momentum resources. ☆384 · Updated 3 years ago
- ☆155 · Updated 5 years ago
- Neural Ordinary Differential Equation ☆103 · Updated 6 years ago
- Train self-modifying neural networks with neuromodulated plasticity ☆78 · Updated 6 years ago
- pyhessian is a TensorFlow module which can be used to estimate Hessian matrices ☆25 · Updated 4 years ago
- BOAH: Bayesian Optimization & Analysis of Hyperparameters ☆67 · Updated 5 years ago