dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A sparse-training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, Sparse Evolutionary Training (SET), which boosts Deep Learning scalability on several fronts: memory and computational efficiency, and representation and generalization power.
☆246 · Updated 4 years ago
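The Sparse Evolutionary Training idea described above alternates ordinary training epochs with a prune-and-regrow step on each sparse layer. Below is a minimal NumPy sketch of that step, assuming magnitude-based pruning of a fraction `zeta` of the active weights followed by uniform random regrowth; the function and parameter names are illustrative, not the repository's API:

```python
import numpy as np

def set_prune_and_regrow(weights, mask, zeta=0.3, rng=None):
    """One SET evolution step (simplified sketch): drop the fraction `zeta`
    of the smallest-magnitude active weights, then regrow the same number
    of connections at randomly chosen inactive positions, so the layer's
    sparsity level stays constant."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    # prune: deactivate the weakest active connections
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_drop]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0
    # regrow: activate an equal number of random dormant connections,
    # initialized with small random values
    dormant = np.flatnonzero(~mask)
    new = rng.choice(dormant, size=n_drop, replace=False)
    mask.flat[new] = True
    weights.flat[new] = rng.normal(0.0, 0.01, size=n_drop)
    return weights, mask
```

Because the number of pruned and regrown connections is equal, the total parameter count stays fixed throughout training, which is what keeps the network "always sparse, never dense".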
Alternatives and similar repositories for sparse-evolutionary-artificial-neural-networks
Users interested in sparse-evolutionary-artificial-neural-networks compare it to the libraries listed below.
- ☆71 · Updated 5 years ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆135 · Updated 6 years ago
- Keras implementation of Legendre Memory Units ☆215 · Updated last month
- Naszilla is a Python library for neural architecture search (NAS) ☆312 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- Evolution Strategy Library ☆55 · Updated 5 years ago
- ☆183 · Updated last year
- Train self-modifying neural networks with neuromodulated plasticity ☆77 · Updated 5 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- A general, modular, and programmable architecture search framework ☆124 · Updated 2 years ago
- Gradient based hyperparameter optimization & meta-learning package for TensorFlow ☆188 · Updated 5 years ago
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆72 · Updated 9 years ago
- Hypergradient descent ☆149 · Updated last year
- End-to-end training of sparse deep neural networks with little-to-no performance loss ☆324 · Updated 2 years ago
- Sample implementation of Neural Ordinary Differential Equations ☆263 · Updated 6 years ago
- ☆78 · Updated 5 years ago
- Example of "biological" learning for MNIST ☆298 · Updated 3 years ago
- Adaptive Neural Trees ☆155 · Updated 6 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆63 · Updated 4 years ago
- ☆152 · Updated 5 years ago
- Implementation of Randomly Wired Neural Networks for Image Recognition, using the CIFAR-10 and CIFAR-100 datasets ☆88 · Updated 6 years ago
- Deep Learning without Weight Transport ☆36 · Updated 5 years ago
- ☆96 · Updated 6 years ago
- Guided Evolutionary Strategies ☆272 · Updated 2 years ago
- ☆116 · Updated last year
- Population Based Training (in PyTorch with sqlite3). Status: Unsupported ☆162 · Updated 7 years ago
- Deep Neural Networks Entropy from Replicas ☆33 · Updated 5 years ago
- Gradient based Hyperparameter Tuning library in PyTorch ☆290 · Updated 5 years ago
- Paper lists and information on the mean-field theory of deep learning ☆78 · Updated 6 years ago
- A Python implementation of various versions of the information bottleneck, including automated parameter search ☆128 · Updated 5 years ago