dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A sparse-training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, Sparse Evolutionary Training (SET), which boosts Deep Learning scalability on several fronts (e.g. memory and computational-time efficiency, representation and generalization power).
☆249 · Updated 4 years ago
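The core SET procedure described above is compact: train a sparsely connected layer for an epoch, remove a fraction ζ (zeta) of the active weights closest to zero, and regrow the same number of connections at random inactive positions, so the sparsity level stays constant while the topology evolves. Below is a minimal NumPy sketch of one such prune-and-regrow step, assuming a single weight matrix with a boolean connectivity mask; the names (`set_step`, `zeta`) are illustrative, not this repository's API.

```python
import numpy as np

def set_step(weights, mask, zeta=0.3, rng=None):
    """Hypothetical SET prune-and-regrow step: drop the fraction `zeta` of
    smallest-magnitude active weights, then regrow the same number of
    connections at random inactive positions (sparsity stays constant)."""
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_swap = int(zeta * active.size)
    # Prune: remove the active connections whose weights are closest to zero.
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_swap]]
    mask.flat[pruned] = False
    weights.flat[pruned] = 0.0
    # Regrow: activate an equal number of randomly chosen inactive positions,
    # initialized near zero so they can be learned from scratch.
    grown = rng.choice(np.flatnonzero(~mask), size=n_swap, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(0.0, 0.01, size=n_swap)
    return weights, mask
```

In the SET paper this step runs after every training epoch on each sparse layer; the random regrowth is what lets the sparse topology adapt to the data instead of staying fixed at initialization.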
Alternatives and similar repositories for sparse-evolutionary-artificial-neural-networks
Users interested in sparse-evolutionary-artificial-neural-networks are comparing it to the libraries listed below.
- ☆71 · Updated 5 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆313 · Updated 2 years ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆135 · Updated 6 years ago
- Evolution Strategy Library ☆55 · Updated 5 years ago
- Keras implementation of Legendre Memory Units ☆215 · Updated 2 months ago
- Gradient-based hyperparameter optimization & meta-learning package for TensorFlow ☆190 · Updated 5 years ago
- Adaptive Neural Trees ☆155 · Updated 6 years ago
- ☆183 · Updated last year
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- Sample implementation of Neural Ordinary Differential Equations ☆262 · Updated 6 years ago
- ☆144 · Updated 2 years ago
- Starter kit for the black-box optimization challenge at NeurIPS 2020 ☆113 · Updated 5 years ago
- Hypergradient descent (see the sketch after this list) ☆149 · Updated last year
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆73 · Updated 9 years ago
- A Python implementation of various versions of the information bottleneck, including automated parameter searching ☆128 · Updated 5 years ago
- An implementation of KFAC for TensorFlow ☆198 · Updated 3 years ago
- Train self-modifying neural networks with neuromodulated plasticity ☆78 · Updated 5 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss ☆326 · Updated 2 years ago
- Deep Neural Networks Entropy from Replicas ☆33 · Updated 5 years ago
- [IJCAI'19, NeurIPS'19] ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs ☆107 · Updated 4 years ago
- A general, modular, and programmable architecture search framework ☆124 · Updated 2 years ago
- Gradient-based hyperparameter tuning library in PyTorch ☆290 · Updated 5 years ago
- Example of "biological" learning for MNIST ☆298 · Updated 3 years ago
- BOAH: Bayesian Optimization & Analysis of Hyperparameters ☆67 · Updated 5 years ago
- ☆153 · Updated 5 years ago
- Functional ANOVA ☆125 · Updated 6 months ago
- Code for experiments regarding importance sampling for training neural networks ☆329 · Updated 3 years ago
- Explore DNNs via information ☆265 · Updated 5 years ago
- ☆78 · Updated 5 years ago
- Paper lists and information on the mean-field theory of deep learning ☆78 · Updated 6 years ago
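Since hypergradient descent appears in the list above, here is what that technique amounts to: treat the learning rate itself as a parameter and update it with the gradient of the loss with respect to it, which for plain SGD reduces to the dot product of consecutive gradients. This is a minimal sketch of the SGD variant from Baydin et al. (2018), not the repository's own API; the names (`sgd_hd`, `beta`) are illustrative.

```python
import numpy as np

def sgd_hd(grad, theta, alpha=0.01, beta=1e-4, steps=100):
    """Hypothetical sketch of SGD with hypergradient descent: adapt the
    learning rate `alpha` online using d(loss)/d(alpha), which for SGD
    equals the negative dot product of the current and previous gradients."""
    g_prev = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta)
        # Gradient step on alpha itself: grow it while successive gradients
        # point the same way, shrink it when they oppose each other.
        alpha += beta * np.dot(g, g_prev)
        theta = theta - alpha * g
        g_prev = g
    return theta, alpha
```

For example, `sgd_hd(lambda t: 2 * t, np.array([5.0]))` minimizes t² while tuning the step size on the fly, trading one hand-picked hyperparameter (alpha) for a much less sensitive one (beta).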