dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training, to boost Deep Learning scalability in various respects (e.g., memory and computational-time efficiency, representation and generalization power).
☆250 · Updated 4 years ago
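The description names the core algorithm, Sparse Evolutionary Training (SET): train a sparse network normally, then after each epoch remove the weakest fraction of connections and regrow the same number at random empty positions, so the topology evolves while the parameter budget stays fixed. Below is a minimal NumPy sketch of that prune-and-regrow step; the function name, the magnitude-based pruning, and the small-random re-initialization are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch of the SET prune-and-regrow step.
# Names and initialization details are assumptions, not the repo's API.
import numpy as np

def set_prune_and_regrow(weights, mask, zeta=0.3, rng=None):
    """One SET evolution step for a single sparsely connected layer.

    `weights` and `mask` are same-shape arrays; `mask` marks the active
    connections. The fraction `zeta` of active weights closest to zero
    is pruned, and the same number of connections is regrown at randomly
    chosen empty positions, keeping the sparsity level constant.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)          # flat indices of live connections
    n_change = int(zeta * active.size)
    if n_change == 0:
        return weights, mask

    # Prune: drop the n_change active weights with the smallest magnitude.
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_change]]
    mask.flat[pruned] = False
    weights.flat[pruned] = 0.0

    # Regrow: activate the same number of randomly chosen empty positions
    # (a just-pruned slot may be re-selected here; a simplification).
    empty = np.flatnonzero(~mask)
    grown = rng.choice(empty, size=n_change, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(0.0, 0.01, size=n_change)
    return weights, mask

# Example: evolve a 784x100 layer that is ~95% sparse.
rng = np.random.default_rng(0)
m = rng.random((784, 100)) < 0.05
w = np.zeros((784, 100))
w[m] = rng.normal(0.0, 0.1, size=m.sum())
w, m = set_prune_and_regrow(w, m, zeta=0.3, rng=rng)
```

In the full algorithm this step runs after each training epoch on every sparse layer, which is what lets a network start and stay sparse rather than being pruned from a dense one.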
Alternatives and similar repositories for sparse-evolutionary-artificial-neural-networks
Users interested in sparse-evolutionary-artificial-neural-networks are comparing it to the libraries listed below.
- ☆71 · Updated 5 years ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆135 · Updated 6 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆313 · Updated 2 years ago
- ☆144 · Updated 2 years ago
- ☆182 · Updated last year
- Hypergradient descent ☆149 · Updated last year
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- Starter kit for the black-box optimization challenge at NeurIPS 2020 ☆113 · Updated 5 years ago
- Gradient-based hyperparameter optimization & meta-learning package for TensorFlow ☆190 · Updated 5 years ago
- Keras implementation of Legendre Memory Units ☆214 · Updated this week
- A general, modular, and programmable architecture search framework ☆124 · Updated 2 years ago
- Evolution Strategy Library ☆55 · Updated 5 years ago
- BOAH: Bayesian Optimization & Analysis of Hyperparameters ☆67 · Updated 5 years ago
- An implementation of KFAC for TensorFlow ☆198 · Updated 3 years ago
- Gradient-based hyperparameter tuning library in PyTorch ☆290 · Updated 5 years ago
- Adaptive Neural Trees ☆155 · Updated 6 years ago
- Code for experiments on importance sampling for training neural networks ☆330 · Updated 3 years ago
- Paper lists and information on the mean-field theory of deep learning ☆78 · Updated 6 years ago
- pyhessian is a TensorFlow module for estimating Hessian matrices ☆24 · Updated 4 years ago
- Deep Neural Networks Entropy from Replicas ☆33 · Updated 5 years ago
- Guided Evolutionary Strategies ☆272 · Updated 2 years ago
- Visualizing the loss landscape of fully-connected neural networks ☆46 · Updated 2 years ago
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆63 · Updated 4 years ago
- Experiments for the paper "Exponential expressivity in deep neural networks through transient chaos" ☆73 · Updated 9 years ago
- ☆117 · Updated 2 years ago
- [IJCAI'19, NeurIPS'19] ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs ☆106 · Updated 4 years ago
- Explore DNNs via Information ☆265 · Updated 5 years ago
- A Neuromodulated Meta-Learning algorithm ☆117 · Updated 5 years ago
- Example of "biological" learning for MNIST ☆300 · Updated 3 years ago
- Deep Learning without Weight Transport ☆36 · Updated 5 years ago