dcmocanu / sparse-evolutionary-artificial-neural-networks
Always sparse. Never dense. But never say never. A sparse-training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, Sparse Evolutionary Training (SET), which boosts Deep Learning scalability in several respects (e.g. memory and computational-time efficiency, representation and generalization power).
☆242 · Updated 3 years ago
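The core idea behind Sparse Evolutionary Training (SET) is to keep each layer's weight matrix sparse throughout training: after every epoch, the connections whose weights are closest to zero are pruned, and the same number of new connections are regrown at random empty positions. The sketch below illustrates that prune-and-regrow step on a NumPy weight matrix; the function name, the `zeta` fraction, and the initialization scale are illustrative placeholders, not the repository's actual API.

```python
import numpy as np

def set_evolve(weights, zeta=0.3, rng=np.random.default_rng(0)):
    """One illustrative SET evolution step on a sparse weight matrix:
    prune the zeta fraction of smallest-magnitude connections, then
    regrow the same number of connections at random empty positions.
    (Sketch only; not the repository's exact implementation.)"""
    mask = weights != 0
    n_active = int(mask.sum())
    n_prune = int(zeta * n_active)

    # Prune: zero out the active weights closest to zero.
    active_idx = np.flatnonzero(mask)
    smallest = active_idx[np.argsort(np.abs(weights.flat[active_idx]))[:n_prune]]
    weights.flat[smallest] = 0.0

    # Regrow: add the same number of new connections at random
    # currently-empty positions, with small random initial weights.
    empty_idx = np.flatnonzero(weights == 0)
    new_idx = rng.choice(empty_idx, size=n_prune, replace=False)
    weights.flat[new_idx] = rng.normal(scale=0.01, size=n_prune)
    return weights
```

In the SET paper this step is applied per layer after each training epoch, starting from a sparse random (Erdős–Rényi) topology, so the overall number of connections stays constant while the topology adapts to the data.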
Related projects
Alternatives and complementary repositories for sparse-evolutionary-artificial-neural-networks
- ☆70 · Updated 4 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆317 · Updated last year
- Naszilla is a Python library for neural architecture search (NAS) ☆304 · Updated last year
- Hypergradient descent ☆138 · Updated 5 months ago
- Neural Architecture Search with Bayesian Optimisation and Optimal Transport ☆133 · Updated 5 years ago
- ☆218 · Updated 3 months ago
- Code for the neural architecture search methods contained in the paper Efficient Forward Neural Architecture Search ☆110 · Updated last year
- ☆143 · Updated last year
- Keras implementation of Legendre Memory Units ☆210 · Updated 3 months ago
- A general, modular, and programmable architecture search framework ☆121 · Updated last year
- An implementation of KFAC for TensorFlow ☆197 · Updated 2 years ago
- Gradient-based hyperparameter optimization & meta-learning package for TensorFlow ☆186 · Updated 4 years ago
- ☆182 · Updated 3 months ago
- Gradient-based hyperparameter tuning library in PyTorch ☆288 · Updated 4 years ago
- PyTorch code for training neural networks without global back-propagation ☆162 · Updated 5 years ago
- Deep Neural Decision Trees ☆159 · Updated 2 years ago
- Python implementation of Gated Linear Networks (GLN) in different frameworks ☆95 · Updated 4 years ago
- Code for Neural Architecture Search without Training (ICML 2021) ☆460 · Updated 3 years ago
- [IJCAI'19, NeurIPS'19] ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs ☆104 · Updated 4 years ago
- Guided Evolutionary Strategies ☆264 · Updated last year
- Sparse learning library and sparse momentum resources. ☆378 · Updated 2 years ago
- BOAH: Bayesian Optimization & Analysis of Hyperparameters ☆67 · Updated 4 years ago
- ☆132 · Updated 7 years ago
- ☆92 · Updated 5 years ago
- Starter kit for the black-box optimization challenge at NeurIPS 2020 ☆113 · Updated 4 years ago
- Experiments with the Direct Feedback Alignment training scheme for DNNs ☆31 · Updated 7 years ago
- Code for the paper: Putting An End to End-to-End: Gradient-Isolated Learning of Representations ☆284 · Updated last year
- Study on the applicability of Direct Feedback Alignment to neural view synthesis, recommender systems, geometric learning, and natural la… ☆84 · Updated 2 years ago
- Functional ANOVA ☆122 · Updated 7 months ago
- Train self-modifying neural networks with neuromodulated plasticity ☆76 · Updated 5 years ago