ml-jku / hopfield-layers
Hopfield Networks is All You Need
☆1,849 · Updated 2 years ago
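For orientation, the repo implements the modern continuous Hopfield network from the paper, whose retrieval update is ξ ← Xᵀ softmax(β X ξ) with the stored patterns stacked as the rows of X. Below is a minimal PyTorch sketch of that update rule; the function name `hopfield_retrieval` is ours for illustration, not the library's API.

```python
import torch

def hopfield_retrieval(patterns, query, beta=8.0, steps=1):
    # Modern Hopfield update from the paper:
    # xi <- X^T softmax(beta * X @ xi), where the rows of
    # `patterns` are the stored patterns X. Retrieval typically
    # converges in a single step for well-separated patterns.
    xi = query
    for _ in range(steps):
        xi = patterns.t() @ torch.softmax(beta * (patterns @ xi), dim=0)
    return xi

# Recover a stored pattern from a noisy query.
stored = torch.randn(16, 64)                   # 16 patterns, dimension 64
noisy = stored[3] + 0.1 * torch.randn(64)
print(torch.allclose(hopfield_retrieval(stored, noisy), stored[3], atol=1e-2))
```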
Alternatives and similar repositories for hopfield-layers
Users interested in hopfield-layers are comparing it to the libraries listed below.
- PyTorch library for fast transformer implementations ☆1,732 · Updated 2 years ago
- Reformer, the efficient Transformer, in PyTorch ☆2,181 · Updated 2 years ago
- Fast and Easy Infinite Neural Networks in Python ☆2,357 · Updated last year
- An implementation of Performer, a linear attention-based transformer, in PyTorch ☆1,148 · Updated 3 years ago
- JAX-based neural network library ☆3,093 · Updated this week
- Long Range Arena for Benchmarking Efficient Transformers ☆763 · Updated last year
- Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch ☆1,171 · Updated 2 years ago
- Toolbox of models, callbacks, and datasets for AI/ML researchers. ☆1,739 · Updated 3 weeks ago
- A Graph Neural Network Library in JAX ☆1,443 · Updated last year
- Type annotations and dynamic checking for a tensor's shape, dtype, names, etc. ☆1,446 · Updated 4 months ago
- PyTorch Lightning code guideline for conferences ☆1,279 · Updated 2 years ago
- ML Collections is a library of Python collections designed for ML use cases. ☆982 · Updated 3 weeks ago
- [NeurIPS'19] Deep Equilibrium Models (see the fixed-point sketch after this list) ☆765 · Updated 3 years ago
- torch-optimizer -- collection of optimizers for PyTorch ☆3,139 · Updated last year
- Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" (see the sketch after this list) ☆1,064 · Updated last year
- higher is a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual training steps ☆1,624 · Updated 3 years ago
- Fast, general, and tested differentiable structured prediction in PyTorch ☆1,115 · Updated 3 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,587 · Updated 5 years ago
- Fast, differentiable sorting and ranking in PyTorch (see the soft-rank sketch after this list) ☆834 · Updated 3 months ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length (see the linear-attention sketch after this list) ☆797 · Updated last year
- Riemannian Adaptive Optimization Methods with PyTorch optim ☆981 · Updated last month
- Structured state space sequence models ☆2,725 · Updated last year
- ☆786 · Updated 3 weeks ago
- maximal update parametrization (µP) ☆1,599 · Updated last year
- disentanglement_lib is an open-source library for research on learning disentangled representations. ☆1,407 · Updated 4 years ago
- ☆381 · Updated last year
- Code for visualizing the loss landscape of neural nets ☆3,066 · Updated 3 years ago
- Implementation of LambdaNetworks, a new approach to image recognition that reaches SOTA with less compute ☆1,530 · Updated 4 years ago
- Tips for releasing research code in Machine Learning (with official NeurIPS 2020 recommendations) ☆2,828 · Updated 2 years ago
- Python 3.8+ toolbox for submitting jobs to Slurm ☆1,503 · Updated 4 months ago
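For the Deep Equilibrium Models entry, the defining idea is that the network's output is a fixed point z* = f(z*, x) of a single layer. Here is a toy forward pass by plain fixed-point iteration, assuming f is contractive; the actual repo solves the root with Broyden's method and backpropagates via implicit differentiation, both omitted here.

```python
import torch

def deq_forward(f, x, z0, max_iter=100, tol=1e-5):
    # Naive fixed-point iteration z <- f(z, x) until convergence.
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol * (torch.norm(z) + 1e-8):
            return z_next
        z = z_next
    return z

# A contractive toy layer, so the iteration provably converges.
w = 0.5 * torch.eye(8)
layer = lambda z, x: torch.tanh(z @ w + x)
z_star = deq_forward(layer, torch.randn(1, 8), torch.zeros(1, 8))
```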
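For the AdaBelief entry, the paper's one-line change to Adam is that the second-moment estimate tracks the squared deviation of the gradient from its EMA prediction, (g − m)², rather than g². A stripped-down sketch of a single step, without bias correction or the paper's extra ε inside the second-moment update; the helper name `adabelief_step` is ours.

```python
import torch

def adabelief_step(param, grad, m, s, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # m: EMA of gradients (as in Adam); s: EMA of (grad - m)^2,
    # the "belief" deviation that replaces Adam's EMA of grad^2.
    m.mul_(b1).add_(grad, alpha=1 - b1)
    s.mul_(b2).addcmul_(grad - m, grad - m, value=1 - b2)
    param.addcdiv_(m, s.sqrt() + eps, value=-lr)
```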
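For the differentiable sorting and ranking entry, the simplest way to see what a "soft rank" is: replace the hard comparisons that define a rank with sigmoids. This generic O(n²) smoothing only illustrates the concept; it is not the O(n log n) projection algorithm the listed repo implements.

```python
import torch

def soft_rank(x, tau=0.1):
    # rank_i = 1 + #{j : x_j < x_i} becomes, with hard steps
    # replaced by sigmoids, 0.5 + sum_j sigmoid((x_i - x_j) / tau).
    diff = x.unsqueeze(-1) - x.unsqueeze(-2)   # pairwise x_i - x_j
    return torch.sigmoid(diff / tau).sum(dim=-1) + 0.5

print(soft_rank(torch.tensor([0.3, -1.0, 2.0]), tau=0.01))  # ~[2., 1., 3.]
```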
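Finally, for the linear-attention transformer entry (and the Performer above): the trick is to drop the softmax and factor attention as φ(Q)(φ(K)ᵀV), so cost grows linearly rather than quadratically in sequence length. A non-causal sketch in the style of Katharopoulos et al.'s elu + 1 feature map; the listed repos each implement their own variants.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (batch, seq, d); v: (batch, seq, e).
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map phi
    kv = torch.einsum('bnd,bne->bde', k, v)      # sum_n phi(k_n) v_n^T: O(n)
    z = torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps   # normalizer
    return torch.einsum('bnd,bde->bne', q, kv) / z.unsqueeze(-1)

out = linear_attention(torch.randn(2, 128, 32),
                       torch.randn(2, 128, 32),
                       torch.randn(2, 128, 64))  # -> (2, 128, 64)
```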