IDSIA / modern-srwm
Official repository for the papers "A Modern Self-Referential Weight Matrix That Learns to Modify Itself" (ICML 2022 & NeurIPS 2021 Deep RL Workshop) and "Accelerating Neural Self-Improvement via Bootstrapping" (ICLR 2023 Workshop).
☆172 · Updated 2 weeks ago
Alternatives and similar repositories for modern-srwm
Users interested in modern-srwm are comparing it to the repositories listed below.
- Neural Turing Machines in pytorch ☆48 · Updated 3 years ago
- Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All… ☆169 · Updated 2 years ago
- Hierarchical Associative Memory User Experience ☆100 · Updated last year
- Easy Hypernetworks in Pytorch and Jax ☆101 · Updated 2 years ago
- ☆192 · Updated 2 months ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Neural Networks and the Chomsky Hierarchy ☆205 · Updated last year
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 4 years ago
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- Official Implementation of the ICML 2023 paper: "Neural Wave Machines: Learning Spatiotemporally Structured Representations with Locally … ☆72 · Updated 2 years ago
- The Abstraction and Reasoning Corpus made into a web game ☆89 · Updated 9 months ago
- ☆39 · Updated 3 years ago
- The Energy Transformer block, in JAX ☆58 · Updated last year
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆171 · Updated 2 years ago
- Code for "Meta Learning Backpropagation And Improving It" @ NeurIPS 2021 https://arxiv.org/abs/2012.14905 ☆32 · Updated 3 years ago
- Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc. ☆254 · Updated 3 months ago
- Gaussian-Bernoulli Restricted Boltzmann Machines ☆104 · Updated 2 years ago
- Implementation of Hierarchical Transformer Memory (HTM) for Pytorch ☆74 · Updated 3 years ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆239 · Updated 2 years ago
- Image augmentation library for Jax ☆39 · Updated last year
- Differentiable Algorithms and Algorithmic Supervision. ☆116 · Updated last year
- Automatic gradient descent ☆208 · Updated 2 years ago
- ☆17 · Updated 10 months ago
- Code for the paper "Predictive Coding Approximates Backprop along Arbitrary Computation Graphs" ☆156 · Updated 4 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆49 · Updated 2 weeks ago
- ☆67 · Updated 3 years ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- Sequence Modeling with Structured State Spaces ☆64 · Updated 2 years ago
- Running Jax in PyTorch Lightning ☆102 · Updated 6 months ago
- ☆56 · Updated 2 years ago