lucidrains / glom-pytorch
An attempt at an implementation of Glom, Geoffrey Hinton's new idea that integrates concepts from neural fields, top-down and bottom-up processing, and attention (consensus between columns), for emergent part-whole hierarchies from data
☆193 · Updated 4 years ago
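The "consensus between columns" idea from the description can be sketched in a few lines: every spatial column holds an embedding, and columns at the same level attend to one another, weighted by embedding similarity. This is a minimal illustrative sketch only, not the glom-pytorch API; the function name, shapes, and scaling are assumptions.

```python
# Hypothetical sketch of attention-based consensus between columns (GLOM-style).
# Not the actual glom-pytorch implementation: names and shapes are illustrative.
import torch
import torch.nn.functional as F

def column_consensus(levels: torch.Tensor) -> torch.Tensor:
    """levels: (batch, num_columns, dim) embeddings at one level.

    Each column is replaced by a similarity-weighted average over all
    columns, so nearby (similar) columns pull toward agreement.
    """
    # Pairwise dot-product similarity between columns: (batch, cols, cols)
    sim = torch.einsum('bid,bjd->bij', levels, levels)
    # Scaled softmax turns similarities into attention weights per column.
    attn = F.softmax(sim / levels.shape[-1] ** 0.5, dim=-1)
    # Weighted average of all column embeddings: (batch, cols, dim)
    return torch.einsum('bij,bjd->bid', attn, levels)

x = torch.randn(2, 16, 64)   # 2 images, 16 columns, 64-dim embeddings
out = column_consensus(x)
assert out.shape == x.shape
```

In the full model this consensus step would be one of several contributions (alongside top-down and bottom-up predictions) combined at each level per time step.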
Alternatives and similar repositories for glom-pytorch
Users interested in glom-pytorch are comparing it to the libraries listed below.
- Self-supervised learning through the eyes of a child ☆142 · Updated 4 years ago
- An implementation of the 2021 paper by Geoffrey Hinton, "How to represent part-whole hierarchies in a neural network", in PyTorch. ☆57 · Updated 4 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- Official PyTorch implementation of the paper "Self-Supervised Relational Reasoning for Representation Learning", NeurIPS 2020 Spotlight. ☆143 · Updated last year
- Official codebase for Pretrained Transformers as Universal Computation Engines. ☆247 · Updated 3 years ago
- ☆47 · Updated 2 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆296 · Updated 3 weeks ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 4 years ago
- Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload ☆129 · Updated 3 years ago
- 🧀 Pytorch code for the Fromage optimiser. ☆127 · Updated last year
- Benchmark for lifelong learning research ☆118 · Updated 4 years ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆208 · Updated last year
- ☆133 · Updated 4 years ago
- ☆67 · Updated 4 years ago
- Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding ☆73 · Updated 3 years ago
- ☆87 · Updated 3 years ago
- A JAX implementation of Broaden Your Views for Self-Supervised Video Learning, or BraVe for short. ☆48 · Updated this week
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated 2 years ago
- Multi-object image datasets with ground-truth segmentation masks and generative factors. ☆273 · Updated 3 years ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides. ☆124 · Updated 3 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated last year
- [CogSci'21] Study of human inductive biases in CNNs and Transformers. ☆43 · Updated 4 years ago
- Pytorch implementation of Compressive Transformers, from Deepmind ☆163 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆146 · Updated 4 years ago
- Official implementation of the paper "Topographic VAEs learn Equivariant Capsules" ☆80 · Updated 3 years ago
- Implementation of a Transformer that ponders, using the scheme from the PonderNet paper ☆80 · Updated 3 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- Labels and other data for the paper "Are we done with ImageNet?" ☆193 · Updated 3 years ago
- Official PyTorch implementation of "SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition" ☆104 · Updated last year